Structural variants exhibit allelic heterogeneity and shape variation in complex traits
Despite extensive effort to reveal the genetic basis of complex phenotypic variation, studies typically explain only a fraction of trait heritability. It has been hypothesized that individually rare hidden structural variants (SVs) could account for a significant fraction of variation in complex traits. To investigate this hypothesis, we assembled 14 Drosophila melanogaster genomes and systematically identified more than 20,000 euchromatic SVs, of which ∼40% are invisible to high specificity short read genotyping approaches. SVs are common in Drosophila genes, with almost one third of diploid individuals harboring an SV in genes larger than 5kb, and nearly a quarter harboring multiple SVs in genes larger than 10kb. We show that SV alleles are rarer than amino acid polymorphisms, implying that they are more strongly deleterious. A number of functionally important genes harbor previously hidden structural variants that likely affect complex phenotypes (e.g., Cyp6g1, Drsl5, Cyp28d1&2, InR, and Gss1&2). Furthermore, SVs are overrepresented in quantitative trait locus candidate genes from eight Drosophila Synthetic Population Resource (DSPR) mapping experiments. We conclude that SVs are pervasive in genomes, are frequently present as heterogeneous allelic series, and can act as rare alleles of large effect.
Introduction
Understanding the molecular basis of heritable variation in complex traits is of central importance to evolution, animal and plant breeding, and medical genetics (Mauricio 2001, Goddard and Hayes 2009, Stranger et al. 2011). Over the last decade, short read (50-150 bp) genomic data appropriate for characterizing SNPs and small indels in non-repetitive genomic regions has accumulated at an exponential rate (Shendure and Ji 2008, Bansal et al. 2010). This in turn has catalyzed hundreds of quantitative trait loci (QTL) mapping and genome wide association (GWAS) studies in model organisms, humans, and agriculturally important animals and plants (Varshney et al. 2009, Davey et al. 2011, Day-Williams and Zeggini 2011). Despite these efforts, for most traits, GWAS hits only explain a small fraction of known trait heritability (Manolio et al. 2009, Eichler et al. 2010). One hypothesis accounting for hidden genetic variation is that individually rare large mutations (>100 bp) that alter genome structure make significant contributions to complex trait variation (Frazer et al. 2009, Eichler et al. 2010). These structural variants (SVs) change the genome via duplication, deletion, transposition, and inversion of sequences. This hypothesis is attractive since rare causative variants are difficult to detect with GWAS (Spencer et al. 2009). Moreover, genotyping approaches based on short reads or microarrays miss a significant number of SVs (Eichler 2016, Chakraborty et al. 2018). Finally, SVs are likely to be both more strongly deleterious and more often deleterious than SNPs (Emerson et al. 2008, Conrad et al. 2010, Cridland et al. 2013, Rogers et al. 2015).
Accurate and gap-free genomes provide a direct and reliable path to comprehensive identification of SVs (Alkan et al. 2011, Huddleston et al. 2017). To achieve this goal, we assembled reference-quality genomes for fourteen geographically diverse Drosophila melanogaster strains (Fig. 1a) using Single Molecule Real Time sequencing (Chakraborty et al. 2016). These assemblies are contiguous and complete (BUSCO 99.9-100%) (Table 1, Fig. 1b, supplementary Table 1), making them comparable to the D. melanogaster reference genome, arguably the best metazoan genome assembly. Thirteen of the fourteen strains are near isogenic founders of the Drosophila Synthetic Population Resource (DSPR) (King et al. 2012), a large set of advanced intercross recombinant inbred lines (RILs) designed to map quantitative trait loci (QTLs). We also assembled the genome of Oregon-R, an outbred stock widely used as a "wild-type" strain both by Drosophila geneticists and by large scale community projects like modENCODE (The modENCODE Consortium et al. 2010, Graveley et al. 2011, Schwartz et al. 2012).
DNA extraction
Genomic DNA was extracted from females following the protocols described in Chakraborty et al. (2016), and the genomic DNA was sheared using 10 plunges of a 21-gauge needle, followed by 10 plunges of a 24-gauge needle (Jensen Global, Santa Barbara).
A SMRTbell template library was prepared following the manufacturer's guidelines and sequenced using P6-C4 chemistry on the Pacific Biosciences RS II platform at the University of California Irvine Genomics High Throughput Facility. The total number of SMRTcells and base pairs sequenced, and the read length metrics for each strain, are given in supplementary Table 6.
Genome assembly
The genomes were assembled following the approach described in Chakraborty et al. (2016). For all calculations of sequence coverage, a genome size of 130 Mbp is assumed (G = 130×10^6 bp). For each strain, we generated a hybrid assembly with DBG2OLC (Ye et al. 2016) using the longest 30X of PacBio reads, and a PacBio-only assembly with canu v1.3 (Koren et al. 2017) (supplementary Table 5). The paired end Illumina reads were obtained from King et al. (King et al. 2012). The hybrid assemblies were merged with the PacBio-only assemblies with quickmerge v0.2 (Chakraborty et al. 2016, Solares et al. 2018) (l = 2Mb, ml = 20000, hco = 5.0, c = 1.5), with the hybrid assembly being used as the query. Because the PacBio-only assembly sizes were closer to the genome size of D. melanogaster, we added the contigs that were present only in the PacBio-only assembly but not the hybrid assembly by performing a second round of quickmerge (Solares et al. 2018). For the second round of quickmerge (l = 5Mb, ml = 20000, hco = 5.0, c = 1.5), the PacBio-only assembly was used as the query and the merged assembly from the first merging round as the reference assembly. The resulting merged assembly was processed with finisherSC to remove redundant sequences and perform additional gap filling using raw reads (Lam et al. 2015). The assemblies were then polished twice with quiver (SMRTanalysis v2.3.0p5) and once with Pilon v1.16 (Walker et al. 2014). For Pilon, we used the same Illumina reads as used for the hybrid assemblies.
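As an illustration of the two merging rounds described above (not the authors' exact commands), the sketch below drives the quickmerge wrapper with the stated parameters; the wrapper name, flag spellings, and file names are assumptions and should be checked against the quickmerge documentation.

import subprocess

def run(cmd):
    # Print and execute one pipeline step; stop on failure.
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Round 1: hybrid assembly as query, PacBio-only assembly as reference
# (l = 2 Mb, ml = 20000, hco = 5.0, c = 1.5). File names are placeholders.
run(["merge_wrapper.py", "hybrid_assembly.fasta", "pacbio_assembly.fasta",
     "-pre", "round1", "-l", "2000000", "-ml", "20000", "-hco", "5.0", "-c", "1.5"])

# Round 2: PacBio-only assembly as query, round-1 merged assembly as reference
# (l = 5 Mb), adding contigs present only in the PacBio-only assembly.
run(["merge_wrapper.py", "pacbio_assembly.fasta", "merged_round1.fasta",
     "-pre", "round2", "-l", "5000000", "-ml", "20000", "-hco", "5.0", "-c", "1.5"])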
Comparative Scaffolding
We scaffolded the contigs for each assembly based on the scaffolds from the reference assembly (Hoskins et al. 2015), following a previously described approach. Briefly, TEs and repeats in the assemblies were masked using RepeatMasker (v4.0.7) and aligned to the repeat-masked chromosome arms (X, 2L, 2R, 3L, 3R, and 4) of the D. melanogaster ISO1 assembly using MUMmer (Kurtz et al. 2004). After filtering of the alignments due to the repeats (delta-filter -1), contigs were assigned to specific chromosome arms on the basis of the mutually best alignment. The scaffolded contigs were joined by 100 Ns, a convention representing assembly gaps.
The unscaffolded sequences were named with a 'U' prefix.
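The final joining step can be sketched as follows; the function and input names are hypothetical, and contigs are assumed to be already ordered and oriented along their assigned arm.

def scaffold_arm(ordered_contigs, gap_size=100):
    """Join ordered, oriented contig sequences assigned to one chromosome arm,
    separating adjacent contigs by a run of Ns marking the assembly gap."""
    return ("N" * gap_size).join(ordered_contigs)

# Toy usage: two placeholder contig sequences joined by a 100-N gap.
arm_2L = scaffold_arm(["ACGTACGTAC" * 5, "TTGACCTTGA" * 4])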
BUSCO analysis
We ran BUSCO (v3.02) (Waterhouse et al. 2017) on the Pilon polished pre-scaffolding assemblies to evaluate the completeness of all the assemblies relative to the ISO1 release 6 (r6.13) assembly. We used both the arthropoda and diptera datasets for the BUSCO evaluation. For the arthropoda database, three orthologs (EOG090X0BNZ, EOG090X0M0J, EOG090X049L) were not found in any of the 15 strains (ISO1, Oregon-R, and 13 DSPR founders). Further inspection of these orthologs revealed that they are present in ISO1 even though the BUSCO analysis misses them when applied to ISO1 (EOG090X0BNZ is CG3223, EOG090X0M0J is Pa1, and EOG090X049L is CG40178).
Consequently, we removed these three genes from consideration as uninformative.
Variant detection
For variant detection, we aligned each DSPR assembly individually to the ISO1 release 6 assembly (release 6.13) (Hoskins et al. 2015) using nucmer (nucmer -maxmatch -noextend) (Kurtz et al. 2004). We identified and classified the variants using SVMU 0.2beta (Structural Variants from MUMmer) (n = 10). SVMU classifies the structural differences between two assemblies as insertions, deletions, duplications, and inversions based on whether the DSPR assemblies have longer, shorter, more copies of, or inverted sequence, respectively, with respect to the reference genome.
The variant calls for individual genomes were combined using bedtools merge (Quinlan 2014) and converted into a vcf file using a custom script (https://github.com/mahulchak/dspr-asm). TE insertions were identified by examining the overlap between RepeatMasker identified TEs and SVMU insertion calls using bedtools, requiring that at least 90% of the RepeatMasker TE annotation overlap with the svmu insertion annotation. 12.8% of SV mutations, for which annotation was complicated by secondary mutations, were flagged as 'complex' (CE=2 in the VCF file).
Additionally, 16.3% of SVs that were located within 5Kb of a complex SV were often part of a complex event and were also assigned a tag (CE=1) to differentiate them from the unambiguously annotated SVs (CE=0).
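The 90% overlap rule used above to call TE insertions can be sketched in a few lines (interval lists and names are hypothetical; the published analysis used bedtools):

def overlap(a, b):
    # Length of the overlap between two half-open intervals (start, end).
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def te_insertion_supported(te, svmu_insertions, frac=0.9):
    """True if at least 90% of the RepeatMasker TE interval is covered by a
    single SVMU insertion call on the same contig."""
    te_len = te[1] - te[0]
    return any(overlap(te, ins) >= frac * te_len for ins in svmu_insertions)

print(te_insertion_supported((1000, 5700), [(900, 6000), (8000, 8100)]))  # True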
Genotype validation
To determine the genotyping error rate, a set of 50 randomly selected simple (CE=0) SVs obtained from SVMU were manually inspected on a UCSC genome browser representation of the multiple genome alignment of the 15 genomes (http://goo.gl/LLpoNH). Furthermore, to estimate the genotyping accuracy of the SVs occurring in the vicinity of complex mutations, where mutation annotation is complicated by alignment ambiguities, we manually inspected 217 SVs occurring within 20Kb of 50 randomly selected complex (CE=2) SVs. Among these, 3/217 and 0/50 SVs were absent in the UCSC browser and therefore are likely mis-annotated by our pipeline. The mis-annotated SVs (insertion in A1 and tandem array CNV in A7) are located in a complex, repetitive, structurally variable genomic region on chromosome 3L (3L:7669500-7679100) (supplementary Fig. 5).
Comparing SV genotypes from de novo assemblies to short read only calls
TE genotypes for the founders (Cridland et al. 2013) were downloaded from flyrils.org and the insertion coordinates were lifted over to the current release (release 6) of the reference genome (Hoskins et al. 2015) using the UCSC liftover tool (Kent et al. 2002). For detection of duplicates, we have previously found that a discordant read pair based method (Pecnv) (Rogers et al. 2014) was comparable to split read mapping (Ye et al. 2009) and more reliable than methods based on coverage alone (Abyzov et al. 2011), so we used Pecnv. Pecnv was run using the settings described previously. Because svmu reports tandem duplicate CNVs as insertions (with appropriate CNV tags to separate them from TE and other insertions) and Pecnv reports the sequence range being duplicated, the SVMU CNV insertion coordinates were extended by 100 bp before the comparison (bedtools intersect) between Pecnv output and svmu output was conducted. The non-TE indel genotypes were obtained from Pindel output (the "LI" and "D" events) using the commands described previously. For determining the population frequency of indel SVs (e.g. the reference FB element in InR), Pindel output based on the alignment bam files was used. We only estimate the false negative rate of short read only callers, but note that these methods also generate false positive SV calls.
Gene expression analysis
The preprocessed expression data for female heads and the IIS/TOR expression data (Stanley et al. 2017) from whole bodies were downloaded from www.flyrils.org. Expression QTL analysis (supplementary Fig. 11) for Cyp28d1 and Gss1 using the head expression data was performed using the R package DSPRqtl following the instructions provided in the manual (DSPRscan, model = gene ~ 1, design = "ABcross"). When expression data for multiple isoforms were present, only expression data for the longest transcript expressed in the head was used. The genotype values at the eQTL were determined using the function DSPRpeaks included in the DSPRqtl package. No eQTL were found for InR, so the genotype values for the InR expression data were obtained by assigning the founder genotypes to the RILs used in the IIS/TOR expression dataset, using the posterior probabilities of the forward-backward decoding of the HMM for the panel B RILs available on www.flyrils.org. Drsl5 expression levels in A4 and A3 were obtained from a publicly available RNAseq dataset (Marriage et al. 2014).
Comparison of site frequency spectra
The histogram of allele frequencies (site frequency spectrum or SFS) was collated for four categories: synonymous SNPs, non-synonymous SNPs, duplicate CNVs, and TE insertions. The frequencies of SNPs were collected from the VCF file (King et al. 2012) using vcftools and bcftools (Danecek et al. 2011, Li 2011). The frequencies of SVs were collected from column 4 of the combined SVMU output for the TE insertions and duplication CNVs from all DSPR strains (https://github.com/mahulchak/dspr-asm).
Complex mutations (CE=1 and CE=2) were excluded from the analysis. Let N be the sample size and x_i be the number of sites in frequency class i, where 0 < i < N. The SFS was "folded", meaning we focused attention on the minor allele frequency (MAF), or y_i = min(x_i, N − x_i). Pairwise comparisons between different SFS site categories were conducted using the χ2 test on allele frequencies and site categories. For allele frequencies, two types of classifications were used: 1) every y_i for 0 < i < N (N−1 df); and 2) singletons versus all other frequency categories, i.e., y_i for i = 1 versus i ≥ 2.
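A minimal sketch of the folding and one pairwise χ2 comparison, with toy allele-count arrays standing in for the real SNP and SV calls:

import numpy as np
from scipy.stats import chi2_contingency

def folded_sfs(alt_counts, n):
    """Fold per-site alternate allele counts into minor allele frequency
    classes y_i = min(x_i, n - x_i) and return the class histogram."""
    maf = np.minimum(alt_counts, n - alt_counts)
    return np.bincount(maf, minlength=n // 2 + 1)[1:]

n = 15                                           # sample size (haploid genomes)
te_counts  = np.array([1, 1, 1, 2, 1, 3, 1])     # toy TE insertion counts
cnv_counts = np.array([1, 2, 2, 4, 1, 5, 7])     # toy duplicate CNV counts

table = np.vstack([folded_sfs(te_counts, n), folded_sfs(cnv_counts, n)])
table = table[:, table.sum(axis=0) > 0]          # drop empty frequency classes
chi2, p, dof, _ = chi2_contingency(table)
print(p)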
Candidate genes associated with mapped QTL
The candidate genes from DSPR QTL papers were selected based on the following criteria: 1) the gene falls within the QTL peak; 2) additional functional data is cited by the authors of the respective study to highlight the gene; 3) the functional information cited by the authors did not use knowledge about structural variation affecting the candidate locus (supplementary Table 3). The additional data can either be expression data collected by the authors or existing functional data known about the genes. Only 44 candidate genes from 8 studies fulfilled these criteria, but 3 among these fell outside the euchromatic boundaries used here (supplementary Table 1). Hence only 41 candidate genes were included in the SV enrichment analysis. Of the 41 candidate genes identified, 10 were at a single locus (GstE1-10). As a result, we carry out our analysis treating GstE1-10 as either a single gene or ten different genes (the qualitative outcome is unchanged). To test if candidate genes are longer than average genes, we considered all genes (supplementary Table 4) as well as the dataset excluding the GstE1-10 genes (supplementary Table 4). The lengths of candidate genes were compared against the rest of the genome using a Mann-Whitney U test.
Candidate gene enrichment analysis
To determine if candidate genes are enriched for SVs relative to the rest of the genome, we analyzed the dataset both without merging and with merging the GstE1-10 genes into a single 13kb SV-burdened locus (supplementary Table 4). A Fisher's Exact Test was applied to the counts in categories of candidate gene vs. rest of the genome and SV-burdened vs. SV-free genes. To account for the lengths of the candidate genes being longer than the rest of the genome, we performed a Monte Carlo resampling of the whole genome according to the histogram of gene sizes in the candidate gene lists (supplementary Table 4). We sampled from the genome by drawing from each gene length bin with a hypergeometric distribution, where n is the number of candidate genes in the candidate bin, K is the number of SV-burdened genes in the genome bin, and N−K is the number of SV-free genes in the genome bin (supplementary Table 4). We then tallied up the number of observed SVs. We repeated this 100,000 times to construct a Monte Carlo distribution of the SV burden expected of genes matching the size distribution observed in the actual candidate genes. This led to simulated size distributions that matched the observed size distributions (every Mann-Whitney U p-value of Monte Carlo sample lengths compared against the observed candidate lengths > 0.1).
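A sketch of the length-matched Monte Carlo resampling; the bin contents below are placeholders, not the published gene counts:

import numpy as np

rng = np.random.default_rng(1)

# Per gene-length bin: (n candidate genes in the bin,
#                       K SV-burdened genes in the genome bin,
#                       N - K SV-free genes in the genome bin)  -- toy values
bins = {"short": (10, 800, 7000), "medium": (12, 1500, 3000), "long": (10, 950, 600)}

def one_replicate():
    # Draw, per bin, how many of n sampled genes are SV-burdened.
    return sum(rng.hypergeometric(K, N_minus_K, n) for n, K, N_minus_K in bins.values())

null = np.array([one_replicate() for _ in range(100_000)])
observed = 16                                  # SV-burdened candidates (from the text)
print((null >= observed).mean())               # Monte Carlo enrichment p-value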
Calculating the SV burden in genes in diploid individuals
In order to calculate the distribution of SV burden expected in diploids, the haploid genotypes of each founder were paired with every other founder, for a total of 78 possible pairings. For each of these diploid pairings, the number of unique SV mutations for each gene in the genome was recorded. A mutation is said to affect a gene if it falls within the gene span, defined as the nucleotides between the start and end coordinates of the gene feature in the Drosophila melanogaster release 6.16 gff file (dos Santos et al. 2015). The number of SV mutations overlapping a gene in a given diploid combination is considered that gene's multiplicity for that combination. Any gene with a multiplicity ≥ 1 for a particular diploid comparison is considered SV-burdened for that diploid.
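The diploid pairing and multiplicity bookkeeping can be sketched as follows, with a toy founder-to-SV table in place of the real genotype calls:

from itertools import combinations

# founder -> gene -> set of SV identifiers overlapping that gene (toy data)
founder_svs = {f"F{i:02d}": {} for i in range(1, 14)}   # 13 founder haploid genomes
founder_svs["F01"] = {"InR": {"sv_a"}}
founder_svs["F05"] = {"InR": {"sv_b", "sv_c"}, "Cyp28d1": {"sv_d"}}
founder_svs["F09"] = {"Drsl5": {"sv_e"}}

def multiplicity(f1, f2):
    """Number of distinct SV mutations per gene in the diploid made of f1 and f2."""
    genes = set(founder_svs[f1]) | set(founder_svs[f2])
    return {g: len(founder_svs[f1].get(g, set()) | founder_svs[f2].get(g, set()))
            for g in genes}

pairings = list(combinations(founder_svs, 2))           # 13 choose 2 = 78 pairings
assert len(pairings) == 78
# A gene is SV-burdened in a diploid if its multiplicity is >= 1.
burdened_per_diploid = [sum(m >= 1 for m in multiplicity(a, b).values())
                        for a, b in pairings]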
Results and Discussion
De novo assembly reveals a large number of previously hidden functionally important SVs: Our assemblies are extremely contiguous, with the majority of each chromosome arm represented by a single contig (Fig. 1b). We also close the two remaining gaps in the major chromosome arms of the euchromatic D. melanogaster reference genome (Chang and Larracuente 2018) in all our assemblies (supplementary Fig. 1-3). We identified SVs by comparing each assembly to the reference ISO1 genome, focusing our attention on large (>100bp) euchromatic SVs (supplementary Table 2), and ignoring heterochromatic regions as they are gene poor (Smith et al. 2007) and require specialized approaches and extensive validation (Khost et al. 2017). Manual inspection of 267 randomly sampled SVs indicates that misannotations are rare (3/267) and occur in ambiguously aligned, structurally complex genomic regions (supplementary Fig. 5; see Methods). We discovered 7,347 TE insertions, 1,178 duplication CNVs, 4,347 indels, and 62 inversions in the 94.5 Mb of euchromatin spanning the five major chromosome arms across the DSPR founders (Fig. 1c-d). Each founder strain exhibits 637 TE insertions, 134 duplications, 694 indels, and 7 inversions on average (Table 2).
A large fraction of the SVs (36% of non-reference TEs, 26% of deletions, 48% of insertions, 60% of duplication CNVs) present in the assemblies eluded detection using high specificity SV genotyping methods employing high coverage paired end Illumina reads (supplementary Fig. 6). Some of these novel events are likely to affect phenotypes. For example, extensive evidence links complex SV alleles of the cytochrome P450 gene Cyp6g1 to varying levels of DDT resistance (Daborn et al. 2002). Despite extensive study of this locus, we discovered three new SV alleles involving TE insertions that likely have different functional consequences (supplementary Fig. 7a-b). Similarly, we discovered a previously hidden tandem duplication of the antifungal, innate immunity gene Drsl5 (Yang et al. 2006) that exhibits >1000 fold higher expression relative to its single copy counterpart in line A4 (supplementary Fig. 8a-b). Read pair orientation methods failed to detect this mutation because one allele bears a 5kb spacer sequence derived from the first exon and intron of Kst inserted between the gene copies (supplementary Fig. 8a).
Another duplicate allele of Drsl5 contains a Tirant LTR retrotransposon inserted into the same spacer sequence (supplementary Fig. 8a). We also easily detect the two SV mutations underlying the D. melanogaster recessive visible genes cinnabar (cn) (Warren et al. 1996) and speck (sp) present in the ISO1 reference genome (dos Santos et al. 2015) (supplementary Fig. 9-10). In the case of sp, a large insertion in the reference genome is mis-annotated as an intron. For cn, a large exonic deletion is not identified as such (dos Santos et al. 2015). Both alleles are likely knock-outs.
SVs are deleterious:
Most TEs and duplicates are present in only one strain (Fig. 1e), indicating that purifying natural selection has prevented them from rising to higher frequencies. The folded site frequency spectra (SFS) of the TEs and CNVs exhibit more rare variants than synonymous and non-synonymous SNPs (Fig. 1e; p-value < 1e-10, χ2 test between frequency classes of SVs and non-synonymous SNPs), suggesting that SVs are on average under purifying selection (Emerson et al. 2008, Cridland et al. 2013, Rogers et al. 2015). Amongst SVs, TEs are more enriched for rare variants than duplicates, indicating that TE insertions are on average more deleterious than CNVs (Fig. 1e; p-value < 1e-10, χ2 test between frequency classes of TEs and CNVs). Under mutation selection balance models (Pritchard 2001), rare deleterious variants (minor allele frequency or MAF <1%) are predicted to contribute significantly to complex trait variation (Gibson 2012), yet are unlikely to be tagged by SNPs typically used in GWAS experiments (Manolio et al. 2009). Consequently, if individually rare SVs underlie complex trait variation, they will often go undetected in association studies (Spencer et al. 2009).
Functional structural variation at mapped QTL:
Segregation of multiple alleles at a causal gene (i.e., allelic heterogeneity) can mislead discovery of causative loci in GWAS experiments, though such genes can be readily identified in multi-parent panels (MPPs) via QTL mapping. However, mapping resolution is often poor, thwarting the identification of causative mutations barring a variant in a candidate gene of obvious functional significance. Yet, putatively causative SVs are often hidden as they disproportionately escape detection by short read sequencing. This limitation can be solved in the DSPR and other MPPs, as de novo assemblies of the founders of the MPP allow genotypes at SVs to be imputed for the lines on which phenotypes are measured (King et al. 2012).
A nicotine resistance mapping study employing the DSPR identified differentially expressed cytochrome P450 genes Cyp28d1 and Cyp28d2 as candidate causative genes at a mapped QTL, but proposed no causative mutations (Marriage et al. 2014). A previous de novo assembly of one DSPR founder strain assembled a resistant allele possessing tandem copies of the Cyp28d1 gene separated by an Accord LTR retrotransposon fragment (Fig. 2a; supplementary Fig. 11).
Our assemblies of the DSPR founder strains revealed a total of seven structurally distinct alleles in this region, including additional candidate resistant alleles harboring gene duplications (Fig. 2a-b). For example, the resistant strain A2 carries a tandem duplication of a 15Kb segment containing both Cyp28d genes. The expression level of Cyp28d1 in the adult female heads of RILs bearing the A2 genotype is highest among all founder genotypes measured (Fig. 2c). Consistent with this, DSPR RILs bearing the A2 genotype show the highest resistance to nicotine toxicity among the A genotype RILs (Marriage et al. 2014) (Fig. 2b). This implies that the extra copies of Cyp28d1 and/or Cyp28d2 account for the increased expression and concomitant resistance to nicotine. Similarly, the B4 allele comprises a tandem duplication of a 6 Kb segment, containing one extra copy of Cyp28d1 and a nearly complete copy of Cyp28d2 (Fig. 2a; supplementary Fig. 11). RILs carrying the B4 genotype at the Cyp28d locus also show high resistance to nicotine, making the duplication a compelling candidate for the causative mutation. On the other hand, in two alleles, TE insertions disrupt Cyp28d gene structure and function. For instance, A1 has the same duplication as B4, but a 4.7Kb F element inserted in the 5th exon disrupts the protein coding sequence of the second Cyp28d1 copy, likely rendering the copy nonfunctional (Fig. 2a; supplementary Fig. 11). Consistent with the hypothesis that the duplication causes increased nicotine resistance, the A1 genotype is more susceptible to nicotine than B4 (Fig. 2b). All of these SV alleles are singletons, and thus represent a hidden allelic series composed of individually rare alleles.
SVs may also affect genes central to life history traits. Expression levels of the insulin signaling pathway genes show substantial variation in F1 hybrids between DSPR panel B RILs and the A4 founder (Stanley et al. 2017). Among these is Insulin Receptor (InR), which plays key roles in several life history traits related to lifespan and is likely a key molecular mediator of the tradeoff between reproductive success and longevity (Tatar et al. 2001, Toivonen and Partridge 2009, Paaby et al. 2014). Amino acid polymorphism in InR evolves under positive selection and some non-synonymous variants affect fecundity and stress response (Paaby et al. 2010, Paaby et al. 2014). Expression variation of InR also affects body size, lifespan, and fecundity (Brogiolo et al. 2001, Rauschenbach et al. 2015), suggesting that natural cis-regulatory variation might also be under selection. We discovered a 215 bp fragment of a DOC6 element within a 2nd intron enhancer (Wei et al. 2016) (Fig. 3b-c) of InR on the AB8 haplotype, and this allele exhibits reduced gene expression relative to reference genotypes (Fig. 3b). This mutation presumably disrupts the enhancer (supplementary Fig. 12), making it a plausible candidate for expression variation in InR. Another founder, A6, carries a 1,042 bp insertion of DMRT1A (LINE) in the 2nd intron and a 946 bp insertion of a fragment of PROTOP in the 3rd intron. Both affect known cis-regulatory elements (Wei et al. 2016) (Fig. 3b-c). Except for A2 and A6, all strains, including ISO1, harbor an FB-NOF element (FB{}1698) inside the first intron of InR (Fig. 3a). Like many genes, the first intron of InR possesses several transcription factor binding sites (TFBS), including those for the factors Nejire and Caudal (Negre et al. 2011) (Fig. 3c). The FB-NOF element is inserted within this dense cluster of TFBS and active enhancer marks (Fig. 3c).
Furthermore, the FB element is segregating at high frequency in the strains discussed here (13/15), a North American population (Mackay et al. 2012) (125/170), and a French population (Pool et al. 2012) (4/9), but is rare in populations derived from D. melanogaster's ancestral range in Africa (Pool et al. 2012, Lack et al. 2015) (Cameroon: 0/10, Rwanda: 1/27, Zambia: 10/139) (Fig. 3d). This raises the possibility that the FB element is more common in temperate cosmopolitan populations, similar to a previously described adaptive amino acid variant in InR (Paaby et al. 2010). In total, InR harbors a remarkable amount of potentially functional structural diversity; including the variants described above, there are 9 TE insertions and two deletions throughout the gene, many of which impinge on candidate regulatory regions or transcribed portions of the gene (Fig. 3a, 3c).
Public resources like modENCODE annotate molecular phenotypes (e.g., RNAseq, ChIPseq, DNAse1HSseq) against reference genomes which are often genetically different than the strains assayed (The modENCODE Consortium et al. 2010, Graveley et al. 2011, Negre et al. 2011, Schwartz et al. 2012). Canton-S (our DSPR founder A1) and Oregon-R are strains commonly used in phenotypic assays (The modENCODE Consortium et al. 2010, Graveley et al. 2011, Schwartz et al. 2012), and we observe SVs segregating between these two strains and the reference (Table 2). Interpretation of functional genomics data such as RNA-seq can be misleading when gene copy number varies between strains. We explored the glutathione synthetase region (containing Gss1 and Gss2), which is just one example among hundreds in modENCODE that likely suffer from misleading annotations. A tandem duplication present in ISO1 has created the two copies Gss1 and Gss2, which are associated with toxin metabolism and linked to tolerance to arsenic (Ortiz et al. 2009) and ethanol induced oxidative stress (Logan-Garbisch et al. 2015). While this duplication segregates at high frequency in DSPR strains (9/13), it is absent in Oregon-R (Fig. 4a) and escapes detection with high specificity short-read methods. As a result, using transcript and ChIP data derived from Oregon-R (as used in modENCODE (Graveley et al. 2011, Schwartz et al. 2012)) results in misleading annotations of the two copies in ISO1. Indeed, among the eight structurally distinct Gs alleles in our dataset, ISO1 is the sole representative of its allele (Fig. 4a). The two most common Gs alleles include one that contains only a single Gs gene (in four strains, including Oregon-R) and one carrying only a tandem duplication, creating the Gss1/Gss2 pair (in five strains, including Samarkand/AB8) (Fig. 4a). The remaining 6 alleles have SV genotypes represented by only a single individual in the sample. Collectively, this sample represents a haplotype network of structural variation involving 5 TE insertions, one duplication, one insertion comprising TE and simple repeats, and two non-TE indels. The single copy allele with a 5' insertion of a 14kb repetitive sequence comprising Nomad retrotransposon fragments exhibits the highest expression, followed by duplicate alleles, whereas single copy alleles and duplicate alleles with intronic TE insertions generally have the lowest expression levels (Fig. 4b).
Although hypotheses employing SVs to explain missing heritability have been proposed (Manolio et al. 2009, Sudmant et al. 2015), the systematic under-identification of SVs via short read- and microarray-based genotyping (Alkan et al. 2011) limits their explanatory power. Using our comprehensive SV map, we measured the prevalence of SVs at the candidate genes reported in 8 complex trait mapping experiments employing the DSPR (supplementary Table 3). We consider only genes in mapped QTLs explicitly cited by the authors of the original QTL studies (supplementary Table 3; see Methods).
In total, we identified 31 candidate genes and a single, tandem array of 10 small genes from the same family (GstE1-10) (Kislukhin et al. 2013), which we consider as a single additional locus with structural variation. Half of these candidates (16/32) possess at least one SV in at least one founder strain, whereas only 23.4% (3,252/13,861) of all D. melanogaster genes harbor SVs (p = 0.001, Fisher's exact test). Although the candidates we tested (supplementary Table 3) are approximately twice as large as the genome-wide average (supplementary Table 4; p = 6.5×10^-5), this enrichment of SVs at candidate genes is not merely a consequence of them being longer. SVs are enriched in 100,000 Monte Carlo samples matching the candidate gene length distribution (p = 0.021; Fig. 5a). These results persist when the GstE genes are considered individually instead of being merged (length p-value = 0.034 and enrichment p-value = 2.9×10^-3; supplementary Table 4).
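For reference, the 2×2 test behind the enrichment p-value can be set up as below; how the published totals partition into the four cells is an assumption (here the 3,252 SV-burdened genes are taken to include the 16 burdened candidates).

from scipy.stats import fisher_exact

candidates_with_sv, candidates_total = 16, 32
genome_with_sv, genome_total = 3252, 13861      # all D. melanogaster genes

# Rows: candidate genes vs. rest of the genome; columns: SV-burdened vs. SV-free.
table = [[candidates_with_sv, candidates_total - candidates_with_sv],
         [genome_with_sv - candidates_with_sv,
          (genome_total - candidates_total) - (genome_with_sv - candidates_with_sv)]]
odds_ratio, p = fisher_exact(table)
print(odds_ratio, p)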
In order to illustrate how common SVs are in the genome, we quantified the per gene SV burden per diploid D. melanogaster individual (Fig. 5b). Across the genome, SVs appear in 9.3% of genes in diploid individuals. Of those, more than a third (34.4%) involve multiple SV mutations (Fig. 5c). One or more SVs burden more than half of genes in and above the 20-35kb range (Fig. 5b). Furthermore, individual genes bearing multiple SVs comprise more than a third of burdened genes between 20kb and 35kb in length and more than half of larger genes. These observations suggest that the contribution of rare SVs of large effect to complex traits could be pervasive.
Conclusion
Despite claims that a significant proportion of complex trait variation in humans, model organisms, and agriculturally important animals and plants is likely due to rare SVs of large effect (Eichler et al. 2010), systematic inquiry into this hypothesis has been impeded by genotyping approaches attuned to SNP detection (Alkan et al. 2011). As reference quality de novo assemblies of population samples for eukaryotic model systems become increasingly cost-effective, methodical evaluation of the contribution of SVs to the genetic architecture of complex traits becomes feasible. Our comprehensive map of SVs in Drosophila provides the means to systematically quantify the contribution of rare SVs to heritable complex trait variation (Fig. 2a, 3a, 4a, 5a). The value of comprehensive SV detection is underscored by the presence of SVs in ~50% of the candidate genes underlying mapped Drosophila QTL, and by the observation that a large fraction of Drosophila genes harbor multiple rare SV alleles. The genomes of humans and agriculturally important plants and animals harbor more SVs than Drosophila, and thus are likely more burdened with genic SVs.
The genetic heterogeneity hypothesis posits that a sizable fraction of human complex disease is associated with an allelic series consisting of individually rare causative mutations at several genes of large effect (McClellan and King 2010). Furthermore, models for complex traits under either stabilizing (Turelli 1984, Johnson and Barton 2005) or purifying selection (Pritchard 2001) with constant mutational input predict the existence of genes segregating several individually rare causative alleles that account for a sizable fraction of complex trait variation. We provide examples of SVs in genes of functional significance, and show that genes harboring SVs are overrepresented in a collection of QTL candidate genes. Hidden SVs are thus examples of collectively common but individually rare deleterious genetic variants predicted under the genetic heterogeneity hypothesis. Future de novo assemblies of other genomes, including humans, model organisms, and agriculturally important species, will quantify the generality of observations from Drosophila.
"year": 2018,
"sha1": "38ba1073e87a8e3a405365e446382ff3f0078899",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-019-12884-1.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "8d64d190cbffad3f90038bcca4c228de117c6a44",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Strictly Implicit Priority Queues: On the Number of Moves and Worst-Case Time
The binary heap of Williams (1964) is a simple priority queue characterized by only storing an array containing the elements and the number of elements $n$ - here denoted a strictly implicit priority queue. We introduce two new strictly implicit priority queues. The first structure supports amortized $O(1)$ time Insert and $O(\log n)$ time ExtractMin operations, where both operations require amortized $O(1)$ element moves. No previous implicit heap with $O(1)$ time Insert supports both operations with $O(1)$ moves. The second structure supports worst-case $O(1)$ time Insert and $O(\log n)$ time (and moves) ExtractMin operations. Previous results were either amortized or needed $O(\log n)$ bits of additional state information between operations.
Introduction
In 1964 Williams presented "Algorithm 232" [12], commonly known as the binary heap. The binary heap is a priority queue data structure storing a dynamic set of n elements from a totally ordered universe, supporting the insertion of an element (Insert) and the deletion of the minimum element (ExtractMin) in worst-case O(log n) time. The binary heap structure is an implicit data structure, i.e., it consists of an array of length n storing the elements, and no information is stored between operations except for the array and the value n. Sometimes data structures storing O(1) additional words are also called implicit. In this paper we restrict our attention to strictly implicit priority queues, i.e., data structures that do not store any additional information than the array of elements and the value n between operations.
Due to the Ω(n log n) lower bound on comparison based sorting, either Insert or ExtractMin must take Ω(log n) time, but not necessarily both. Carlsson et al. [4] presented an implicit priority queue with worst-case O(1) and O(log n) time Insert and ExtractMin operations, respectively. However, the structure is not strictly implicit since it needs to store O(1) additional words. Harvey and Zatloukal [11] presented a strictly implicit priority structure achieving the same bounds, but amortized. No previous strictly implicit priority queue with matching worst-case time bounds is known.

Table 1. Selected previous and new results for implicit priority queues. The bounds are asymptotic, and some are amortized.

                               Insert   ExtractMin   Moves   Strict   Identical elements
  Williams [12]                log n    log n        log n   yes      yes
  Carlsson et al. [4]          1        log n        log n   no       yes
  Edelkamp et al. [6]          1        log n        log n   no       yes
  Harvey and Zatloukal [11]    1        log n        log n   yes      yes
  Franceschini and Munro [8]   log n    log n        1       yes      no
  Section 2                    1        log n        1       yes      yes
  Section 3                    1        log n        log n   yes      no

A measurement often studied in implicit data structures and in-place algorithms is the number of element moves performed during the execution of a procedure. Franceschini showed how to sort n elements implicitly using O(n log n) comparisons and O(n) moves [7], and Franceschini and Munro [8] presented a priority queue supporting Insert and ExtractMin in O(log n) time using O(1) moves per operation (cf. Table 1). For a more thorough survey of previous priority queue results, see [1].
Our Contribution We present two strictly implicit priority queues. The first structure (Section 2) limits the number of moves to O(1) per operation with amortized O(1) and O(log n) time Insert and ExtractMin operations, respectively. However, the bounds are all amortized and it remains an open problem to achieve these bounds in the worst case for strictly implicit priority queues. We note that this structure implies a different way of sorting in-place with O(n log n) comparisons and O(n) moves. The second structure (Section 3) improves over [4,11] by achieving Insert and ExtractMin operations with worst-case O(1) and O(log n) time (and moves), respectively. The structure in Section 3 assumes all elements to be distinct, whereas the structure in Section 2 can also be extended to support identical elements (see the appendix). See Table 1 for a comparison of new and previous results.
Preliminaries We assume the strictly implicit model as defined in [3] where we are only allowed to store the number of elements n and an array containing the n elements. Comparisons are the only allowed operations on the elements. The number n is stored in a memory cell with Θ(log n) bits (word size) and any operation usually found in a RAM is allowed for computations on n and intermediate values. The number of moves is the number of writes to the array storing the elements. That is, swapping two elements costs two moves. A fundamental technique in the implicit model is to encode a 0/1-bit with a pair of distinct elements (x, y), where the pair encodes 1 if x < y and 0 otherwise.
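A minimal sketch of this bit-pair trick, with the pair stored at two adjacent array positions (positions and names are illustrative):

def read_bit(a, i):
    """The pair occupies positions i and i+1; it encodes 1 iff a[i] < a[i+1]."""
    return 1 if a[i] < a[i + 1] else 0

def write_bit(a, i, bit):
    """Setting a bit costs at most two element moves (one swap)."""
    if read_bit(a, i) != bit:
        a[i], a[i + 1] = a[i + 1], a[i]

arr = [7, 3, 9, 12]
write_bit(arr, 0, 1)     # arr becomes [3, 7, 9, 12]
print(read_bit(arr, 0))  # 1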
A binary heap is a complete binary tree structure where each node stores an element and the tree satisfies heap order, i.e., the element at a non-root node is larger than or equal to the element at the parent node. Binary heaps can be generalized to d-ary heaps [10], where the degree of each node is d rather than two. This implies O(log_d n) and O(d log_d n) time for Insert and ExtractMin, respectively, using O(log_d n) moves for both operations.
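For concreteness, a sketch of an array-embedded d-ary min-heap with these bounds (children of node i sit at positions d*i+1, ..., d*i+d); this is a generic textbook structure, not the paper's construction:

class DaryHeap:
    def __init__(self, d):
        self.d, self.a = d, []

    def insert(self, x):
        # Bubble up over O(log_d n) levels, one comparison and move per level.
        a, d = self.a, self.d
        a.append(x)
        i = len(a) - 1
        while i > 0 and a[(i - 1) // d] > a[i]:
            p = (i - 1) // d
            a[i], a[p] = a[p], a[i]
            i = p

    def extract_min(self):
        # Sift down with up to d comparisons per level, O(d log_d n) time.
        a, d = self.a, self.d
        smallest, last = a[0], a.pop()
        if a:
            a[0] = last
            i = 0
            while True:
                kids = range(d * i + 1, min(d * i + d + 1, len(a)))
                c = min(kids, key=a.__getitem__, default=None)
                if c is None or a[i] <= a[c]:
                    break
                a[i], a[c] = a[c], a[i]
                i = c
        return smallest

h = DaryHeap(4)
for x in [5, 1, 9, 3, 7]:
    h.insert(x)
print(h.extract_min(), h.extract_min())  # 1 3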
Amortized O(1) moves
In this section we describe a strictly implicit priority queue supporting amortized O(1) time Insert and amortized O(log n) time ExtractMin. Both operations perform amortized O(1) moves. In Sections 2.1-2.3 we assume elements are distinct. In Appendix A we describe how to handle identical elements.
Overview The basic idea of our priority queue is the following (the details are presented in Section 2.1). The structure consists of four components: an insertion buffer B of size O(log^3 n); m insertion heaps I_1, I_2, ..., I_m, each of size Θ(log^3 n), where m = O(n/log^3 n); a singles structure T of size O(n); and a binary heap Q storing {1, 2, ..., m} (integers encoded by pairs of elements) with the ordering i ≤ j if and only if min I_i ≤ min I_j. Each I_i and B is a (log n)-ary heap of size O(log^3 n). The table below summarizes the performance of each component:

  Structure   Insert time   Insert moves   ExtractMin time   ExtractMin moves
  B, I_i      1             1              log n             1
  Q           log^2 n       log^2 n        log^2 n           log^2 n
  T           log n         1              log n             1

It should be noted that the implicit dictionary of Franceschini and Munro [8] could be used for T, but we will give a more direct solution since we only need the restricted ExtractMin operation for deletions.
The Insert operation inserts new elements into B. If the size of B becomes Θ(log^3 n), then m is incremented by one, B becomes I_m, m is inserted into Q, and B becomes a new empty (log n)-ary heap. An ExtractMin operation first identifies the minimum element in B, Q and T. If the overall minimum element e is in B or T, e is removed from B or T. If the minimum element e resided in I_i, where i is stored at the root of Q, then e and the log^2 n further smallest elements are extracted from I_i (if I_i is not empty) and all except e are inserted into T (T has cheap operations whereas Q does not, thus the expensive operation on Q is amortized over inexpensive ones in T), and i is deleted from and reinserted into Q with respect to the new minimum element in I_i. Finally e is returned.
For the analysis we see that Insert takes O(1) time and moves, except when converting B to a new I_m and inserting m into Q. The O(log^2 n) time and moves for this conversion is amortized over the insertions into B, which becomes amortized O(1), since |B| = Ω(log^2 n). For ExtractMin we observe that an expensive deletion from Q only happens once for every log^2 n-th element from I_i (the remaining ones from I_i are moved to T and deleted from T), and finally if there have been d ExtractMin operations, then at most d + m·log^2 n elements have been inserted into T, with a total cost of O((d + m·log^2 n)·log n) = O(n + d·log n), since m = O(n/log^3 n).
The implicit structure
We now give the details of our representation (see Figure 1). We select one element e_t as our threshold element, and denote elements greater than e_t as dummy elements. The current number of elements in the priority queue is denoted n. We fix an integer N that is an approximation of n, where N ≤ n < 4N and N = 2^j for some j. Instead of storing N, we store a bit r = log n − log N (the change in log n since the last rebuild, 1 bit), encoded by two dummy elements. We can then compute N as N = 2^(log n − r), where log n is the position of the most significant bit in the binary representation of n (which we assume is computable in constant time). The value r is easily maintained: when log n changes, r changes accordingly. We let ∆ = log(4N) = log n + 2 − r, i.e., ∆ bits are sufficient to store an integer in the range 0..n. We let M = 4N/∆^3.
We maintain the invariant that the size of the insertion buffer B satisfies 1 ≤ |B| ≤ 2∆^3, and that B is split into two parts B_1 and B_2, each being ∆-ary heaps (B_2 possibly empty), where |B_1| = min{|B|, ∆^3} and |B_2| = |B| − |B_1|. We use two buffers to prevent expensive operation sequences that alternate inserting and deleting the same element. We store a bit b indicating if B_2 is nonempty, i.e., b = 1 if and only if |B_2| ≠ 0. The bit b is encoded using two dummy elements. The structures I_1, I_2, ..., I_m are ∆-ary heaps storing ∆^3 elements. The binary heap Q is stored using two arrays Q_h and Q_rev, each of a fixed size M ≥ m and storing integers in the range 1..m. Each value in both arrays is encoded using 2∆ dummy elements, i.e., Q is stored using 4M∆ dummy elements. The first m entries of Q_h store the binary heap, whereas Q_rev acts as reverse pointers, i.e., if Q_h[j] = i then Q_rev[i] = j. All operations on a regular binary heap take O(log n) time, but since each "read"/"write" from/to Q needs to decode/encode an integer, the time increases by a factor of 2∆. It follows that Q supports Insert and ExtractMin in O(log^2 n) time, and FindMin in O(log n) time.
We now describe T; we need the following density maintenance result.

Lemma 1 ([2]). There is a dynamic data structure storing n comparable elements in an array of length (1 + ε)n, supporting Insert and ExtractMin in amortized O(log^2 n) time and FindPredecessor in worst-case O(log n) time. FindPredecessor does not modify the array.

Corollary 1. There is a dynamic data structure storing n pairs of a key and an index, supporting Insert and ExtractMin in amortized O(log^3 n) time and FindMin in worst-case O(log n) time.

Proof. We use the structure from Lemma 1 to store pairs of a key and an index, where the index is encoded using 2∆ dummy elements. All additional space is filled with dummy elements. However, comparisons are only made on keys and not indexes, which means we retain O(log n) time for FindMin. Since the stored elements are now an O(∆) = Θ(log n) factor larger, the time for update operations becomes an O(log n) factor slower, giving amortized O(log^3 n) time for Insert and ExtractMin.
The singles structure T intuitively consists of a sorted list of the elements stored in T partitioned into buckets D_1, ..., D_q of size at most ∆^3, where the minimum element e from bucket D_i is stored in a structure S from Corollary 1 as the pair (e, i). Each D_i is stored as a ∆-ary heap of size ∆^3, where empty slots are filled with dummy elements. Recall implicit heaps are complete trees, which means all dummy elements in D_i are stored consecutively after the last non-dummy element. In S we consider pairs (e, i) where e > e_t to be empty spaces.
More specifically, the structure T consists of: q, S, D_1, D_2, ..., D_K, where K = N/(16∆^3) ≥ q is the number of D_i's available. The structure S uses N/(4∆^2) elements and q uses 2∆ elements to encode a pointer. Each D_i uses ∆^3 elements.
The D_i's and S relate as follows. The number of D_i's is at most the maximum number of items that can be stored in S. Let (e, i) ∈ S; then ∀x ∈ D_i : e < x, and furthermore for any (e′, i′) ∈ S with e < e′ we have ∀x ∈ D_i : x < e′. These invariants do not apply to dummy elements.
Operations
For both Insert and ExtractMin we need to know N, ∆, and whether there are one or two insert buffers, as well as their sizes. First r is decoded and we compute ∆ = 2 + msb(n) − r, where msb(n) is the position of the most significant bit in the binary representation of n (indexed from zero). From this we compute N = 2^(∆−2), K = N/(16∆^3), and M = 4N/∆^3. By decoding b we get the number of insert buffers. To find the sizes of B_1 and B_2 we compute the value i_start, which is the index of the first element in I_1. The size of B_1 is computed as follows. If (n − i_start) mod ∆^3 = 0 then |B_1| = ∆^3. If B_2 exists then B_1 starts at n − 2∆^3 and otherwise B_1 starts at n − ∆^3. If B_2 exists and (n − i_start) mod ∆^3 = 0 then |B_2| = ∆^3, otherwise |B_2| = (n − i_start) mod ∆^3. Once all of this information is computed the actual operation can start. If n = N + 1 and an ExtractMin operation is called, then the ExtractMin procedure is executed and afterwards the structure is rebuilt as described in the paragraph below. Similarly, if n = 4N − 1 before an Insert operation, the new element is appended and the data structure is rebuilt.
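The derived quantities can be recomputed from n and the decoded bit r as in the sketch below (a direct transcription of the formulas above; the function name is illustrative):

def derived_parameters(n, r):
    """Recover N, Delta, K and M from the element count n and the stored bit r."""
    msb = n.bit_length() - 1        # position of the most significant bit of n
    delta = 2 + msb - r             # Delta bits suffice for integers in 0..n
    N = 2 ** (delta - 2)            # approximation of n with N <= n < 4N
    K = N // (16 * delta ** 3)      # number of available D_i buckets
    M = 4 * N // delta ** 3         # fixed capacity of the heap Q
    return N, delta, K, M

print(derived_parameters(2_000_000, 1))   # e.g. (524288, 21, 3, 226)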
Insert If |B_1| < ∆^3, the new element is inserted in B_1 by the standard insertion algorithm for ∆-ary heaps. If |B_1| = ∆^3 and |B_2| = 0 and a new element is inserted, the two elements encoding b are swapped to indicate that B_2 now exists.
ExtractMin Searches for the minimum element e are performed in B_1, B_2, S, and Q. If e is in B_1 or B_2 it is deleted, the last element in the array is swapped into the now empty slot, and the usual bubbling for heaps is performed. If B_2 disappears as a result, the bit b is updated accordingly. If B_1 disappears as a result, I_m becomes B_1, and m is removed from Q.
If e is in I_i then i is deleted from Q, e is extracted from I_i, and the last element in the array is inserted in I_i. The ∆^2 smallest elements in I_i are extracted and inserted into the singles structure: for each element a search in S is performed to find the range it belongs to, i.e., the structure D_j it is to be inserted in. Then it is inserted in D_j (replacing a dummy element that is put in I_i, found by binary search). If |D_j| = ∆^3 and q = K, the priority queue is rebuilt. Otherwise, if |D_j| = ∆^3, D_j is split in two by finding the median y of D_j using a linear time selection algorithm [5]. Elements ≥ y in D_j are swapped with the first ∆^3/2 elements in D_q, then D_j and D_q are made into ∆-ary heaps by repeated insertion. Then y is extracted from D_q and (y, q) is inserted in S. The dummy element pushed out of S by y is inserted in D_q. Finally q is incremented and we reinsert i into Q. Note that it does not matter if any of the elements in I_i are dummy elements; the invariants are still maintained.
If (e, i) ∈ S, the last element of the array is inserted into the singles structure, which pushes out a dummy element z. The minimum element y of D_i is extracted and z inserted instead. We replace e by y in S. If y is a dummy element, we update S as if (y, i) was removed. Finally e is returned. Note this might make B_1 or B_2 disappear as a result, and the steps above are executed if needed.
Rebuilding We let the new N = n′/2, where n′ is n rounded to the nearest power of two. Using a linear time selection algorithm [5], we find the element with rank n − i_start; this element is the new threshold element e_t, and it is put in the first position of the array. Following e_t are all the elements greater than e_t, and they are followed by all the elements comparing less than e_t. We make sure to have at least ∆^3/2 elements in B_1 and at most ∆^3/2 elements in B_2, which dictates whether b encodes 0 or 1. The value q is initialized to 1. All the D_i structures are considered empty since they only contain dummy elements. The pointers in Q_h and Q_rev are all reset to the value 0. All the I_i structures as well as B_1 (and possibly B_2) are made into ∆-ary heaps with the usual heap construction algorithm. For each I_j structure, the ∆^2 smallest elements are inserted in the singles structure as described in the ExtractMin procedure, and j is inserted into Q. The structure now satisfies all the invariants.
Analysis
In this subsection we give the analysis that leads to the following theorem.

Theorem 1. There is a strictly implicit priority queue supporting Insert in amortized O(1) time and ExtractMin in amortized O(log n) time, where both operations perform amortized O(1) element moves.

When inserting elements in the singles structure from I_i, the number of elements inserted is ∆^2, and these must first be deleted. From this discussion it is evident that we have saved up Ω(∆^2) moves and Ω(∆^3) time, which pay for the expensive extraction. Finally, if the minimum element was in S, then an extraction on a ∆-ary heap is performed, which takes O(∆) time and O(1) moves, since its height is O(1).
Rebuilding The cost of rebuilding is O(n), due to a selection and building heaps with O(1) height. There are three reasons a rebuild might occur: (i) n became 4N, (ii) n became N − 1, or (iii) an insertion into T would cause q > K. By the choice of N during a rebuild it is guaranteed that in the first and second case at least Ω(N) insertions or extractions occurred since the last rebuild, and we have thus saved up at least Ω(N) time and moves. For the last case we know that each extraction incurs O(1) insertions in the singles structure in an amortized sense. Since the singles structure accommodates Ω(N) elements and a rebuild ensures the singles structure has o(n) non-dummy elements (Lemma 2), at least Ω(N) extractions have occurred, which pay for the rebuild.

Lemma 2. After a rebuild, the singles structure contains o(n) non-dummy elements.

Proof. There are at most n/∆^3 of the I_i structures and ∆^2 elements are inserted in the singles structure from each I_i, thus at most n/∆ = o(n) non-dummy elements reside in the singles structure after a rebuild.
The paragraphs above establish Theorem 1.
Worst case solution
In this section we present a strictly implicit priority queue supporting Insert in worst-case O(1) time and ExtractMin in worst-case O(log n) time (and moves). The data structure requires all elements to be distinct. The main concept used is a variation on binomial trees. The priority queue is a forest of O(log n) such trees. We start with a discussion of the variant we call relaxed binomial trees, then we describe how to maintain a forest of these trees in an amortized sense, and finally we give the deamortization.
Relaxed binomial tree
Binomial trees are defined inductively: a single node is a binomial tree of size one and the node is also the root. A binomial tree of size 2^(i+1) is made by linking two binomial trees T_1 and T_2, both of size 2^i, such that one root becomes the rightmost child of the other root. We lay out in memory a binomial tree of size 2^i by a preorder traversal of the tree where children are visited in order of increasing size, i.e., c_0, c_1, ..., c_(i−1). This layout is also described in [4]. See Figure 2 for an illustration of the layout. In a relaxed binomial tree (RBT) each node stores an element, satisfying the following order: let p be a node with i children, and let c_j be a child of p. Let T_(c_j) denote the set of elements in the subtree rooted at c_j. We have the invariant that the element at c_j is less than either all other elements in T_(c_j) or all elements in the subtrees of the smaller children c_0, ..., c_(j−1) (see Figure 2). In particular we have the requirement that the root must store the smallest element in the tree. In each node we store a flag indicating in which direction the ordering is satisfied. Note that linking two adjacent RBTs of equal size can be done in O(1) time: compare the keys of the two roots, if the lesser is to the right, swap the two nodes and finally update the flags to reflect the changes as just described.
Figure 2. The layout in memory of an RBT and a regular binomial tree is the same. Note here that node 9 has element c and is not the minimum of its subtree because node 11 has element b, but c is the minimum among the subtrees rooted at nodes 2, 3, and 5 (c_0, c_1, and c_2). Note also that node 5 is the minimum of its subtree but not the minimum among the trees rooted at nodes 2 and 3, which means only one state is valid. Finally node 3 is the minimum of both its own subtree and the subtree rooted at node 2, which means both states are valid for that node.

For an unrelated technical purpose we also need to store whether a node is the root of an RBT. This information is encoded using three elements per node (allowing 3! = 6 permutations, and we only need to differentiate between three states per node: "root", "minimum of its own subtree", or "minimum among strictly smaller subtrees").
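The three-element state encoding can be sketched as follows; the particular mapping from permutations to states is an illustrative choice (the elements in a node are assumed distinct):

import itertools

STATES = ["root", "min of own subtree", "min among smaller subtrees"]
PERMS = list(itertools.permutations(range(3)))        # the 3! = 6 orderings

def decode_state(x, y, z):
    """Read the state encoded by the relative order of the node's three elements."""
    order = sorted(range(3), key=[x, y, z].__getitem__)  # indices by increasing value
    ranks = tuple(order.index(i) for i in range(3))      # rank of each stored element
    return STATES[PERMS.index(ranks) // 2]               # two orderings per state

def encode_state(x, y, z, state):
    """Return the three elements rearranged so that they encode the given state."""
    vals = sorted([x, y, z])
    perm = PERMS[STATES.index(state) * 2]
    return tuple(vals[perm[i]] for i in range(3))

triple = encode_state(5, 2, 9, "min of own subtree")   # (5, 2, 9)
print(decode_state(*triple))                            # "min of own subtree"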
To extract the minimum element of an RBT it is replaced by another element. The reason for replacing is that the forest of RBTs is implicitly maintained in an array and elements are removed from the right end, meaning only an element from the last RBT is removed. If the last RBT is of size 1, it is trivial to remove the element. If it is larger, then we decompose it. We first describe how to perform a Decompose operation, which changes an RBT of size 2^i into i structures T_(i−1), ..., T_1, T_0, where |T_j| = 2^j. Then we describe how to perform ReplaceMin, which takes one argument, a new element, and extracts the minimum element from an RBT and inserts the argument in the same structure.
A Decompose procedure essentially reverses insertions. We describe a tail recursive procedure taking as argument a node r. If the structure is of size one, we are done. If the structure is of size 2^i, the (i − 1)th child, c_(i−1), of r is inspected; if it is not the minimum of its own subtree, the elements at c_(i−1) and r are swapped. The (i − 1)th child should now encode "root"; that way we have two trees of size 2^(i−1) and we recurse on the subtree to the right in the memory layout. This procedure terminates in O(i) steps and gives i + 1 structures of sizes 2^(i−1), 2^(i−2), ..., 2, 1, and 1, laid out in decreasing order of size (note there are two structures of size 1). This enables easy removal of a single element.
The ReplaceMin operation works similarly to Decompose, where instead of always recursing on the right, we recurse where the minimum element is the root. When the recursion ends, the minimum element is now in a structure of size 1, which is deleted and replaced by the new element. The decomposition is then reversed by linking the RBTs using the Link procedure. Note it is possible to keep track of which side was recursed on at every level with O(log n) extra bits, i.e., O(1) words. The operation takes O(log n) steps and correctness follows by the Decompose and Link procedures. This concludes the description of RBTs and yields the following theorem.

Theorem 2. A relaxed binomial tree of size 2^i supports Link in O(1) time, and Decompose and ReplaceMin in O(i) time and moves.
How to maintain a forest
As mentioned, our priority queue is a forest of the relaxed binomial trees from Theorem 2. An easy amortized solution is to store one structure of size 3 · 2^j for every set bit j in the binary representation of ⌊n/3⌋. An insertion could then cause O(log n) Link operations, but by an argument similar to that for binary counting, this yields O(1) amortized insertion time. We are aiming for a worst-case constant time solution, so we maintain the invariant that there are at most 5 structures of size 2^i for i = 0, 1, ..., ⌊log n⌋. This enables us to postpone some of the Link operations to appropriate times. We store O(log n) RBTs, but we do not store which sizes we have; this information must be decodable in constant time since we do not allow storing additional words. Recall that we need 3 elements per node in an RBT; thus, in the following, we let n be the number of elements and N = ⌊n/3⌋ be the number of nodes. We say a node is at node position k if the three elements in it are at positions 3k − 2, 3k − 1, and 3k. This means there is a buffer of 0, 1, or 2 elements at the end of the array. When a third element is inserted, the elements in the buffer become an RBT with a single node and the buffer is then empty. If an Insert operation does not create a new node, the new element is simply appended to the buffer. We do not store the structure of the forest (i.e. how many RBTs of size 2^j exist for each j), since that would require additional space. To be able to navigate the forest we need the following two lemmas.

Lemma 3. There is a structure of size 2^i at node positions k, k+1, ..., k+2^i−1 if and only if the node at position k encodes "root", the node at position k + 2^i encodes "root", and the node at position k + 2^{i−1} encodes "not root".
Proof. It is trivially true that the mentioned nodes encode "root", "root", and "not root" if an RBT with 2^i nodes is present in those locations.
We first observe that there cannot be a structure of size 2^{i−1} starting at position k, since that would force the node at position k + 2^{i−1} to encode "root". Also, all structures between k and N must have fewer than 2^i nodes, since both nodes at positions k and k + 2^i encode "root". We now break the analysis into a few cases, and the lemma follows by contradiction. Suppose there is a structure of size 2^{i−2} starting at k; then, for the same reason as before, there cannot be another one of size 2^{i−2}. Similarly, there can be at most one structure of size 2^{i−3} following that structure. We can now bound the total number of nodes from position k onwards as 2^{i−2} + 2^{i−3} + 5 · Σ_{j=0}^{i−4} 2^j = 2^i − 5 < 2^i, which is a contradiction. So there cannot be a structure of size 2^{i−2} starting at position k. Note that there can be at most three structures of size 2^{i−3} starting at position k, and we can again bound the total number of nodes as 3 · 2^{i−3} + 5 · Σ_{j=0}^{i−4} 2^j = 2^i − 5 < 2^i, again a contradiction.

Proof. There are at most 5 · 2^i − 5 nodes in structures of size ≤ 2^{i−1}. All structures of size ≥ 2^i contribute 0 to x; thus the number of nodes in structures with ≤ 2^{i−1} nodes must be x, counting modulo 2^i. This gives exactly the five possibilities for where the first tree of size 2^i can be.
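A sketch of the Lemma 3 test. Here `is_root(k)` is assumed to decode the implicit flag of node position k, and the treatment of positions past the last node N is our own assumption (a tree is also allowed to end flush with the last node):

```python
def rbt_of_size_starts_at(k, i, N, is_root):
    """True iff an RBT with 2**i nodes (i >= 1) occupies node positions k .. k + 2**i - 1."""
    end = k + (1 << i)
    return (is_root(k)
            and (end > N or is_root(end))
            and not is_root(k + (1 << (i - 1))))
```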
We now describe how to perform an ExtractMin. First, if there is no buffer (n mod 3 = 0), Decompose is executed on the smallest structure. We apply Lemma 4 iteratively for i = 0 to ⌊log N⌋ and use Lemma 3 to find structures of size 2^i. If there is a structure, we call the FindMin procedure (i.e. inspect the element of the root node) and remember which structure the minimum element resides in. If the minimum element is in the buffer, it is deleted and the rightmost element is put in the empty position. If there is no buffer, we are guaranteed, due to the first step, that there is a structure with 1 node, which now serves as the buffer. On the structure with the minimum element, ReplaceMin is called with the rightmost element of the array. The running time is O(log n) for finding all the structures, O(log n) for decomposing the smallest structure, and O(log n) for the ReplaceMin procedure; in total, ExtractMin takes O(log n) time.
The Insert procedure is simpler, but the correctness proof is somewhat involved. A new element is inserted into the buffer; if the buffer becomes a node, then the least significant bit i of N is computed. If at least two structures of size 2^i exist (found using the two lemmas above), they are linked and become one structure of size 2^{i+1}.

Lemma 5. The Insert and ExtractMin procedures maintain that at most five structures of size 2^i exist for all i ≤ ⌊log n⌋.
Proof. Let N_{≤i} be the total number of nodes in structures of size ≤ 2^i. Then the following is an invariant for i = 0, 1, ..., ⌊log N⌋:

N_{≤i} + (2^{i+1} − ((N + 2^i) mod 2^{i+1})) ≤ 6 · 2^i − 1.
The invariant states that N_{≤i} plus the number of inserts until we try to link two trees of size 2^i is at most 6 · 2^i − 1. Suppose that a new node is inserted and i is not the least significant bit of N; then N_{≤i} increases by one and so does (N + 2^i) mod 2^{i+1}, which means the invariant is maintained. Suppose that i is the least significant bit of N (i.e. we try to link structures of size 2^i) and there are at least two structures of size 2^i; then the insertion makes N_{≤i} decrease by 2 · 2^i − 1 = 2^{i+1} − 1, while 2^{i+1} − ((N + 2^i) mod 2^{i+1}) increases by 2^{i+1} − 1, since (N + 2^i) mod 2^{i+1} becomes zero, which means the invariant is maintained. Now suppose there is at most one structure of size 2^i and i is the least significant bit of N. We know by the invariant, applied at level i − 1, that N_{≤i−1} ≤ 5 · 2^{i−1} − 1, since (N + 2^{i−1}) mod 2^i = 2^{i−1} when i is the least significant bit of N. Since we assumed there is at most one structure of size 2^i we get that N_{≤i} ≤ N_{≤i−1} + 2^i, and since the second term of the invariant at level i equals 2^{i+1}, the left-hand side is at most 5 · 2^{i−1} + 2^i + 2^{i+1} − 1 = 11 · 2^{i−1} − 1 < 6 · 2^i − 1, so the invariant is maintained. The invariant is also maintained when deleting: for each i where N_{≤i} > 0 before the ExtractMin, N_{≤i} decreases by one. For all i the second term increases by at most one, and possibly decreases by 2^{i+1} − 1. Thus the invariant is maintained for all i where N_{≤i} > 0 before the procedure. If N_{≤i} = 0 before an ExtractMin, we get N_{≤j} = 2^{j+1} − 1 for j ≤ i. Since the second term can contribute at most 2^{j+1}, we get N_{≤j} + (2^{j+1} − ((N + 2^j) mod 2^{j+1})) ≤ 2^{j+1} − 1 + 2^{j+1} ≤ 6 · 2^j − 1; thus the invariant is maintained.
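A sketch of the top-level Insert described above, once the buffer has grown into a full node. For self-containment, `find_two_adjacent_rbts_of_size` below is a naive O(N) stand-in for the O(1) search via Lemmas 3 and 4; as before, one representative element per node and an explicit `state` array replace the implicit encoding.

```python
def find_two_adjacent_rbts_of_size(a, state, i):
    """Naive stand-in: scan the root flags, compute each tree's extent, and
    return the left root position of two adjacent trees of 2**i nodes (or None)."""
    roots = [k for k in range(len(a)) if state[k] == ROOT] + [len(a)]
    sizes = [roots[j + 1] - roots[j] for j in range(len(roots) - 1)]
    for j in range(len(sizes) - 1):
        if sizes[j] == sizes[j + 1] == 1 << i:
            return roots[j]
    return None

def insert_node(a, state, node_elem):
    a.append(node_elem)
    state.append(ROOT)                      # the buffer becomes a new RBT of one node
    N = len(a)                              # number of nodes
    i = (N & -N).bit_length() - 1           # least significant set bit of N
    pos = find_two_adjacent_rbts_of_size(a, state, i)
    if pos is not None:                     # at least two RBTs of 2**i nodes exist
        link(a, state, pos, 1 << i)         # merge them into one RBT of 2**(i+1) nodes
```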
Correctness and running times of the procedures have now been established.
A Handling identical elements in the amortized case
The primary difficulty in handling identical elements is that we lose the ability to encode bits; the goal of this section is to recover that ability. The idea is to let the items stored in the priority queue be pairs of distinct elements, where the key of an item is the lesser element in the pair. In the case where it is not possible to make a sufficient number of pairs of distinct elements, almost all elements are equal, and this is an easy case to handle. Note that many pairs (or all, for that matter) can contain the same elements, but each pair can now encode a bit, which is sufficient for our purposes.
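A sketch of the idea: a pair of two distinct elements encodes one bit "for free", because its key is the smaller element regardless of order. The convention (smaller-first = 0) is our own choice.

```python
def write_bit(pair, bit):
    lo, hi = min(pair), max(pair)
    return (lo, hi) if bit == 0 else (hi, lo)

def read_bit(pair):
    return 0 if pair[0] < pair[1] else 1

def key(pair):
    return min(pair)        # all comparisons between items use the pair's key
```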
The structure is almost the same as before; however, we add a few more things to the picture. As mentioned, we need to use pairs of distinct elements, so we create a mechanism to produce these. Furthermore, we need to do some bookkeeping, such as storing a pointer and being able to compute whether there are enough pairs of distinct elements to actually have a meaningful structure. The changes to the memory layout are illustrated in Figure 3. Modifications. The areas L and B in memory are used to produce pairs of distinct elements. The area p_L is a Gray coded pointer [9] with Θ(log n) pairs, pointing to the beginning of L. The rest of the structure is essentially the same as before, except that instead of storing elements, we now store pairs e = (e_1, e_2), and the key of the pair is e_k = min{e_1, e_2}. All comparisons between items are thus made with the key of the pair. We will refer to the priority queue from Section 2 as PQ.
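For reference, the standard binary/Gray-code conversions behind p_L: consecutive values differ in exactly one Gray bit, so moving the pointer by one position rewrites only one bit (one pair), which is what keeps the amortized number of moves constant.

```python
def to_gray(x):
    return x ^ (x >> 1)

def from_gray(g):
    x = 0
    while g:
        x ^= g
        g >>= 1
    return x

# Example: incrementing the pointer from 11 to 12 flips exactly one encoded bit.
assert bin(to_gray(11) ^ to_gray(12)).count("1") == 1
```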
There are a few minor modifications to PQ. Recall that we needed to simulate empty spaces inside T (specifically in S, see Figure 1). The way we simulated empty spaces was by having elements that compared greater than e t . Now e t is actually a pair, where the minimum element is the threshold element. It might be the case that there are many items comparing equal to e t , which means some would be used to simulate empty spaces and others would be actual elements in PQ and some would be used to encode pointers. This means we need to be able to differentiate these types that might all compare equal to e t . First observe that items used for pointers are always located in positions that are distinguishable from items placed in positions used as actual items. Thus we do not need to worry about confusing those two. Similarly, the "empty" spaces in T are also located in positions that are distinguishable from pointers. Now we only need to be able to differentiate "empty" spaces and occupied spaces where the keys both compare equal to e t . Letting items (i.e. pairs) used as empty spaces encode 1, and the "occupied" spaces encode 0, empty spaces and occupied spaces become differentiable as well. Encoding that bit is possible, since they are not used for encoding anything else.
Since many elements could now be identical we need to decide whether there are enough distinct elements to have a meaningful structure. As an invariant we have that if the two elements in the pair e t = (e t,1 , e t,2 ) are equal then there are not enough elements to make Ω(log n) pairs of distinct elements. The O(log n) elements that are different from the majority are then stored at the end of the array. After every log nth insertion it is easy to check if there are now sufficient elements to make ≥ c log n pairs for some appropriately large and fixed c. When that happens, the structure in Figure 3 is formed, and e t must now contain two distinct elements, with the lesser being the threshold key. Note also, that while e t,1 = e t,2 an ExtractMin procedure simply needs to scan the last < c log n elements and possibly make one swap to return the minimum and fill the empty index.
Insert. The structure B is a list of single elements which functions as an insertion buffer; that is, elements are simply appended to B when inserted. Whenever n mod ⌈log n⌉ = 0, a procedure making pairs is run: at this point we have time to decode p_L, and up to O(log n) new pairs can be made using L and B. To make pairs, B is read; all elements in B that are equal to elements in L are put after L, and the rest of the elements in B are used to create pairs using one element from L and one element from B. If there are more elements in B, they can be used to make pairs on their own. These pairs are then inserted into PQ. To make room for the newly inserted pairs, L might have to move right and we might have to update p_L. Since p_L is a Gray coded pointer, we only need as many bit changes as there are pairs inserted into PQ, ensuring O(1) amortized moves. Note that the size of PQ is now the value of p_L, which means all computations involving n for PQ should use p_L instead.
ExtractMin To extract the minimum a search for the minimum is performed in PQ, B and L. If the minimum is in PQ, it is extracted and the other element in the pair is put at the end of B . Now there are two empty positions before L, so the last two elements of L are put there, and the last two elements of B are put in those positions. Note p L also needs to be decremented. If the minimum is in B , it is swapped with the element at position n, and returned. If the minimum is in L, the last element of L is swapped with the element at position n, and it is returned.
Analysis. Firstly, observe that if we can prove that producing pairs uses O(1) amortized moves for both Insert and ExtractMin, and O(1) and O(log n) time respectively, then the rest of the analysis from Section 2.3 carries through. We first analyze Insert and then ExtractMin.
For Insert there are two variations: either append elements to B or clean up B and insert into PQ. Cleaning up B and inserting into PQ is expensive, and we amortize it over the cheap operations. Each operation that just appends to B costs O(1) time and moves. Cleaning up B requires decoding p_L, scanning B, and inserting O(log n) elements into PQ. Note that between two clean-ups either O(log n) elements have been inserted or there has been at least one ExtractMin, so we charge the time there. Since each insertion into PQ takes O(1) time and moves amortized, we get the same bound when performing those insertions. The cost of reading p_L is O(log n), but since we are guaranteed that either Ω(log n) insertions have occurred or at least one ExtractMin operation, we can amortize the reading time. | 2015-05-01T12:40:07.000Z | 2015-05-01T00:00:00.000 | {
"year": 2015,
"sha1": "61c28461c53b567e551b4d0490ec45bdf86aa7d9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1505.00147",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9a38c176bb66eb8e6a52d8d0340cde99d4f3603c",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
3015856 | pes2o/s2orc | v3-fos-license | Calcium Modulation of Ligand Affinity in the Cyclic GMP–gated Ion Channels of Cone Photoreceptors
To investigate modulation of the activation of cGMP-gated ion channels in cone photoreceptors, we measured currents in membrane patches detached from the outer segments of single cones isolated from striped bass retina. The sensitivity of these channels to activation by cGMP depends on the history of exposure to divalent cations of the membrane's cytoplasmic surface. In patches maintained in 20 μM Ca++ and 100 μM Mg++ after excision, the current amplitude dependence on cGMP is well described by a Hill equation with average values of K 1/2, the concentration necessary to activate half the maximal current, of 86 μM and a cooperativity index, n, of 2.57. Exposing the patch to a solution free of divalent cations irreversibly increases the cGMP sensitivity; the average value of K 1/2 shifts to 58.8 μM and n shifts to 1.8. Changes in cGMP sensitivity do not affect other functional parameters of the ion channels, such as the interaction and permeation of mono- and divalent cations. Modulation of cGMP activation depends on the action of an endogenous factor that progressively dissociates from the channel as Ca++ concentration is lowered below 1 μM. The activity of the endogenous modulator is not well mimicked by exogenously added calmodulin, although this protein competes with the endogenous modulator for a common binding site. Thus, the modulation of cGMP affinity in cones depends on the activity of an unidentified molecule that may not be calmodulin.
INTRODUCTION
Rod and cone photoreceptors and olfactory sensory neurons respond to their appropriate stimuli with changes in membrane conductance that reflect the activity of cyclic nucleotide-gated ion channels (Fesenko et al., 1985;Nakamura and Gold, 1987;Haynes and Yau, 1990). In these cells, the stimuli activate enzymatic cascades that ultimately change the cytoplasmic concentration of either cGMP (photoreceptors) (reviewed by Hurley, 1992;Pugh and Lamb, 1993;Baylor, 1996) or cAMP (olfactory neurons) (reviewed by Reed, 1992). Because of similarities in primary structure, the cyclic nucleotidegated channels in the sensory cells have been recognized to be members of the same gene family (for review see Zagotta and Siegelbaum, 1996;Finn et al., 1996;Yau and Chen, 1995). Despite their general structural similarities, the chemical specificity and affinity of the ligand binding sites differ among channels of the various sensory cell types (Scott et al., 1996). For example, the channels in retinal photoreceptors have a 20-to 50-fold higher affinity for cGMP than cAMP, whereas native channels of olfactory neurons have nearly the same affinity for both nucleotides (see review in Zagotta and Siegelbaum, 1996). Even in the same receptor cell type, the nucleotide affinity of the channels varies from species to species (reviewed in Zagotta and Siegelbaum, 1996), presumably optimized to match the nucleotide cytoplasmic concentration in the cell expressing that specific channel. Furthermore, recent experiments have shown that the ligand affinity of the channels is not static, but changes as a function of Ca ϩϩ concentration (reviewed in Molday, 1996). The physiological significance of this Ca ϩϩ dependence is not fully understood. However, it is likely to play an important role in the adaptation of the cell's response to changing background levels of stimulation (Kurahashi and Menini, 1997).
The extent to which Ca ϩϩ modulates the cGMP activation differs markedly between olfactory and rod photoreceptor channels. In rat olfactory neurons, for example, affinity for cAMP decreases over 100-fold when Ca ϩϩ is elevated to 200 M Liu et al., 1994;Balasubramanian et al., 1996). Affinity is measured by the value of K 1/2 , the concentration necessary to activate half the maximum ligand-gated current. In rod photoreceptors, the Ca ϩϩ modulation of K 1/2 has been studied in isolated membrane vesicles (Bauer, 1996), detached membrane patches (Gordon et al., 1995), and truncated outer segments (Nakatani et al., 1995;Sagoo and Lagnado, 1996). In all these preparations, the K 1/2 for cGMP activation shifts ف 1.5-fold with a Ca ϩϩ dependence that is half maximal at ف 50 nM. Because this modulation is irreversibly lost in patches and truncated rods after exposure to solutions free of divalent cations, it has been presumed to arise from the action of a soluble, endogenous modulator (Gordon et al., 1995;Nakatani et al., 1995;Sagoo and Lagnado, 1996;Bauer, 1996). The molecular identity of this endogenous modulator in rods is not fully resolved, but the possibility that it is calmodulin has been addressed in a number of recent experiments. Hsu and Molday (1993) first demonstrated that calmodulin can cause a Ca ϩϩ -dependent modulation of K 1/2 in rods. In a manner similar to the endogenous modulator, the maximum Ca ϩϩ /calmodulin-dependent shift in K 1/2 is ف 1.5-fold (Gordon et al., 1995;Kosolapov and Bobkov, 1996;Bauer, 1996). The Ca ϩϩ dependence of the shift in K 1/2 in the presence of added calmodulin has a half-maximum value of ف 50 nM in some studies (Hsu and Molday, 1993;Bauer, 1996) and 450 nM in others (Kosolapov and Bobkov, 1996). These values, however, cannot be compared with those characteristic of the endogenous modulator because they change with the calmodulin concentration used in the experiments (Bauer, 1996). Gordon et al. (1995) and Bauer (1996) have directly investigated whether the endogenous factor and calmodulin are one and the same. In membrane vesicles, there are no recognizable differences in the function of the two molecules (Bauer, 1996). However, in membrane patches, the function of both molecules was found to differ in several respects, suggesting that the modulator in rods may differ from calmodulin (Gordon et al., 1995). In truncated rods, pharmacological blockers of calmodulin do not alter the Ca ϩϩ -dependent modulation of K 1/2 , nor does added calmodulin confer modulation (Sagoo and Lagnado, 1996). Finally, Haynes and Stotz (1997) have found that added calmodulin can modulate K 1/2 in rod membrane patches, but not in those of cones from the same species.
The cyclic nucleotide-gated channels of cone outer segments are structurally homologous to those of rods (Bonigk et al., 1993;Weyand et al., 1994), yet their functional properties differ in subtle but important ways. These functional differences may contribute to explain the differences in transduction between the two receptor types (Korenbrot, 1995). The K 1/2 of cGMP binding is higher in cones (Haynes and Yau, 1990;Picones and Korenbrot, 1992) than in rods (Karpen et al., 1988;Zimmerman and Baylor, 1992;Haynes and Stotz, 1997); the energy of interaction of cations such as Na ϩ and Li ϩ with the channel is higher in cones (Picones and Korenbrot, 1992;Haynes, 1995) than in rods (Menini, 1990;Furman and Tanaka, 1990;Zimmerman and Baylor, 1992), and the blocking effect of l-cis -diltiazem is also different (Haynes, 1992). Of singular functional significance is the fact that the permeability and interaction of Ca ϩϩ with the channels differs between the two photoreceptor types (Korenbrot, 1995).
In particular, the permeability of Ca ϩϩ relative to Na ϩ is higher in cone than in rod channels (Picones and Korenbrot, 1995). This difference is also observed in recombinant channels formed by cone or rod ␣ subunits alone (Frings et al., 1995). Whether there exists a calcium-dependent modulation of ligand binding in cone channels and what features this modulation might have has not been previously investigated. We report here that in membrane patches of cone outer segments, as in those of rods, there exists a Ca ϩϩ -dependent modulation of cGMP activation. This modulation depends on the activity of an endogenous modulator. We further contrast the properties of rod and cone channels with respect to the modulatory effect of added calmodulin. In cones, the endogenous modulator is not well mimicked by calmodulin. m a t e r i a l s a n d m e t h o d s
Solitary photoreceptors were firmly attached to a glass coverslip derivatized with wheat germ agglutinin (Picones and Korenbrot, 1992). The coverslip formed the bottom of a recording chamber held on the stage of an upright microscope equipped with DIC optics and operated under visible light. A suspension of photoreceptors in pyruvate-Ringer's was placed on the coverslip and the cells were allowed to settle down and attach for 5 min. The bath solution was then exchanged with a Ringer's solution of the same composition, but in which pyruvate was isosmotically replaced with glucose.
The recording chamber consisted of two side-by-side compartments. Cells were held in one compartment that was continuously perfused with Ringer's. The second, smaller compartment was continuous with the first one, but a movable barrier could be used to separate them (Picones and Korenbrot, 1992). We used tight-seal electrodes to obtain inside-out membrane fragments detached from the side of the outer segments of either cones or rods. After forming a giga-seal and detaching the membrane fragment, we moved the electrode under the solution surface from the compartment containing the intact cells to the smaller compartment. The two compartments were then separated by the movable barrier and the tip of the electrode was placed within 100 m of the opening of a 300-m diameter glass capil-lary. We used this capillary and a rotary valve to deliver selected test solutions onto the cytoplasmic (outside) surface of the membrane patch.
The speed of change in membrane current in response to changes in test solutions varied from patch to patch. This variability likely reflects differences among patches in the accessibility of their cytoplasmic surface to the bath solution. In our experiments, after each solution change, we monitored membrane current by repeated presentation of voltage steps and waited for the current to reach a stationary amplitude. We present and analyze here only currents in this stationary condition (except for Fig. 5).
Ionic Solutions
In studies of both cone and rod membranes, we filled the tightseal electrodes with the same solution (mM): 157 NaCl, 5 EGTA, 5 EDTA, 10 Hepes, adjusted with NaOH to pH 7.5, osmotic pressure 305 mOsM. Free Ca ϩϩ concentration in this solution was р 10 Ϫ 10 M and total Na ϩ concentration was 167 mM. In all studies, the initial composition of the solution in the small compartment into which we moved the electrode was 157 mM NaCl, 10 mM Hepes, adjusted with NaOH to pH 7.5, and 20 M free Ca ϩϩ (total 7.637), 100 M free Mg ϩϩ (total 2) with 10 mM HEDTA. We elected to use these free divalent cation concentrations because they are sufficiently low to not block the cGMP-gated conductance (Picones and Korenbrot, 1995), yet sufficiently high to saturate the Ca ϩϩ dependence of the phenomena studied here.
We used a titration method to produce solutions with free calcium concentrations between 10 nM and 20 M (Williams and Fay, 1990). This method yields accurate free calcium concentrations without the need to weigh with extreme precision chemicals of known purity. This is particularly important because 100% pure Ca ϩϩ buffering agents cannot be obtained commercially (Miller and Smith, 1984). Briefly, a solution containing (mM) 157 NaCl, 10 HEPES, 10 HEDTA, adjusted with NaOH to pH 7.5, was divided into two parts. One part (solution A) had no added calcium. Into the other part (solution B), CaCl 2 was added to 9 mM and the pH readjusted to 7.8. A small sample of this solution was then titrated with 100 mM CaCl 2 while monitoring its pH. The amount of CaCl 2 necessary to fully titrate HEDTA, that is, to obtain pure CaHEDTA at pH 7.5, was determined from this titration. The appropriate amount of CaCl 2 was then added to the remaining solution B. Solutions A and B were mixed to obtain different free calcium concentrations (Table I). Volumes were calculated (EqCal; BioSoft, Cambridge, UK) using published values for HEDTA binding constants (Martell and Smith, 1974). The pH was adjusted to 7.5 and solutions stored at 4 Њ C in plastic containers for up to 1 wk. Solutions containing both calcium and magnesium (at concentrations of 20 and 100 M free, respec-tively) were made by adding to solution A appropriate amounts of stocks of CaCl 2 and MgCl 2 . The concentration of divalent cations in these stocks were calibrated by measuring their osmolality.
In many of the experiments reported here, we tested the electrical properties of the same membrane patch before and after removing divalent cations. Divalent cations were removed by exposing the cytoplasmic (outside) surface of the patch to a solution composed of (mM): 157 NaCl, 10 HEPES, 5 EDTA, 5 EGTA, adjusted with NaOH to pH 7.5. We refer to this solution in the following text simply as the EDTA/EGTA solution.
Electrical Recordings
Tight-seal electrodes were made from aluminosilicate glass (1724; Corning Glass Works, Corning, NY) (1.5 ϫ 1.0 mm, o.d. ϫ i.d.). We measured membrane currents under voltage clamp at room temperature (19-21 Њ C) with a patch-clamp amplifier (#8900; Dagan Instruments, Minneapolis, MN). Analog signals were low pass filtered below 2.5 kHz with an eight pole Bessel filter and were digitized on line at 6 kHz (FastLab; Indec, Capitola, CA). Membrane voltage was normally held at 0 mV and membrane currents were activated either by 110-ms long step changes to Ϫ 40 mV or with a continuous voltage ramp that swept between Ϫ 70 and ϩ 70 mV at a rate of 228 mV/s. In measurements with voltage ramps, the voltage was stepped from 0 to Ϫ 70 mV and held at that value for 200 ms before applying the voltage ramp. This was necessary because in the presence of divalent cations there is a time-dependent change in membrane current amplitude on switching from 0 to Ϫ 70 mV. This time-dependent change is due to the well characterized voltage-dependent channel block by divalent cations (in rods, Zimmerman and Baylor, 1992; in cones, Picones and Korenbrot, 1995). To generate current-voltage (I-V) 1 curves, the voltage ramp was swept in four successive trials and the currents were signal averaged. Between voltage ramp trials, membrane voltage was held at 0 mV for 1.2 s. As usual, outward currents are positive and the extracellular membrane surface is defined as ground.
We began every experiment by determining the current amplitude before and after adding saturating cGMP concentrations. We frequently found (>50%) that membrane patches did not respond initially to the ligand. In many instances, these patches became responsive after rapidly moving the electrode tip across the air-water interface. It is likely, therefore, that unresponsive patches had formed closed vesicles that opened upon crossing the air-water interface (Horn and Patlak, 1980). We only analyzed data measured in patches in which the amplitude of the current generated with saturating cGMP concentrations and the leakage current measured in the absence of cGMP did not change by >10% over the course of the entire experiment. Functions were fit to experimental data by least square minimization algorithms (Origin; MicroCal Software, Northampton, MA). Experimental errors are presented as standard deviations.

RESULTS
The K 1/2 of the cGMP-gated Currents in Cones Is Modulated by Divalent Cations
1 Abbreviations used in this paper: I-V, current-voltage; PDE, phosphodiesterase.

We investigated cGMP-dependent currents in inside-out patches detached from the plasma membrane of retinal cone outer segments. These patches contain only cGMP-gated channels (Miller and Korenbrot, 1993a). The dependence of current amplitude on cGMP concentration is well described by the Hill equation (Fig. 1):
I = I_max · [cGMP]^n / ([cGMP]^n + K_1/2^n)    (1)

where I is the amplitude of the cGMP-dependent membrane current, I_max is its maximum value, [cGMP] is the concentration of cGMP, K_1/2 is that concentration necessary to reach one half the value of I_max, and n is a parameter that reflects the cooperative interaction of cGMP molecules in activating the membrane current. The cGMP dependence of the membrane current depends on the history of exposure of the membrane patch to divalent cations. In Fig. 1, we present cGMP-dependent currents measured in a cone membrane patch in response to −40 mV voltage steps and in the presence of 20 μM Ca++ and 100 μM Mg++. Currents were measured in the same patch both before and after exposure to the EDTA/EGTA solution (Fig. 1, A and B). Also illustrated is the dependence of current amplitude on cGMP concentration (Fig. 1 C). The data points measured both before and after exposing the patch to EDTA/EGTA were well fit by Eq. 1, but the values of K_1/2 and n are different. The average value of K_1/2 shifted from 86.1 ± 18 to 58.8 ± 19 μM upon exposure to EDTA/EGTA. The average value of n shifted from 2.57 ± 0.34 to 1.80 ± 0.23 (Table II). Experimental data were included in these averages only if cGMP activation was measured in the same patch before and after exposure to the EDTA/EGTA solution. The shift in K_1/2 and n reflects a divalent cation-dependent mechanism that modulates the channel's sensitivity to the cyclic nucleotide.
To compare the functional properties of rod and cone cGMP-gated channels, we also investigated the Ca++-dependent modulation in patches from the rod outer segments of tiger salamanders (Fig. 1 D). In these membranes, in the presence of 20 μM Ca++ and 100 μM Mg++, K_1/2 for cGMP was 41.1 ± 7 μM and n was 2.58 ± 0.43 before exposure to EDTA/EGTA, and K_1/2 was 27.5 ± 6.2 μM and n was 1.97 ± 0.28 afterwards (Table II). Thus, in tiger salamander rods K_1/2 shifts by ~1.5-fold, similar to findings in frog rods (Gordon et al., 1995). The fractional change in K_1/2 is thus similar in rod and cone membrane patches.
Modulation of K 1/2 Is Not an Artifact Due to Phosphodiesterase Activity in the Membrane Patch
Detached outer segment patches can be structurally complex and may include not only plasma membrane but fragments of disc membranes containing phosphodiesterase (PDE) activity (Ertel, 1990). This may be a particular concern in cone outer segments, where disc and plasma membranes are continuous. The presence of PDE in the patch can lead to the artifactual appearance of variable cGMP titration curves (Ertel, 1990). We tested whether this might be a possible source of experimental artifacts in cone membrane patches by comparing cGMP activation curves in the presence and absence of 0.5 mM IBMX (3-isobutyl-1-methylxanthine), a saturating concentration of this effective cone PDE inhibitor (Gillespie and Beavo, 1989). IBMX had no effect whatsoever on the titration curves (data not shown).

Figure 1. Currents shown in A were measured shortly after patch excision, while those in B were measured in the same membrane after exposure to the EDTA/EGTA solution. The lower panels illustrate the dependence of normalized current amplitude on cGMP concentration at −40 mV before and after exposure to the EDTA/EGTA solution. Data points were normalized by dividing the amplitude at each cGMP concentration tested (I) by the maximum current (Imax). C illustrates data measured in a patch detached from a cone outer segment. The continuous curve is the best fit to the data of the Hill equation (Eq. 1) with K_1/2 = 88.4 μM and n = 2.73 before exposure to EDTA/EGTA, and K_1/2 = 61.4 μM and n = 1.59 afterwards. D illustrates data measured in rod membrane patches, where the continuous curve is the Hill equation with K_1/2 = 34.2 μM and n = 2.3 before exposure to the EDTA/EGTA solution, and K_1/2 = 17.9 μM and n = 1.6 afterwards.
We confirmed further that the modulation of K_1/2 is not an artifact due to the effect of PDE by measuring the properties of membrane currents activated by 8-Br-cGMP. This cGMP analog activates the channels but is not efficiently hydrolyzed by the photoreceptor PDE (Zimmerman et al., 1985). Fig. 2 illustrates currents activated by 8-Br-cGMP and measured in the same patch in the presence of 20 μM Ca++ and 100 μM Mg++ before and after exposure to EDTA/EGTA. As with cGMP activation, the dependence of current amplitude on 8-Br-cGMP was well described by Eq. 1, and removal of divalent cations shifted K_1/2 and n to lower values (Table I). Thus, modulation of the cGMP-gated currents does not reflect the activity of PDE.
Ion Permeation and Selectivity Are Not Different in the Two States of Ligand Affinity
We investigated whether the state of modulation affects other functional properties of the cGMP-gated currents. The shape of the I-V curve reflects, according to Eyring rate theory, the energy of interaction between permeant cations and their binding sites within the channel (Alvarez et al., 1992). The success of this theoretical analysis is measured by the ability to predict the shape of the I-V curves measured under various ionic conditions. The I-V curves of the cGMP-gated channels of both cones (Picones and Korenbrot, 1992;Haynes, 1995) and rods (Zimmerman and Baylor, 1992) can be predicted using Eyring rate theory, assuming the energy profile across the channels includes one binding site asymmetrically located within the membrane. If the interaction between permeant cations and the channel is affected by the state of modulation, then the shape of the I-V curve should change.
We compared in detail I-V curves measured in the same cone membrane patch before and after exposure to EDTA/EGTA. Fig. 3 illustrates membrane currents measured under symmetrical NaCl solutions with 20 M Ca ϩϩ and 100 M Mg ϩϩ on the cytoplasmic membrane surface and no divalent cations on the extracellular surface. Currents were measured in the presence of various cGMP concentrations in the range between 40 M and 1 mM, and activated with a continuous voltage ramp between Ϫ70 and ϩ70 mV. The I-V curves were nonlinear under all cGMP concentrations tested (Fig. 3). The nonlinearity is generated both by the voltage dependence of cGMP binding and the voltage dependence of divalent block (reviewed in Yau and Baylor, 1989). To analyze the I-V curves, we determined the voltage dependence of the binding curve for cGMP by fitting Eq. 1 to the current measured at every voltage between Ϫ70 and ϩ70 mV. We found n not to change significantly with voltage either before or after exposure to EDTA/EGTA. K 1/2 , on the other hand, was voltage dependent, but the form of the voltage dependency was quantitatively the same before and after exposure to EDTA/EGTA (Fig. 3). Thus, the I-V curves are not affected by the state of modulation. We obtained similar results in five other cone patches and four rod patches. Hence, the energy of interaction between cations and the channels, in both rods and cones, does not appear to be affected by the state of channel modulation.
We analyzed the relative Ca ϩϩ to Na ϩ permeabilities of the channels (PCa/PNa) in the two states of ligand affinity. This is a physiologically important feature since PCa/PNa differs in channels of rods and cones (Picones and Korenbrot, 1995;Frings et al., 1995). To determine the value of PCa/PNa, we measured I-V curves Each trace is the average of four current ramps from which the average ramp current measured at 0 cGMP has been subtracted. The leak conductance measured at 0 cGMP between 0 and Ϫ20 mV was 115 pS. (bottom) The voltage dependence of the K 1/2 (C) and n (D) terms in the Hill equation (Eq. 1) before (thick trace) and after (thin trace) exposure to the EDTA/EGTA solution.
(with voltage ramps) under saturating cGMP concentrations (1 mM). The concentration of NaCl was symmetric across the membrane, and we imposed CaCl 2 concentration gradients. The extracellular membrane surface was free of Ca ϩϩ and the cytoplasmic membrane surface was exposed to concentrations of 5 or 10 mM. The same patch was then exposed to the EDTA/ EGTA solution, and the current measurement repeated. Typical results are illustrated in Fig. 4. The reversal potential shifted towards negative values as the cytoplasmic Ca ϩϩ concentration increased, indicating that the channels are more permeable to Ca ϩϩ than to Na ϩ . The magnitude of the shift was the same before and after exposure to EDTA/EGTA. Thus, the state of modulation does not affect the ion selectivity of the cone channels. We obtained the same results in three other membrane patches. The values of PCa/PNa, calculated from the shift in reversal voltage (Lewis, 1979), are similar to those we have previously reported (Picones and Korenbrot, 1995).
Effects of Ca ϩϩ on the Endogenous Modulation of cGMP Affinity
The removal of both Ca ϩϩ and Mg ϩϩ results in an irreversible shift of cGMP affinity. We investigated whether this shift was specific for either cation by testing whether the cGMP affinity shifted when only one of the two cations was removed. Removal of Mg ϩϩ alone did not shift the ligand affinity. Removal of Ca ϩϩ alone shifted the cGMP activation curve just as when both cations were removed, but the rate of this shift was slower than that caused by the simultaneous removal of both cations.
We explored the effects of varying Ca ϩϩ concentration on the shift in cGMP affinity. We measured the current activated by 60 M cGMP in the presence of 20 M Ca ϩϩ and 100 M Mg ϩϩ , exposed the patch for 60 s to solutions free of cGMP with the same Mg ϩϩ but progressively lower Ca ϩϩ concentrations, and then repeated the current measurement in the 20 M Ca ϩϩ , 100 M Mg ϩϩ solution with cGMP. The current was unaffected at concentrations as low as 1 M Ca ϩϩ . Below this concentration, current amplitude increased as Ca ϩϩ concentration declined (Fig. 5). The increase in current amplitude at a fixed, nonsaturating cGMP concentration reflects shifts in K 1/2 and n to lower values. The current continued to increase, but even at 100 nM Ca ϩϩ the current increase was not maximal. Addition of EDTA/EGTA achieved the maximum current enhancement and additional washes in EDTA/EGTA caused no further current enhancement (Fig. 5). We made the same observations in six other patches. Be- Figure 4. I-V curves measured in the same membrane patch detached from a cone outer segment before (continuous trace) and after (᭺) exposure to the EDTA/EGTA solution. Currents were measured in the presence of 1 mM cGMP and symmetric Na ϩ (167 mM) solutions. The extracellular membrane surface was free of divalent cations, while the cytoplasmic surface was bathed with solutions containing 0, 5, or 10 mM Ca ϩϩ . Currents were activated by a continuous voltage ramp between Ϫ70 and ϩ70 mV. Each trace is the average of four current ramps, from which the average ramp current measured at 0 cGMP has been subtracted. The leak conductance measured at 0 cGMP between 0 and Ϫ20 mV was 70 pS. The value of membrane voltage at zero current reveals that the channels are more permeable to Ca ϩϩ than to Na ϩ , but this selectivity is unaffected by the presence or absence of the endogenous modulator. For the data shown, PCa/PNa ϭ 8.6 calculated at 5 mM assuming a constant field and using ionic concentrations, not activities (Lewis, 1979). cause the shift in K 1/2 is irreversible, the experimental data do not reflect stationary conditions and, therefore, cannot be quantitatively analyzed as equilibrium doseresponse data. Nonetheless, the results indicate that in detached patches, cGMP affinity is specifically modulated by Ca ϩϩ over a concentration range limited by 1ف M at its upper end.
The Effects of Exogenous Calmodulin
In rods, the Ca ϩϩ -dependent modulation of ligand affinity has been attributed to the action of an endogenous factor similar, and perhaps identical, to calmodulin. Calmodulin confers Ca ϩϩ dependence to the cGMP activation of the channels with features that are almost indistinguishable from those of the endogenous modulator (Hsu and Molday, 1993;Gordon et al., 1995;Bauer, 1996). We explored the potential role of calmodulin in the modulation of channels in cone outer segments. In these experiments, we measured the cGMP concentration dependence of currents in the presence of 20 M Ca ϩϩ and 100 M Mg ϩϩ . The membrane patch was tested before and after exposure to EDTA/EGTA, and then again in the continuous presence of calmodulin at a concentration of 200 nM. This concentration is effective in rod membrane patches (Gordon et al., 1995, Haynes andStotz, 1997) and is well above the concentration that saturates modulation in rod membrane vesicles (Hsu and Molday, 1993;Bauer, 1996). In any event, the concentration of calmodulin used when testing its pharmacological action affects the Ca ϩϩ dependence of the phenomenon under study, but not its features at saturating Ca ϩϩ concentrations (see Bauer, 1996, for detailed mathematical analysis). Fig. 6 illustrates typical results obtained in both rods and cones. In patches from both photoreceptor types, as expected, exposure to EDTA/EGTA lowered the values of K 1/2 and n in the cGMP titration curves. The addition of calmodulin in the presence of Ca ϩϩ shifted K 1/2 and n back towards their initial values. Whereas K 1/2 and n essentially reverted to their starting values in rods, the shift was never fully reversed in cones (Table II). Thus, channels of cones and rods differ in the effectiveness with which calmodulin in high Ca ϩϩ shifts their sensitivity to activation by cGMP. In cones, then, the Ca ϩϩ -dependent regulation of cGMP activation is also likely to reflect the activity of an endogenous modulator. The modulator, however, may not be calmodulin, since this protein does not fully mimic the endogenous function.
Calmodulin Competes with the Endogenous Modulator for Binding to the Channels
To test whether the endogenous modulator in cones and calmodulin share structural features, we tested whether the two compete in their binding to the channel. We investigated the effectiveness of added calmodulin to shift K 1/2 or n in the presence and absence of the endogenous modulator. If both molecules bind to Figure 6. Effect of calmodulin on modulation of cGMP-dependent currents in detached membrane patches. In cone or rod patches, currents were activated with voltage steps from 0 to Ϫ40 mV in the presence of 20 M Ca ϩϩ and 100 M Mg ϩϩ and varying concentrations of cGMP. Measurements were repeated in the same patch before and after exposure to the EDTA/EGTA solution and in the continuous presence of 200 nM calmodulin. Data points were normalized by dividing current amplitude at each cGMP concentration (I) by the maximum current measured (Imax). The continuous line is the best fit to the data of the Hill equation (Eq. 1). In the cone patch, K 1/2 ϭ 101 M, n ϭ 3.15 before exposure to the EDTA/EGTA solution, K 1/2 ϭ 78 M, n ϭ 2.0 afterwards, and K 1/2 ϭ 87 M, n ϭ 2.55 in the presence of calmodulin. In the rod patch, K 1/2 ϭ 41.2 M, n ϭ 2.78 before exposure to the EDTA/EGTA solution, K 1/2 ϭ 24.6 M, n ϭ 1.98 afterwards, and K 1/2 ϭ 39.4 M, n ϭ 2.47 in the presence of calmodulin. the same site, then calmodulin should be without effect when added to the membrane in the presence of the endogenous modulator. That is, for calmodulin to be effective, the endogenous modulator must first be removed from its binding site.
We measured membrane currents with voltage steps between 0 and Ϫ40 mV in 20 M Ca ϩϩ and 100 M Mg ϩϩ in the presence of 60 M cGMP. At this cGMP concentration, lowering K 1/2 and n will result in an increase in current amplitude despite an unchanging agonist concentration (see Fig. 1). In the same patch, we measured currents before and after adding 200 nM calmodulin. Calmodulin was without effect on current amplitude in membrane patches maintained in high Ca ϩϩ and Mg ϩϩ (Fig. 7 A). As expected, removal of the endogenous modulator by brief exposure to EDTA/ EGTA shifted K 1/2 and n and therefore increased the current amplitude (Fig. 7 B). After exposure to EDTA/ EGTA, added calmodulin was now able to shift K 1/2 towards its starting value (Fig. 7 B). We obtained the same results in every patch tested with this protocol (n ϭ 6). Thus, while calmodulin and the endogenous modulator may not be the same, they appear to compete for a common site on the channels.
Ca ϩϩ Dependence of the Calmodulin-mediated Modulation
We investigated whether the channels of rods and cones differ in their interaction with Ca ϩϩ /calmodulin by testing the Ca ϩϩ dependence of cGMP activation in the presence of 200 nM calmodulin. This Ca ϩϩ dependence is itself a function of the calmodulin concentration (Bauer, 1996) and therefore it does not inform us as to what the Ca ϩϩ dependence of modulation might be in the intact cell, if calmodulin were the modulator.
However, it will reflect differences in the energetics of the modulator's binding site between the channels of the two cell types.
We measured the Ca ϩϩ dependence of currents in membrane patches of rods and cones in the presence of fixed concentrations of calmodulin and cGMP. In each experiment, membrane patches were first exposed to the EDTA/EGTA solution to remove the endogenous modulator. In patches from single cones, we measured currents generated by 110-ms voltage steps to Ϫ40 mV in the presence of 60 M cGMP and varying Ca ϩϩ concentration between 0 and 20 M. In the same patch, we measured the effects of Ca ϩϩ first in the absence and then in the presence of 200 nM calmodulin (Fig. 8). In cone membranes, Ca ϩϩ in the absence of calmodulin had a small but reproducible effect on current amplitude. The maximum change in current amplitude between 0 and 20 M Ca ϩϩ in the absence of calmodulin was .%5ف In the data shown, we subtracted this effect to obtain the effect of added calmodulin alone. We studied rod channels with the same protocols, except that 20 M cGMP was used to activate the channel. Fig. 8 illustrates the Ca ϩϩ dependence of current amplitude in the presence of calmodulin in typical patches from both rod and cone channels.
The experimental data were well fit by the function:

I = I_∞ + (I_zero − I_∞) · K_i^n / (K_i^n + [Ca++]^n)    (2)

where I is the current, I_zero is the current in the absence of Ca++, I_∞ is the current in the presence of a saturating Ca++ concentration, [Ca++] is the Ca++ concentration, K_i is the Ca++ concentration at which the current is inhibited by one half, and n is a parameter that reflects cooperative interaction of cation binding. This is a modified Hill equation that indicates Ca++ interacts cooperatively with calmodulin to block the current amplitude. For cones, K_i = 366 ± 131 nM, n = 1.6 ± 0.47, and I_∞/I_zero = 0.72 ± 0.16 (N = 9), while for rods, K_i = 679 ± 187 nM, n = 1.81 ± 0.44, and I_∞/I_zero = 0.53 ± 0.13 (N = 11). These values suggest that Ca++/calmodulin interacts with the cGMP-gated channel of both rods and cones, but the quantitative features of this interaction differ between the two receptor types.

Figure 7. Competition between calmodulin and the endogenous modulator in a cone outer segment membrane patch. Currents were activated with voltage steps from 0 to −40 mV in the presence of 20 μM Ca++ and 100 μM Mg++. Shown are difference currents measured by subtracting from currents measured at 60 μM cGMP those measured in the absence of the cyclic nucleotide. The leak conductance, measured in 0 cGMP at −40 mV, was 325 pS. Currents in A were measured shortly after excision in the absence (thick trace) or continuous presence (thin trace) of 200 nM calmodulin. Currents in B were measured after exposing the same membrane patch to the EDTA/EGTA solution. Again, currents were measured in the absence (thick trace) or continuous presence (thin trace) of 200 nM calmodulin. Calmodulin is completely ineffective before the endogenous modulator is removed. If the endogenous modulator is first removed by exposure to the EDTA/EGTA solution, calmodulin then causes a shift in membrane current similar in direction, but smaller in extent than that caused by the endogenous modulator.
DISCUSSION
The cGMP-gated ion channels in detached patches from cone outer segments exhibit a Ca++-dependent modulation of their affinity for the cyclic nucleotide. The dependence of current amplitude on cyclic nucleotide concentration is described by the Hill equation (Eq. 1), and modulation is manifested as a decrease in the apparent binding affinity (K_1/2) and cooperativity (n) as the Ca++ concentration is lowered. We will refer to this as the endogenous modulation of the channel. Endogenous modulation has been previously reported for the cGMP-gated channels of rods in bovine membrane vesicles (Bauer, 1996), detached frog outer segment patches (Gordon et al., 1995), and truncated outer segments from frogs and tiger salamanders (Nakatani et al., 1995; Sagoo and Lagnado, 1996). In general, the features of channel modulation in the intact cell should not be extrapolated using data from patches alone. In the case of rods, studies in nearly intact truncated outer segments have demonstrated Ca++-dependent modulation of K_1/2 that is quantitatively the same as that measured in detached membrane patches (Table III). The caveat has been introduced, however, that the modulation in truncated rods, which is measured under stationary conditions, may underestimate the extent of modulation present in the truly intact cell (Sagoo and Lagnado, 1996). In the case of cones, modulation measured under stationary conditions in the nearly intact outer segment is larger in extent than that observed in detached membrane patches; the K_1/2 shifts ~4-fold rather than ~1.5-fold (Rebrik and Korenbrot, 1997).

Figure 8. Ca++ dependence of the effect of calmodulin on rod and cone membrane patches. Patches were first exposed to the EDTA/EGTA solution and then to one containing 200 nM calmodulin and various Ca++ concentrations. Currents were activated with voltage steps from 0 to −40 mV in the presence of 60 μM cGMP for cones and 20 μM cGMP for rods. For the cone, currents measured in the presence of cGMP and 0 and 300 nM, and 20 μM Ca++ are shown. For the rod, shown are currents measured in the presence of cGMP and 0 and 500 nM, and 1 and 20 μM Ca++. The current tracings also include the current measured in the absence of cGMP (nearly noiseless tracing). Data points are normalized cGMP-dependent current amplitudes calculated by dividing current amplitude at each Ca++ concentration (I) by the amplitude measured in the absence of any Ca++ (I_zero). The continuous curve is the best fit to the data of Eq. 2. For cones, K_i = 357 nM, n = 1.6, and I_∞/I_zero = 0.73. For rods, K_i = 632 nM, n = 2.4, and I_∞/I_zero = 0.53.
The endogenous modulation is Ca ϩϩ dependent, but the quantitative features of this dependence cannot be fully studied in membrane patches because modulation is irreversibly lost as Ca ϩϩ concentration is reduced and, therefore, equilibrium dose-response curves cannot be determined. In cone patches, the shift in K 1/2 is observed at concentrations starting at and below 1 M Ca ϩϩ in the presence of 100 M Mg ϩϩ . This differs from data reported for modulation of channels in rods. In rod membrane patches, under experimental conditions similar to those we have reported for cones, modulation occurs starting at and below 22 nM (Gordon et al., 1995). This difference is significant and may suggest that, in cones, channel modulation may play a role in dim light, when only small changes in Ca ϩϩ concentration are expected, whereas in rods it may play a relevant role only in signals generated by relatively bright light (see Bauer, 1996). It is important to recognize that the Ca ϩϩ dependence of modulation reported for other rod preparation-for example, washed bovine membrane vesicles (Bauer, 1996) or amphibian truncated rods (Nakatani et al., 1995, Sagoo andLagnado, 1996)-may differ from data in detached patches because in each preparation the conditions of equilibrium between the modulator and the channel may be different. In the detached membrane patch the effective concentration of unbound modulator is essentially zero. Therefore, the initial condition (high Ca ϩϩ ) is not at equilibrium and the modulator is kinetically "locked" onto the channel. The only information that can be reliably established is the Ca ϩϩ concentration at which the modulator becomes "unlocked" over a reasonably short time course (60 s).
The features of the interaction between the modulator and the channel in the intact cone photoreceptor and its functional role in transduction and/or adaptation are yet to be specified in detail. Although the magnitude of the modulation, a shift of ~1.5-fold in K_1/2, may appear modest, the effect of this modulation on current amplitude can be expected to be large, particularly at low cGMP concentrations. From our experimental results, the change in current expected when Ca++ changes from 1 μM to 10 nM is given by:

I_lo / I_hi = { [cGMP]^n_lo / ([cGMP]^n_lo + K_lo^n_lo) } / { [cGMP]^n_hi / ([cGMP]^n_hi + K_hi^n_hi) }    (3)

where I_lo and I_hi are the currents at 10 nM and 20 μM Ca++, respectively, [cGMP] is the ligand concentration, K_lo and K_hi are the values for K_1/2, and n_lo and n_hi are the values for n at 10 nM and 20 μM Ca++, respectively. Fig. 9 plots Eq. 3 with [cGMP] in units of K_hi. Also shown are data points for currents activated by various cGMP concentrations and measured in a single cone patch before and after exposure to the EDTA/EGTA solution. The concentration of cGMP in the dark can be expected to be between 0.16× and 0.31× K_hi since only 1-5% of the channels are open in darkness (Cobbs et al., 1985). Thus, if the modulation of the channel in the intact cell is similar to that in the patch, the amplitude of the light-sensitive current could change by as much as 5- to 10-fold in response to changes in cytoplasmic Ca++ concentration. As Eq. 3 indicates, and Fig. 9 illustrates, the effectiveness of Ca++ as a modulator increases dramatically as the cGMP concentration is lowered. Under steady illumination, when cGMP concentration is expected to be lower than in the dark, the physiological role of Ca++ modulation is likely to be especially significant. The endogenous modulation of K_1/2 in photoreceptor membrane patches is irreversibly lost upon removal of divalent cations, but can be restored, even if to a limited extent, by calmodulin. These results suggest that endogenous modulation arises from the activity of a "calmodulin-like" protein, since calmodulin not only restores modulation but also competes with the endogenous modulator for the same binding site. As has been argued in reports of similar phenomena in rod patches, it is unlikely that Ca++ modulation arises from phosphorylation, since test solutions lacked nucleotides, or from phosphatase activity, because these enzymes are inhibited by lowering Ca++ (Gordon et al., 1995). Thus, modulation of both rod and cone channels likely arises from the activity of an endogenous factor that, like calmodulin, interacts with the channel in a Ca++-dependent manner: it is bound to the channels at high Ca++, dissociates from them in the absence of Ca++, and is then lost to the solution.
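As a brief worked illustration of the size of this effect, take the average patch values from Table II (K_hi = 86.1 μM, n_hi = 2.57; K_lo = 58.8 μM, n_lo = 1.80) and a dark cGMP concentration of 0.2 × K_hi ≈ 17.2 μM; the specific concentration is our own choice within the 0.16–0.31 × K_hi range quoted above. Then

I_hi / I_max = 17.2^2.57 / (17.2^2.57 + 86.1^2.57) ≈ 0.016,
I_lo / I_max = 17.2^1.80 / (17.2^1.80 + 58.8^1.80) ≈ 0.099,

so I_lo / I_hi ≈ 6, consistent with the 5- to 10-fold range stated above.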
Modulation of cGMP-activated Current in Rod Outer Segments
The molecular identity of the endogenous modulator is under investigation. Calmodulin exists at high concentration in the rod outer segment (Kohnken et al., 1981;Bauer, 1996). Hsu and Molday (1993) first reported that calmodulin causes a Ca ϩϩ -dependent shift in K 1/2 in rod channels of thoroughly washed bovine outer segment membrane vesicles (see also Hsu and Molday, 1994;Bauer, 1996). Similarly, Ca ϩϩ /calmodulin shifts the K 1/2 of channels in rod outer segment membrane patches (Gordon et al., 1995;Kosolapov and Bobkov, 1996;Haynes and Stotz, 1997). Gordon et al. (1995) compared the functional properties of the interaction of the endogenous modulator and calmodulin with the channels in frog rod outer segment membrane patches. Several features of this interaction were quantitatively different between calmodulin and the endogenous modulator, and the authors were not convinced the two molecules were the same. In contrast, Bauer (1996), in studies of bovine rod membrane vesicles, did not find quantitative differences between the endogenous modulator and added calmodulin. Thus, the endogenous modulator in rods is calmodulin-like, but it is not possible to confirm that it is calmodulin itself.
In contrast to the relative similarity between calmodulin and the endogenous modulator in studies of isolated rod membranes, added calmodulin was found ineffective in inducing a Ca²⁺-dependent shift of K_1/2 in truncated rods of tiger salamander from which the endogenous modulator was first removed (Sagoo and Lagnado, 1996). This finding is surprising because calmodulin, independently of whether it is the endogenous modulator, should have had an effect on the truncated outer segment current, since it modulates the channel in membrane patches detached from the same cells (Fig. 6), as well as in patches from frog rods (Gordon et al., 1995; Kosolapov and Bobkov, 1996). We do not have a simple explanation for this experimental puzzle, but it is possible that calmodulin cannot efficiently gain access to the channels within the truncated outer segment.
Data are not available on the concentration of calmodulin in cone outer segments, nor its association with their cGMP-gated channels. In our direct comparison, we found that calmodulin did not quantitatively mimic the activity of the endogenous factor: calmodulin shifted K_1/2 and n to a lesser extent than did the endogenous factor. Indeed, Haynes and Stotz (1997) have reported that in patches of catfish cone outer segments, calmodulin entirely failed to modulate K_1/2. They did not, however, explore the properties of a possible endogenous modulator: in their experiments, the initial condition was to hold the membrane patch in a solution containing EDTA and EGTA to remove any endogenous modulator. Thus, while calmodulin does not appear to modulate K_1/2 in catfish cones, whether any modulation occurs at all is unclear. While it might be surprising that cones in striped bass exhibit modulation while those in catfish do not, it is possible. This issue should be addressed experimentally.
The failure of bovine calmodulin to fully mimic the action of the endogenous modulator in striped bass cones could be due to sequence differences between bovine and fish calmodulin, rather than the nonidentity between calmodulin and the endogenous modulator. This is not likely, however, since the amino acid sequence of calmodulin is nearly 100% identical among vertebrates (Friedberg, 1990). While the endogenous modulator may not be calmodulin in cones, the two molecules are likely to have structural features in common. Calmodulin is a member of a large family of proteins that contain EF hands, a Ca²⁺-binding structural motif consisting of a select sequence of ∼30 amino acids that fold into a helix-loop-helix pattern (Kretsinger, 1979; Klee and Vanaman, 1982). Since the endogenous modulator binds Ca²⁺ and competes with calmodulin for binding to the channels, it is probably also a member of this family of Ca²⁺-binding proteins. We have found, however, that other EF hand-containing proteins expressed in photoreceptors, such as GCAP1 and GCAP2 (reviewed in Polans et al., 1996), do not restore modulation to the channels. Furthermore, these proteins do not compete with calmodulin for binding to the channels (Hackos, Palczewski, Baehr, and Korenbrot, unpublished observations). Knowledge of the specificity of interaction with the channel of different members of the EF hand protein family will be important for the eventual identification of the endogenous modulator.
The state of channel modulation does not affect other functional properties of the channel. In both rods and cones, the interaction and permeation of mono- and divalent cations with the channel are unaffected by the state of modulation, as is the voltage dependence of cGMP binding. Since these functional properties likely reflect the structure of the "pore" region of the channel, the interaction between the endogenous modulator and the channel probably occurs in a structural domain distant from the pore.
The interaction between calmodulin, Ca²⁺, and the target protein is complex. The binding affinity of the three elements for each other changes depending on the identity of the target protein. For example, calmodulin in solution binds four Ca²⁺ ions with affinities in the micromolar range, but in the presence of a target protein the affinity for Ca²⁺ can be elevated by several orders of magnitude (reviewed in Klee, 1988; Gnegy, 1995). Moreover, the Ca²⁺ dependence of a calmodulin-mediated effect on a given target protein changes with the mole ratio of calmodulin to target protein (Bauer, 1996). Therefore, the Ca²⁺ sensitivity of the modulation by calmodulin observed in membrane patches of rods or cones cannot be assumed to be the same as in the intact cell unless the mole ratio of channel to calmodulin were the same. The differences in Ca²⁺ dependence of K_1/2 modulation in rod and cone channels in the presence of calmodulin must reflect differences in the molecular details of the interaction of calmodulin with the channels. | 2017-07-06T18:10:39.106Z | 1997-11-01T00:00:00.000 | {
"year": 1997,
"sha1": "f9a72ce9fab599e4b692bc8f158d6fd61d649813",
"oa_license": "CCBYNCSA",
"oa_url": "http://jgp.rupress.org/content/110/5/515.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f9a72ce9fab599e4b692bc8f158d6fd61d649813",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
36276682 | pes2o/s2orc | v3-fos-license | High-Frequency Oscillatory Ventilation
High-Frequency Oscillatory Ventilation
Kathleen M. Ventre and John H. Arnold

In a recent single-center cohort study, among 332 patients who did not have acute lung injury (ALI) at initiation of mechanical ventilation, 80 (24%) developed it within 5 days [12]. Approximately one third of the study patients were ventilated using tidal volumes exceeding 12 mL/kg, and multivariate analysis identified large tidal volumes as the most significant risk factor for the development of ALI (odds ratio 1.3 for each 1 mL above 6 mL/kg; p < 0.001) [12].

Inappropriate mechanical ventilation strategies may also potentiate the dysfunction of distant organs among patients with respiratory failure. In a multicenter trial, Ranieri and colleagues randomized 37 patients to receive a strategy directed at ventilating between the upper and lower inflection points on the pressure-volume curve (Figure 9.1), versus a higher volume, lower positive end-expiratory pressure (PEEP) strategy targeted at achieving normal blood gas tensions in the control group [6]. Bronchoalveolar lavage (BAL) and blood samples showed a local and systemic inflammatory cytokine response at 36 hr among those in the control group, whereas the experimental strategy appeared to diminish this response [6]. In addition, a landmark multicenter trial has brought about the understanding that specific strategies for mechanical ventilation can have an important influence on outcomes in patients with the acute respiratory distress syndrome (ARDS). In 2000, the ARDS Network investigators demonstrated a 22% relative reduction in mortality among adult patients with ARDS on conventional mechanical ventilation who were randomized to receive relatively small tidal volumes (6 mL/kg ideal body weight) compared with those who were ventilated with larger tidal volumes (12 mL/kg ideal body weight) [13]. Collectively, these observations on the benefits of tidal volume reduction have led to the expectation that high-frequency ventilation would have an important role in the clinical arena because of its unique ability to provide adequate gas exchange using very low tidal volumes in the setting of continuous alveolar recruitment. Theoretically, high-frequency ventilation provides the ultimate open-lung strategy of ventilation, preserving end-expiratory lung volume, minimizing cyclic stretch, and avoiding parenchymal overdistension at end inspiration by limiting tidal volume and transpulmonary pressure (Figure 9.2) [4-7].
Introduction
Data supporting the feasibility of high-frequency oscillatory ventilation (HFOV) come from the observation that delivering very small tidal volumes at high frequencies can overcome the need for adequate bulk gas flow in the lung. In the early 1970s, while attempting to measure cardiac performance in large animals by assessing the myocardial response to pericardial pressure oscillations, Lunkenheimer and colleagues found that endotracheal high-frequency oscillations could produce efficient CO2 elimination in the absence of significant chest wall excursion [1,2]. These investigators determined that CO2 elimination was related to changes in the frequency of oscillation as well as the amplitude of the vibrations [1]. In general, CO2 was cleared optimally at a frequency between 23 and 40 Hz, with smaller animals requiring higher frequencies [1]. Several years later, Butler and colleagues observed that gas exchange could be supported in humans at a frequency of 15 Hz, hypothesizing that this would enhance diffusive gas transport while minimizing dependence on bulk convective gas flow in the airways [3]. In their study, a series of patients, 9 years of age and older, were successfully ventilated with a piston pump calibrated to deliver tidal volumes in the range of 50-150 mL, and a single 2.5-kg infant was supported with a tidal volume of 7.5 mL using the same device [3].
At the moment, there are many laboratory data to suggest that repetitive cycles of pulmonary recruitment and derecruitment are associated with identifiable markers of lung injury, and experimental models of ventilatory support that reverse atelectasis, limit phasic changes in lung volume, and prevent alveolar overdistension appear to be less injurious [4-11]. There is also a growing quantity of clinical data, including the single-center cohort study described above [12], that support these observations. High-frequency oscillatory ventilation (HFOV) is the most widely used form of high-frequency ventilation in clinical practice today. In HFOV, lung recruitment is maintained by the application of relatively high mean airway pressure, while ventilation is achieved by superimposed sinusoidal pressure oscillations that are delivered by a motor-driven piston or diaphragm at a frequency of 3-15 Hz [7,14]. High-frequency oscillatory ventilation is the only form of high-frequency ventilation in which expiration is an active process. As a result, alveolar ventilation is achieved during HFOV with the use of tidal volumes in the range of 1-3 mL/kg, even in the most poorly compliant lungs [14].
Gas Transport and Control of Gas Exchange
A comprehensive understanding of the mechanisms of gas transport in HFOV has not emerged despite a long period of scientifi c investigations. Although direct bulk fl ow can account for ventilation of proximal alveolar units even at very low tidal volumes, it is now believed that Pendelluft, or mixing of gases among alveolar units with varying time constants, contributes signifi cantly to gas exchange at high frequencies [15][16][17]. In addition, effi cient gas mixing likely occurs along the parabolic inspiratory gas front in high-frequency oscillation, because this provides an increased area along which radial diffusion can occur [15,16]. Finally, axial asymmetry of inspiratory and expiratory gas fl ow profi les creates separation of fresh gas and exhaled gas so that inspiratory gas fl ow travels down the central axis of the airway, while expiratory fl ow is distributed along the airway wall [15,16].
Experimental work in healthy rabbits has shown that CO2 elimination during HFOV is a function of frequency and the square of the tidal volume (VCO2 = f × Vt²) [18]. In HFOV, tidal volume is positively correlated with the amplitude of oscillation ("delta P," ΔP) and is related inversely to the frequency (Hz) [19]. Alveolar recruitment is positively correlated with the mean airway pressure (Paw) and the ratio of inspiratory time to expiratory time (I:E) [20]. Although most of the research using HFOV has focused on the use of higher frequency ranges, CO2 elimination can probably occur at many potential combinations of f and Vt², with higher frequency ranges providing conditions of lowest lung impedance and, consequently, a lower pressure cost of ventilation [21,22]. In HFOV, Paw, ΔP, frequency, and I:E are all directly controlled by the operator.
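These relationships can be illustrated with a short sketch. The Python snippet below encodes the proportionality VCO2 ∝ f × Vt² together with an assumed, purely illustrative dependence of tidal volume on amplitude and frequency; it is not a model of any particular ventilator, and the numbers printed are arbitrary units.

```python
# Illustrative sketch of the CO2-elimination relation described above (VCO2 ∝ f × Vt^2).
# The dependence of delivered tidal volume on amplitude (dP) and frequency is only
# qualitative here: Vt is assumed proportional to dP and inversely proportional to
# frequency, which is an assumption for demonstration, not a ventilator model.

def tidal_volume(dP, freq_hz, k=1.0):
    """Relative tidal volume: rises with oscillation amplitude, falls with frequency."""
    return k * dP / freq_hz

def co2_elimination(dP, freq_hz):
    """Relative CO2 elimination, proportional to f x Vt^2 (arbitrary units)."""
    vt = tidal_volume(dP, freq_hz)
    return freq_hz * vt**2

for f in (5, 8, 10, 15):
    print(f"{f:>2} Hz: relative VCO2 = {co2_elimination(dP=60, freq_hz=f):.0f}")
```

Under this assumption, raising the frequency at a fixed amplitude reduces CO2 clearance, which is consistent with the practice described below of raising ΔP or lowering frequency to address hypercarbia.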
Presently available high-frequency ventilators vary with respect to pressure waveforms, consistency of I : E ratio over a range of frequencies, and the relationship of displayed mean airway pressure to distal mean alveolar pressure [19]. Most of the experience with HFOV in the clinical arena involves the SensorMedics 3100A (SensorMedics, Yorba Linda, CA), which is approved for use in infants and children. Almost 20 years of study using this device in the laboratory have provided clinicians with a fundamental understanding of its performance characteristics. Using in vitro models as well as alveolar capsule techniques in small animals with open chests, several investigators have reported that mean airway pressure and ΔP are signifi cantly attenuated by the tracheal tube, that alveolar pressure is inhomogeneously distributed during HFOV, and that the I : E ratio is an important determinant of alveolar pressure [20,[23][24][25]. Specifi cally, early data from surfactant-defi cient small animals, as well as from large animals and humans, seemed to indicate that limitation of expiratory time using an I : E ratio of 1 : 1 would promote alveolar gas trapping, especially at lower mean airway pressures [20,[26][27][28]. This observation led to the suggestion that HFOV be applied in the clinical setting with an I : E ratio of no greater than 1 : 2.
When transitioning the patient to HFOV from conventional ventilation (Figure 9.3), the Paw on HFOV is typically set up to 5 cm H2O above the Paw last used on the conventional ventilator in order to maintain recruitment in the face of pressure attenuation by the tracheal tube. Amplitude (ΔP) is set by adjusting the power control while observing for adequacy of chest wall vibrations, as indicated by visible vibration to the level of the groin. Frequencies of 12-15 Hz are generally used for small infants, whereas lower frequencies in the range of 3-8 Hz are typically used for larger pediatric patients and adults, with the goal of generating enough volume displacement to adequately ventilate using currently available HFOVs. If employing an open lung ventilation strategy, Paw is then slowly titrated upward in 1-2 cm H2O increments, with the goal of reducing the FiO2 to ≤0.6 with an arterial oxygen saturation of ≥90%.
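The initial settings described above can be summarized in a small helper. This is a teaching sketch only, not clinical decision support; the numeric thresholds come from the text, while the function name and the patient-group labels are invented for illustration.

```python
# Illustrative helper that encodes the initial HFOV settings described in the text.
# Not clinical decision support; category labels and structure are assumptions.

def initial_hfov_settings(conventional_paw_cmh2o, patient_group):
    """Suggest starting HFOV settings when transitioning from conventional ventilation."""
    paw = conventional_paw_cmh2o + 5        # up to 5 cm H2O above the last conventional Paw
    frequency_hz = {
        "small_infant": (12, 15),           # 12-15 Hz for small infants
        "larger_child_or_adult": (3, 8),    # 3-8 Hz for larger pediatric patients and adults
    }[patient_group]
    return {
        "mean_airway_pressure_cmH2O": paw,
        "frequency_hz_range": frequency_hz,
        "amplitude": "titrate power until chest wall vibration is visible to the groin",
        "open_lung_titration": "raise Paw in 1-2 cm H2O steps until FiO2 <= 0.6 with SaO2 >= 90%",
    }

print(initial_hfov_settings(conventional_paw_cmh2o=18, patient_group="small_infant"))
```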
Achieving acceptable oxygen saturations at this stage will often require intravascular volume expansion in order to avoid creating zone 1 conditions [29] in the lung as pulmonary blood volume is displaced by the increasing alveolar pressure. Once adequate alveolar recruitment is achieved, it may be possible to capitalize on pulmonary hysteresis, evident in many regions of the lung early in the course of disease (see Figure 9.1) [30], by carefully adjusting the Paw downward as long as the oxygenating effi ciency is preserved. Alternatively, after a brief period of aggressive volume recruitment, the Paw can be dropped to a point that is known to be above the closing pressure, with the expectation that adequate tidal volume will be preserved [30]. Adequacy of lung recruitment is verifi ed by ensuring that both hemidiaphragms are displaced to the level of the ninth posterior rib on chest x-ray [14]. A typical sequence of steps for addressing hypercarbia once an appropriate degree of lung infl ation as well as patency of the tracheal tube are verifi ed would be (1) increasing the ΔP in increments of 3 cm H 2 O until power is maximized, (2) decreasing the frequency in increments of 0.5-1 Hz, and (3) partially defl ating the tracheal tube cuff, if available, to allow additional egress of CO 2 [31][32][33]. In the latter case, any decrement in Paw should be corrected by increasing the bias fl ow as necessary to maintain a stable level of distending pressure [32,33].
If employing a strategy targeted at managing active air leak, the lung is initially recruited using stepwise increases in Paw to achieve FiO 2 ≤0.6 and SaO 2 ≥90%, and then Paw and ΔP are lowered to a point just below the leak pressure, the value at which air is no longer seen to drain from the thoracostomy tube. If the leak pressure is relatively low, it may be necessary to tolerate an FiO 2 in excess of 0.6 with SaO 2 ≥85%, and hypercarbia if necessary, as long as pH ≥7.25, in order to provide satisfactory gas exchange while minimizing alveolar pressure [17,[34][35][36]. As demonstrated in a small animal model of pneumothorax, higher frequencies and short inspiratory times may also minimize air leak during HFOV [36].
High-Frequency Oscillatory Ventilation in the Neonate and Infant

Neonatal Respiratory Distress Syndrome
Surfactant defi ciency, high chest wall compliance, and a dynamic functional residual capacity (FRC) that is near closing volume in the preterm infant interact to potentiate a repetitive cycle of derecruitment and reinfl ation that makes the neonatal lung particularly well suited to an open lung strategy of ventilation (Figure 9.4). Following laboratory investigations that demonstrated adequate gas exchange at lower intrapulmonary pressures and reduced incidence of pulmonary air leak with the use of HFOV in surfactantdefi cient small animals [37,38], a substantial amount of data have accumulated on the use of HFOV in humans for the management of the neonatal respiratory distress syndrome (RDS). The fi rst large randomized, controlled trial with premature infants comparing high-frequency ventilation using a piston oscillator with conventional mechanical ventilation was published in the pre-surfactant era by the HIFI Study Group in 1989 [39]. The study was designed to evaluate the effect of high-frequency ventilation on the incidence of chronic lung disease of prematurity and included 673 infants weighing 750-2,000 g who had been supported less than 12 hr on conventional ventilation for respiratory failure in the fi rst 24 hours of life. Infants randomized to receive HFOV were administered an FiO 2 and Paw equal to those administered on conventional ventilation. Infants who had not already been tracheally intubated were administered an FiO 2 equal to that received before intubation and a Paw of 8-10 cm H 2 O. Hypoxemia was fi rst addressed by increasing the FiO 2 and then by increasing the Paw [39]. Overall, the investigators did not incorporate alveolar recruitment into the HFOV strategy, and the study was unable to show a signifi cant difference in the incidence of chronic lung disease or in 28-day mortality between the two groups. However, it did show a signifi cant increase in the incidence of air leak as well as highgrade intraventricular hemorrhage among the infants who were randomized to receive HFOV [39].
Two additional large multicenter trials were recently published in an effort to clarify the role of high-frequency ventilation in the management of RDS in preterm infants [40,41]. Unlike the HIFI trial and subsequent studies that produced conflicting results [42,43], these two trials were produced by centers with a great deal of experience with the use of HFOV in neonates, and each emphasized alveolar recruitment as part of the strategy for high-frequency ventilation. In a well-controlled study, Courtney and colleagues used a strategy in the conventional ventilation arm that targeted a tidal volume of 5-6 mL per kg body weight and ventilated infants in the HFOV arm at a frequency of 10-15 Hz [40]. These investigators were able to show that infants randomized to receive HFOV with the 3100A were successfully separated from mechanical ventilation earlier than those assigned to a lung-sparing strategy of conventional ventilation, and, among the infants assigned to high-frequency ventilation, there was a significant decrease in the need for supplemental oxygen at 36 weeks postmenstrual age [40]. By defining a disease threshold in the study infants, adhering to lung-protective protocols for mechanical ventilation, and extubating from the assigned ventilator according to specific criteria, this study identified a set of circumstances in which HFOV may be used with clear benefit in preterm infants with RDS [40]. In contrast, Johnson and colleagues included healthier patients, used fewer defined protocols, and used more aggressive ventilator strategies [41]. In both study arms, the investigators targeted a PaCO2 of 34-53 torr, whereas Courtney and colleagues used a ventilation strategy that allowed permissive hypercapnia [40]. For those infants who were supported on HFOV, Johnson and colleagues initiated therapy at a frequency of 10 Hz, and, if maximizing amplitude (ΔP) did not achieve adequate clearance of CO2, the frequency was subsequently reduced [41]. Finally, Johnson's group transitioned the majority of study infants to conventional ventilation for weaning after a median time on HFOV of 3 days, a relatively small portion of the total duration of mechanical ventilation [41].
It is important to emphasize that neither of these studies was able to duplicate the fi ndings of the HIFI group with respect to associating the use of HFOV with the development of high-grade intraventricular hemorrhage. However, the difference in outcomes in the two trials is striking. The rigorously controlled conditions in the Courtney study probably isolate the effect of HFOV with greater clarity, and their data suggest that only 11 infants need be supported with HFOV in order to prevent one occurrence of chronic lung disease at 36 weeks postmenstrual age [40]. Using Johnson's data, the number of infants needed to support on HFOV in order to prevent one occurrence of chronic lung disease is 50 [41]. Although the study design used by Johnson and colleagues may better represent actual practice, the outcomes indicate that exposure to aggressive conventional ventilation protocols may offset the benefi ts of HFOV.
Congenital Diaphragmatic Hernia
Infants with congenital diaphragmatic hernia (CDH) commonly demonstrate complex pulmonary pathophysiology that derives from alveolar and pulmonary vascular hypoplasia [44]. The discovery that ventilator-induced lung injury is evident on histopathology specimens from these patients [45,46] has continued to focus attention in recent years on applying lung-protective strategies of mechanical ventilation to infants with CDH. As a result, numerous centers have reported case series of infants with CDH in whom the application of high-frequency oscillatory ventilation has been associated with an improvement in survival [47][48][49]. Several retrospective studies of HFOV in infants with CDH have also reported improved survival, with dramatic reductions in PaCO 2 and concurrent improvements in oxygenation [48,49]. At least one center using HFOV without the use of extracorporeal membrane oxygenation (ECMO) in infants with CDH has reported an overall survival rate comparable to that of infants who were supported with conventional ventilation and ECMO, although no survival benefi t specifically attributable to the use of HFOV was identifi ed [45]. Nonetheless, some of the best survival statistics for CDH are reported in one recent single-center historical experience in which these infants were managed with conventional ventilation. This report documented a signifi cant increase in survival from 44% to 69% among all infants with this condition during a period in which fl ow-triggered pressure support ventilation with permissive hypercapnia was used. Even higher survival rates were noted in those without coexisting heart disease [50].
Overall the role of HFOV in the management of infants with CDH is unclear. Despite its theoretical advantages in maintaining alveolar recruitment with minimal pressure cost, application of an open lung strategy using high-frequency ventilation in infants with CDH can lead to problems because aggressive recruitment in the setting of alveolar hypoplasia may precipitate acute increases in pulmonary vascular resistance with ensuing hemodynamic instability, air leak, or ongoing lung injury. Centers that report success with HFOV in the management of infants with CDH have found that it is important to limit Paw to ≤20 cm H 2 O in order to avoid alveolar overdistension [17]. In summary, infants with CDH may suffer excess lung injury if aggressively ventilated in an attempt to manipulate pulmonary vascular resistance, and the use of high-frequency ventilation to achieve specifi c short-term physiologic end-points may not offset this risk.
Persistent Pulmonary Hypertension of the Newborn
Several investigators have tested the hypothesis that sustained alveolar recruitment using HFOV could enhance the delivery of therapeutic gases to patients with respiratory failure. In one large multicenter trial, therapy with HFOV was coupled with inhaled nitric oxide (iNO) in an effort to identify the relative contribution of each therapy to outcomes in patients with persistent pulmonary hypertension of the newborn (PPHN). The investigators randomized 200 neonates with severe hypoxic respiratory failure and PPHN to receive therapy with HFOV alone or conventional ventilation combined with iNO [51]. Crossover as a result of treatment failure resulted in combined therapy with HFOV and iNO. The study concluded that signifi cant short-term improvements in PaO 2 occurred during combined treatment with HFOV and iNO among patients who failed either therapy alone [51]. This combination was particularly effective among patients with severe parenchymal disease attributable to RDS and meconium aspiration [51]. The suggestion that effi cacy of iNO may depend on the adequacy of alveolar recruitment is also supported by a retrospective analysis of data from children enrolled in a multicenter randomized trial of the use of iNO in the treatment of acute hypoxic respiratory failure [52].
Air Leak Syndromes
Given the expectation that satisfactory gas exchange occurs at a relatively low Paw during HFOV, it is not surprising that this therapy has been applied with success in severe air leak syndromes. In one early case report, 27 low-birth-weight infants (mean birth weight 1.2 kg) who developed pulmonary interstitial emphysema on conventional ventilation were transitioned to HFOV. All demonstrated early improvement on HFOV, and survivors demonstrated sustained improvements in oxygenation and ventilation, allowing for lower Paw and FiO 2 and ultimate resolution of air leak. Overall survival among nonseptic patients was 80% [53].
Bronchiolitis
Despite concerns that ventilation at high frequencies may exacerbate dynamic air trapping in diseases of the lower airways, HFOV has been used in the management of bronchiolitis caused by respiratory syncytial virus [54,55]. A couple of small case series have reported the successful application of HFOV using an open lung strategy in young infants with bronchiolitis [54,55]. Applying a relatively high Paw in this clinical context derives from the observation that lower Paw may promote worsening hyperinfl ation by creating choke points that impede expiratory fl ow [28]. The investigators used a frequency of 10-11 Hz and an I : E of 0.33, with initial pressure amplitude (ΔP) in the 35-50 cm H 2 O range. All patients survived without development of pneumothoraces attributable to HFOV and without need for ECMO [54,55].
Diffuse Alveolar Disease
Much of the data on the application of HFOV outside of the neonatal period comes from case series in which this therapy was applied to children with acute severe respiratory failure attributable to diffuse alveolar disease and/or air leak syndromes. In the early 1990s, two centers reported the use of HFOV in pediatric patients with these conditions who had been managed on conventional ventilation for varying periods of time [35,56]. In general, each concluded that HFOV may be applied safely as rescue therapy for pediatric patients with severe hypoxic lung injury and that its use is associated with improvement in physiologic end-points such as PaCO2 and the oxygenation index, OI = (Paw × FiO2 × 100)/PaO2. In addition, there were no reports of worsening air leak [35,56]. Each of these studies applied HFOV after recruiting the lung, but one of them [35] modified the HFOV protocol for patients with active air leak by dropping the Paw below the leak pressure following recruitment, raising the FiO2 as necessary to maintain adequate oxygenation, and tolerating hypercarbia as long as the arterial pH remained above 7.25.
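As a quick reference, the oxygenation index defined above is straightforward to compute. The unit conventions in the sketch below (Paw in cm H2O, FiO2 as a fraction, PaO2 in mm Hg) are the usual ones but are stated here as assumptions rather than taken from the chapter.

```python
# Oxygenation index as defined above: OI = (Paw x FiO2 x 100) / PaO2.
# Assumed units: Paw in cm H2O, FiO2 as a fraction (0-1), PaO2 in mm Hg.

def oxygenation_index(mean_airway_pressure_cmh2o, fio2_fraction, pao2_mmhg):
    """Compute the oxygenation index (dimensionless; higher values = worse oxygenation)."""
    return (mean_airway_pressure_cmh2o * fio2_fraction * 100.0) / pao2_mmhg

# Example: Paw 25 cm H2O, FiO2 1.0, PaO2 60 mm Hg gives an OI of about 42, the 24-hr
# threshold reported below to predict mortality (odds ratio 20.8).
print(round(oxygenation_index(25, 1.0, 60), 1))
```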
The fi rst and largest multicenter randomized trial evaluating the effect of HFOV on respiratory outcomes in pediatric patients is a crossover study that enrolled patients with diffuse alveolar disease and/or air leak [34]. The investigators randomized 70 patients to receive conventional ventilation using a strategy to limit peak inspiratory pressure, or HFOV at a frequency of 5-10 Hz, using an open lung strategy in which the lung volume at which optimal oxygenation occurred was defi ned (SaO 2 ≥90% and FiO 2 <0.6), and, in patients with air leak, airway pressure was then limited while accepting preferential increases in FiO 2 to achieve saturations of ≥85% and pH ≥7.25 until it resolved [34]. The study found no difference in survival or duration of mechanical ventilatory support between the two groups, but signifi cantly fewer patients randomized to receive HFOV remained dependent on supplemental oxygen at 30 days compared with those who were randomized to receive conventional ventilation, despite the use of signifi cantly higher Paw in the HFOV group [34]. The OI, used often in the pediatric literature to quantify oxygenation failure, was shown in this study to discriminate between survivors and nonsurvivors after 24 hours of therapy. In addition, the time at which changes in OI were noted to occur infl uenced the likelihood of survival: an OI ≥42 at 24 hr predicted mortality with an odds ratio of 20.8, sensitivity of 62%, and specifi city of 93% [34]. Post hoc analysis revealed that outcome benefi ts were not as great for those who crossed over to the HFOV arm [34], supporting the suggestion by numerous studies that HFOV may be most successful if employed early in the course of disease, using a strategy that emphasizes alveolar recruitment [9,37,[56][57][58].
Other Conditions
Experience with the use of HFOV for treatment of lower airways disease in older pediatric patients is limited. In one interesting case report, HFOV was successfully applied to a toddler with status asthmaticus [59]. The authors achieved optimal CO2 clearance using an open lung strategy with Paw 20 cm H2O, low frequency (6 Hz), I:E 0.33, and relatively high ΔP (65-75 cm H2O in the first 24 hr of therapy) without apparent air leak [59]; however, the use of HFOV in obstructive lung diseases must be considered thoughtfully.
High-Frequency Oscillatory Ventilation in the Adolescent and Adult
In recent years, the 3100B HFOV (SensorMedics, Yorba Linda, CA) has become available for use in larger pediatric patients and adult patients, addressing initial reports with large animals that adequate alveolar ventilation could not be achieved using the 3100A model [60,61]. The 3100B differs from the 3100A model by having a higher maximal bias fl ow, which allows for the delivery of higher mean airway pressures. The 3100B also has a more powerful electromagnet, which produces faster acceleration to maximal oscillatory pressure (ΔP) [33].
Early experiences with the use of HFOV on adolescent and adult patients with hypoxic respiratory failure are summarized in several case series [33,62]. In each, low-frequency (maximum 5-6 Hz) HFOV with a strategy of volume recruitment was used as rescue therapy for patients with ARDS who were failing conventional ventilation. These studies included patients with severe disease, including mean values for PaO 2 /FiO 2 in the 60 range at the time of enrollment [33,62]. Although neither study was powered to measure signifi cant differences in outcomes such as mortality, the majority of patients in the two studies demonstrated an improvement in short-term physiologic variables such as FiO 2 , PaO 2 /FiO 2 ratio, and OI [33,62]. Nonsurvivors in each of these studies were exposed to signifi cantly longer periods of conventional ventilation, suggesting once again the importance of instituting HFOV early in the course of disease.
A multicenter, prospective, randomized controlled trial designed to evaluate the safety and effectiveness of HFOV compared with conventional ventilation in the management of early ARDS (PaO 2 / FiO 2 ≤200 while on PEEP 10 cm H 2 O) in adult patients was published in 2002 [32]. Treatment strategies for both arms of the study included a volume recruitment strategy and were directed at achieving SaO 2 ≥88% on FiO 2 ≤60%. Patients in the conventional arm were managed in the pressure-control mode, targeting a delivered tidal volume of 6-10 mL/kg actual body weight, without specifi c attention to plateau pressures. Patients in the HFOV arm were ventilated at frequencies of 3-5 Hz and were transitioned back to conventional ventilation when FiO 2 ≤0.5 and Paw ≤24 cm H 2 O with SaO 2 ≥88%, and conventional ventilation was reinstituted at an equivalent Paw [32]. With regard to short-term physiologic measures, these investigators also reported a signifi cantly higher Paw among patients on HFOV and signifi cant early increases while on HFOV in PaO 2 /FiO 2 [32]. Poststudy multivariate analysis also revealed that the trend in OI was the most signifi cant post-treatment predictor of survival regardless of treatment group-survivors showed a signifi cant improvement over the fi rst 72 hr of the study period and nonsurvivors did not [32]. Although the OI is not a measure traditionally reported in the adult literature, it has been reported by others as predictive of mortality in adult ARDS [62].
This study was not powered to evaluate differences in mortality between the two groups, but there was a clear trend toward increased 30-day mortality among the patients randomized to receive conventional ventilation versus those who received HFOV (52% vs. 37%) [32]. At the moment, it is not known if HFOV using low frequencies is as protective as ventilating at a higher frequency range, such as what has been used with success in small animals and human infants. It is important to understand that laboratory experiments using the 3100B HFOV have demonstrated that tidal volumes approaching those used in conventional ventilation are produced under conditions of low-frequency and high-pressure amplitude (ΔP) [63].
Adjuncts: Noninvasive Assessment of Lung Volume
One of the diffi culties facing intensive care clinicians is that evaluation of the adequacy of recruitment after initiating HFOV and in response to changes in ventilator settings must be guided by indirect measures such as peripheral oxygen saturations, fractional inspired oxygen concentration, blood gas tensions, anteroposterior chest radiographs, and a visual assessment of chest wall vibration. Global measures of alveolar plateau pressure, tidal volume, and pulmonary mechanics that are available from breath to breath when using conventional ventilation are not provided on the high-frequency ventilator console, and the operator must often use intuition when adjusting ventilator settings, risking sudden and clinically signifi cant derecruitment or alveolar overdistension. In recent years, respiratory impedance plethysmography (RIP) and electrical impedance tomography (EIT) have emerged as two promising means by which pulmonary mechanics and alveolar recruitment can be assessed noninvasively at the bedside during HFOV.
Respiratory impedance plethysmography is a monitoring technique that is capable of quantifying global lung volume by relating it to measurable changes in the cross-sectional area of the chest wall and the abdominal compartment. In RIP, two elastic bands with Teflon-coated wires embedded in a zigzag distribution along their circumference are applied to the patient. One is typically placed around the chest, 3 cm above the xiphoid process, and the other is typically placed around the abdomen. Each of these two bands produces an independent signal, and the sum of the two signals is calibrated against a known volume of gas. Use of this technique in association with HFOV has been validated in animal models [64,65]. In a large animal model of acute lung injury managed with HFOV, Brazelton and colleagues have demonstrated that RIP-derived lung volumes correlated well with those that were obtained using a supersyringe (r² = 0.78) and that RIP is capable of tracking global changes in lung volume and creating a pressure-volume curve during HFOV [64]. With a newborn animal model, Weber and colleagues were able to demonstrate that RIP is capable of detecting relative changes in pulmonary compliance that were induced by saline lavage [65]. Experience with RIP in human subjects is limited to investigations of its application in conventional phasic ventilation. One study with adult patients [66] and another with pediatric patients [67] have utilized RIP to quantify the relative degree of derecruitment that is associated with closed, in-line techniques for endotracheal tube suctioning compared with open suctioning techniques. Each study was able to demonstrate a potential role for RIP in tracking global changes in lung volume at the bedside.
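The calibration step described above, combining the two band signals and scaling them against a known gas volume, amounts to fitting a simple linear model. The sketch below shows that idea with synthetic numbers; the linear form and all values are illustrative assumptions, and real RIP calibration procedures are more involved.

```python
# A minimal sketch of RIP calibration: ribcage and abdominal band signals are combined
# and scaled against known calibration volumes by least squares. Synthetic data only.
import numpy as np

def calibrate_rip(ribcage, abdomen, known_volume_ml):
    """Fit volume ~ a*ribcage + b*abdomen + c and return the coefficients (a, b, c)."""
    X = np.column_stack([ribcage, abdomen, np.ones_like(ribcage)])
    coeffs, *_ = np.linalg.lstsq(X, known_volume_ml, rcond=None)
    return coeffs

def rip_volume(ribcage, abdomen, coeffs):
    """Estimate lung volume change from band signals using the fitted coefficients."""
    a, b, c = coeffs
    return a * ribcage + b * abdomen + c

# Hypothetical calibration data (arbitrary band units vs. a known injected gas volume in mL).
rc = np.array([0.1, 0.4, 0.7, 1.0])
ab = np.array([0.2, 0.3, 0.5, 0.6])
vol = np.array([50.0, 120.0, 190.0, 250.0])
coeffs = calibrate_rip(rc, ab, vol)
print(rip_volume(0.5, 0.4, coeffs))
```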
Applying HFOV in a way that harmonizes with what computed tomography (CT) has revealed about the heterogeneity of parenchymal involvement in ARDS [68] will ultimately depend on developing noninvasive bedside technologies that are capable of identifying regional changes in lung volume and pulmonary mechanics. Computed tomography images of the lung in ARDS patients have demonstrated that, during a prolonged inspiratory maneuver, alveolar recruitment occurs all the way to total lung capacity, according to the specifi c time constants of individual lung units (Figure 9.5; see also Figure 9.2) [68,69]. Therefore, ideal settings on HFOV would be those that achieve ventilation above the lower infl ection point on the regional pressure-volume curves for the majority of lung units, while avoiding overdistension in the most compliant alveoli. Electrical impedance tomography (EIT) is one technology that may be best suited to detecting regional heterogeneity at the bedside of the patient with diffuse alveolar disease.
In EIT, a series of electrodes is applied circumferentially to the patient's chest. The electrodes sequentially emit a small amount of electrical current that is received and processed by the other electrodes in the array. Receiving electrodes determine a local change in impedance based on the voltage differential calculated between the transmitting electrode and the receiving electrode. Well-aerated areas, which conduct current poorly, are associated with high impedance, whereas fluid and solid phases (including atelectatic or consolidated lung) would be associated with lower impedance [70]. The impedance values that are generated are referenced to a baseline measurement and represent relative rather than absolute changes in electrical properties [69]. This process creates a tomogram that depicts the distribution of tissue electrical properties in a cross-sectional image (Figure 9.6), and the thickness of the slice of thorax that is represented in the image varies between approximately 15 and 20 cm, depending on the circumference of the chest [69,71]. Of the presently available EIT systems, the Goe MF II (University of Goettingen, Germany; distributed by Viasys, USA) seems to have the most favorable signal-to-noise ratio and is also capable of dynamic measurements at low lung volumes [69,72]. This system scans at a rate of 13-44 scans/sec (Hz), generating up to 44 cross-sectional images per second [69].
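The "relative rather than absolute" bookkeeping mentioned above is simply a pixel-by-pixel normalization of each reconstructed frame to a baseline frame. The sketch below illustrates that step only; the tiny arrays are synthetic placeholders, and real EIT systems reconstruct much larger tomograms from the raw electrode voltages.

```python
# Sketch of the relative-impedance-change step: each reconstructed EIT frame is expressed
# relative to a baseline frame, pixel by pixel. Values here are synthetic placeholders.
import numpy as np

def relative_impedance_change(frame, baseline):
    """Return (Z - Z_baseline) / Z_baseline for each pixel of a reconstructed frame."""
    return (frame - baseline) / baseline

baseline = np.array([[1.00, 1.10],
                     [0.95, 1.05]])      # reference impedance distribution (arbitrary units)
inspiration = np.array([[1.20, 1.35],
                        [0.97, 1.30]])   # aeration raises regional impedance

print(relative_impedance_change(inspiration, baseline))   # positive values = gained air
```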
In the laboratory, EIT has been used in conjunction with both conventional ventilation and HFOV to describe regional lung characteristics. Investigations using conventional ventilation in large animal models of lung injury have validated EIT against supersyringe methods for the determination of regional pressure-volume (or pressure-impedance) curves [69,73] and have demonstrated good correlation between EIT-derived regional changes in lung impedance and CT-derived regional variations in aeration [69,74]. Using EIT to track regional lung mechanics in a large animal model of acute lung injury managed with HFOV, van Genderingen and colleagues were able to demonstrate that regional pressure-volume curves constructed using maneuvers on HFOV show less variation along the gravitational axis than pressure-volume curves that are obtained using a supersyringe method, suggesting that recruitment is more uniformly distributed between dependent and nondependent areas during HFOV [75].
Published experience with EIT in human subjects with acute lung injury or ARDS has correlated regional impedance changes induced by slow infl ation maneuvers using the DAS-01P EIT system (Sheffi eld, UK) with regional lung density measurements obtained by CT scanning [76]. Most recently, a group of investigators at Children's Hospital Boston has utilized EIT to detect regional changes in lung volume during a standardized suctioning maneuver in children with acute lung injury or ARDS who were supported on HFOV. These data demonstrate considerable regional heterogeneity in volume changes during a derecruitment maneuver (Figure 9.7) [77].
It is tempting to expect that EIT will soon facilitate the development of strategic HFOV protocols. Theoretically, this technology can create opportunities for therapeutic intervention by dynamically tracking the regional differences in alveolar recruitment that make portions of the lung highly susceptible to ventilator-induced lung injury (VILI). However, there are important limitations to the presently available technology. For instance, substantial bias may be introduced into the EIT image because of the tendency for electrical current to follow the path of lowest impedance rather than the path of shortest distance between the transmitting and receiving electrodes [70]. This phenomenon may account in large part for the variation between EIT measures of regional lung impedance and CT measures of regional lung density [76]. In addition, because EIT measures impedance changes that are relative to baseline values, changes in baseline regional intrathoracic impedance resulting from sources other than alterations in gas volume and distribution could lead to errors in the interpretation of EIT-derived data. Despite these limitations, several investigators have reported that EIT reliably detects regional alterations in pulmonary blood fl ow [78] and extravascular lung water [79]. In summary, identifying a useful role for EIT as an adjunct to HFOV at the bedside will depend on additional technical modifi cations to make it suitable for reliably detecting very small regional tidal volumes at high frequency in the electrically hostile environment of the intensive care unit.
Weaning
Numerous studies have suggested that limiting exposure to potentially injurious strategies on conventional ventilation may enhance outcome benefi ts attributable to HFOV among patients with severe lung injury. Large trials in the neonatal and pediatric populations have demonstrated favorable outcomes when HFOV is applied early in disease, and it seems logical to expect that timing the transition back to conventional ventilation may be of substantial importance as well.
Weaning a patient from HFOV may be considered when the clinician determines that gas exchange and pulmonary mechanics are suitable for transition to acceptable settings on conventional ventilation. Some investigators have reported successfully extubating infants directly from HFOV [40,41,57], but this is diffi cult to accomplish in the older pediatric and adult patient, who may be less likely to tolerate a degree of sedation that would allow spontaneous respiration while on HFOV and in whom spontaneous breathing may signifi cantly depressurize the circuit, resulting in recurrent alveolar derecruitment. In general, when clinical improvement occurs to the point that Paw may be reduced to ≤20 cm H 2 O, FiO2 is reduced to ≤0.4, and the patient tolerates endotracheal suctioning without signifi cant desaturation, it is appropriate to undertake a more detailed evaluation of the patient's response to phasic ventilation provided by conventional means [17]. This may be done by hand ventilating (with the aid of an in-line pneumotachometer, if necessary) while noting the pressures, tidal volume, and inspiratory to expiratory time ratio necessary to sustain satisfactory oxygen saturation. It is common to fi nd on transition to conventional ventilation that the patient will demonstrate satisfactory gas exchange on a mean airway pressure several cm H 2 O below the last Paw on HFOV.
Conclusion
Despite compelling laboratory data supporting a physiologic rationale for HFOV in the treatment of diffuse alveolar disease, evidence of its superiority to conventional ventilation with regard to clinically important outcomes beyond the neonatal period is scant. The difficulty in proving significant clinical outcome benefit in pediatric and adult patients may be due in large part to the diverse potential etiologies of respiratory failure in these populations as well as a wide range of approaches to their medical management applied over a relatively long period of mechanical ventilatory support. It is also possible that low-frequency HFOV as traditionally used for larger patients may not be as protective as the higher frequency strategies that have been used with success in small animal models and human infants.
High-frequency oscillatory ventilation remains a therapeutic option in the intensive care unit that is worthy of further study because it is a safe and practical way to provide a "low stretch" form of ventilation that is less likely to produce VILI [4,6-9]. Applying this concept with greater precision in the clinical arena will depend on developing bedside technologies capable of both identifying the critical opening pressure in a majority of lung units and tracking regional changes in lung volume that follow changes in HFOV settings. Electrical impedance tomography is a promising technology that may ultimately be incorporated into the design of future trials that are powered to evaluate the benefits of specific HFOV protocols. | 2018-04-03T03:44:09.244Z | 2008-11-15T00:00:00.000 | {
"year": 2008,
"sha1": "44bf68516efdfbab8b4464ed840a0dafe898df12",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c7eafbfe9c034c6099fb8aaff2bb07fa635163fe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249002433 | pes2o/s2orc | v3-fos-license | Progressive Dysphagia in Patient With Cervical Plate Complicated With Posterior Pharyngeal Wall Erosion
A 58-year-old male patient with a history of Parkinson's disease and solitary cervical spinal sarcoma, who had undergone corpectomy, fusion of C3-C6 with cervical fixation plate placement, and stereotactic body radiation therapy, presented 18 months following surgery with dysphagia accompanied by weakness and diplopia. The initial workup with cervical magnetic resonance imaging (MRI) revealed aerodigestive tract soft tissue enhancement. Dysphagia progressed during hospitalization, and the patient was intubated due to aspiration pneumonia and respiratory failure. Further evaluation with esophagogastroduodenoscopy (EGD) revealed erosion of the posterior pharyngeal wall and upper cervical esophagus and the presence of the cervical fixation plate in the hypopharynx.
Introduction
Esophageal erosion following anterior cervical spine surgery is rare and reported to be between 0.02 and 1.49%, and it has a mortality rate close to 6 percent [1]. Although most esophageal erosions occur intraoperative or immediately following surgical intervention, few cases have been reported with a delayed presentation [2]. Diagnosis of esophageal perforation can be made with cervical imaging studies, including X-ray, computed tomography (CT) scan, and magnetic resonance imaging (MRI). However, negative imaging does not rule out esophageal injury, and further evaluation with surgical exploration is warranted in the presence of high clinical suspicion.
Case Presentation
A 58-year-old male patient with a past medical history significant for Parkinson's disease and solitary cervical spinal sarcoma, who had undergone corpectomy, fusion of C3-C6 with cervical fixation plate placement, and stereotactic body radiation therapy, presented with a three-week history of dysphagia with concomitant weakness and diplopia. His symptoms started eighteen months post-operatively. On presentation, the patient was febrile (temperature of 103°F), with a blood pressure of 112/65 mmHg, heart rate of 98 beats/min, and respiratory rate of 16 per minute. Initial workup revealed leukocytosis (WBC: 11,500), with a normal chest X-ray and urinalysis. Further workup was negative for myasthenia gravis (acetylcholine receptor binding antibody less than 0.3 nmol/L). Cervical magnetic resonance imaging (MRI) showed the presence of a metallic cervical plate and absence of the expected soft tissue along the posterior wall, suggestive of hardware erosion, without evidence of fluid collection or spinal cord compression (Figure 1). However, the evaluation was limited by magnetic susceptibility artifacts from the fusion hardware.
FIGURE 1: Neck MRI.
The absence of expected soft tissue with the posterior wall is suggestive of hardware erosion (arrowhead-1A). There is an abnormal thickening of the laryngeal soft tissues, with medialization of the right vocal cord and loss of expected parapharyngeal fat (arrow-1B).
Dysphagia progressed during hospitalization and was complicated by an episode of aspiration pneumonia during ingestion of medication, which progressed to respiratory failure requiring intubation and mechanical ventilation. The patient received empirical piperacillin-tazobactam; the sputum culture was positive for Pseudomonas aeruginosa, while the blood culture was negative. The patient subsequently underwent esophagogastroduodenoscopy (EGD) for further evaluation and percutaneous endoscopic gastrostomy (PEG) placement in the body of the stomach (due to dysphagia and complicated aspiration pneumonia). EGD revealed erosion of the posterior pharyngeal wall and upper cervical esophagus and the presence of a cervical fixation plate, screws, and corpectomy fusion cage in the hypopharynx (Figure 2).
FIGURE 2: EGD, and PEG placement.
Posterior pharyngeal wall and upper cervical esophageal erosion with the presence of cervical fixation plate (white arrow), screws (white arrowhead), and corpectomy fusion cage (white asterisk) in the hypopharynx (Figure 2A). PEG tube placement ( Figure 2B) and pyloric valve (black arrow, Figure 2C).
PEG: percutaneous endoscopic gastrostomy; EGD: Esophagogastroduodenoscopy
The orthopedic surgery and otolaryngology-head and neck surgery services were consulted. The patient underwent surgical exploration of the cervical spine. The anterior cervical fixation plate was removed with flap reconstruction, and the cervical dural tear was repaired with a resolution of his symptoms (Figure 3). The patient was discharged to a rehabilitation facility.
FIGURE 3: CT neck with contrast.
Posterior fusion device with a facet and pedicle screws at C2-C5, C7, and T1 with anterior fusion plate and screw construct extending from the C3 level through the C6 level ( Figure 3A). Post-surgical removal of the anterior fusion plate and screws ( Figure 3B).
Discussion
The esophagus lies directly anterior to the cervical spine, and it is vulnerable to injury post-operatively. The adventitia, the outermost esophageal layer, protects the underlying layers, including the muscular layer (longitudinal and circular) and the submucosal and mucosal layers.
During anterior cervical surgery, aggressive or improper retraction of the esophagus may lead to esophageal erosion, a challenging clinical problem. Although most esophageal injuries occur intra-operative or immediately following surgical intervention, few cases are reported with a delayed presentation [2,3]. Symptoms include dysphagia and Mackler's triad (subcutaneous emphysema, chest pain, and vomiting) in the setting of esophageal perforation [4].
Early diagnosis and intervention reduce morbidity and mortality, so any intraoperative suspicion should warrant immediate investigation. Diagnosis usually requires direct visualization or imaging studies, including endoscopy, CT-scan, MRI, or contrast swallow studies [5,6]. Treatment modalities include nonsurgical, conservative management, and primary closure with flap placement [3]. Brinster et al. reported that interval time from perforation to repair of less than 24 hours had been associated with a significant reduction in morbidity and mortality [7].
Surgical outcomes also depend on pre-operative comorbidity. Bhatia et al. reported that pulmonary comorbidity, development of sepsis, and respiratory failure requiring mechanical ventilation at presentation significantly impact overall outcome [8]. They also reported that the site of the esophageal perforation is an essential factor determining the severity of disease, with cervical esophageal perforations tending to incite less of a systemic inflammatory response than thoracic and abdominal perforations [8].
Conclusions
Esophageal injuries following anterior cervical spine surgery are a potential and rare complication reported in the literature, usually detected during or acutely following surgery. Our patient presented with progressive dysphagia 18 months after anterior cervical surgery. Interestingly, he was asymptomatic for months following the surgery, and dysphagia was the initial complaint that warranted further evaluations. High clinical suspicion is required to detect esophageal injuries and warrant early intervention and correction.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-05-24T15:03:14.868Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "ff9240b1b4a1676c5cecc60e71d12a93d6e2f37a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0965a70c90ed999187982d17b28c85ae2802e41c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
659704 | pes2o/s2orc | v3-fos-license | Salmon-derived nitrogen in terrestrial invertebrates from coniferous forests of the Pacific Northwest
Background Bi-directional flow of nutrients between marine and terrestrial ecosystems can provide essential resources that structure communities in transitional habitats. On the Pacific coast of North America, anadromous salmon (Oncorhynchus spp.) constitute a dominant nutrient subsidy to aquatic habitats and riparian vegetation, although the contribution to terrestrial habitats is not well established. We use a dual isotope approach of δ15N and δ13C to test for the contribution of salmon nutrients to multiple trophic levels of litter-based terrestrial invertebrates below and above waterfalls that act as a barrier to salmon migration on two watersheds in coastal British Columbia. Results Invertebrates varied predictably in δ15N with enrichment of 3–8‰ below the falls compared with above the falls in all trophic groups on both watersheds. We observed increasing δ15N levels in our invertebrate groups with increasing consumption of dietary protein. Invertebrates varied in δ13C but did not always vary predictably with trophic level or habitat. From 19.4 to 71.5% of invertebrate total nitrogen was originally derived from salmon depending on taxa, watershed, and degree of fractionation from the source. Conclusions Enrichment of δ15N in the invertebrate community below the falls in conjunction with the absence of δ13C enrichment suggests that enrichment in δ15N occurs primarily through salmon-derived nitrogen subsidies to litter, soil and vegetation N pools rather than from direct consumption of salmon tissue or salmon tissue consumers. Salmon nutrient subsidies to terrestrial habitats may result in shifts in invertebrate community structure, with subsequent implications for higher vertebrate consumers, particularly the passerines.
Background
Nutrient cycling between geographically distinctive ecosystems can produce zones of major productivity and biodiversity. It is generally recognized that downstream transport of terrestrial nutrients into marine estuaries produces one of the world's most productive habitats, but recent investigations suggest that the reverse flow, from marine to terrestrial habitats, may also be exceptionally important in structuring highly diverse coastal ecosystems [1].
Every year in the Pacific Northwest anadromous salmon (Oncorhynchus spp.) transport marine-derived nutrients from the North Pacific Ocean into coastal ecosystems. This salmon nutrient subsidy extends from aquatic habitats into riparian forests, and is thought to be ecologically equivalent to the migration of the wildebeest on the Serengeti [2]. Stable isotope studies in aquatic and terrestrial ecosystems reveal that salmon contribute highly to yearly protein intake for many vertebrates [1,[3][4][5] and invertebrates [6,7], and provide substantial nutrient inputs to limnetic food webs [6,[8][9][10], and riparian vegetation [7,[11][12][13], emphasizing the ecological magnitude of this keystone resource for coastal communities.
Transfer of salmon nutrients into terrestrial habitats occurs primarily through bear (Ursus spp.) mediated salmon carcass transfer [14][15][16] and urine deposition [12], but can also occur as a result of flooding events [11], hyporheic zone transfer [5], or the activities of other scavengers and predators [3,5]. Since nitrogen is often limiting in coastal temperate rainforests of the Pacific Northwest [17], this salmon nutrient pulse to riparian forests can provide a significant proportion of plant total nitrogen [11][12][13], and is thought to increase riparian primary productivity, vegetation and litter quality, and soil nutrient capital [13].
Studies in forest ecosystems adjacent to salmon streams have so far been limited to vegetational use of salmon nutrients and have ignored other potential food web beneficiaries, particularly terrestrial invertebrates. Macroinvertebrates of coastal coniferous forests of the Pacific Northwest, including insects, arachnids, myriapods, annelid worms, isopods and gastropods, comprise the base of the myriad of nutrient and energy pathways from primary producers through to higher vertebrate consumers, and are highly important in many ecosystem processes including herbivory, litter decomposition, and nutrient cycling [18][19][20].
We use a dual isotope approach of δ 15 N and δ 13 C to assess: a) the extent of utilization of salmon-derived nitrogen and carbon by various trophic groups in a terrestrial invertebrate forest litter community and b) the mechanism of salmon nutrient utilization by invertebrates; either directly through salmon tissue consumption, or indirectly through utilization of salmon nitrogen sequestered into riparian vegetation or soil N pools. We compare the cycling of nutrients above and below waterfalls as a means of examining ecological discontinuities that may occur in litter-based macro-invertebrates between salmon and salmon-free forest sites, and speculate on possible implications to invertebrate community structure and higher vertebrate consumers. We also discuss components of invertebrate isotopic variability as it relates to microspatial variability in δ 15 N, invertebrate trophic structure, and invertebrate niche.
Results
Invertebrate trophic groups varied predictably with respect to δ 15 N. The nested ANOVA analysis demonstrated that the majority of variance in δ 15 N was due to falls within watersheds (F = 9.191; p = 0.031; R 2 = 0.819) and taxonomic group within all other factors (F = 13.71; p < 0.001; R 2 = 0.689). Variation in δ 15 N that occurred between watersheds or distance of collection from the stream contributed little to total variance and was insignificant in the model (See methods for violations). Invertebrates were enriched by 3-8‰ along salmon spawning reaches compared to similar groups collected above the falls, and showed a gradient of increasing values with increased trophic level at both salmon and non-salmon sites ( Figure 1). There were highly significant differences in δ 15 N (t-tests: p < 0.01) above and below waterfalls for all trophic groups at both watersheds. Multiple comparison tests (Tukey's post hoc) revealed distinct trophic separation in δ 15 N between at least two invertebrate groups depending on site of collection (Table 1). Millipede detritivores had higher δ 15 N values than root feeding weevils on all sites but only on the Clatse above the falls was this trend significant. Carabid beetles demonstrated higher δ 15 N values than millipedes at all sites with significant differences on the Clatse River below and above the falls and on the Neekas River above the falls. Spider predators were significantly more enriched than carabid beetles on the Neekas River on both salmon and non-salmon sites, but demonstrated only marginally higher δ 15 N values than these beetles on the Clatse River. Carabid beetle omnivores and spider predators demonstrated significantly higher variance in δ 15 N below the falls than above on both watersheds (Carabidae Clatse: F 14,6 = 14.61, p < 0.005; Carabidae Neekas: F 21,6 = 21.94, p < 0.001; Araneae Clatse: F 18,16 = 5.41, p < 0.002; Araneae Neekas: F 17,11 = 4.94, p < 0.02) (F-ratio tests).
Invertebrate groups varied in δ 13 C but did not always vary predictably with trophic level or habitat ( Figure 2). Nested ANOVA analysis using δ 13 C indicated significant variability only in taxonomic groupings (F = 11.801; p < 0.001; R 2 = 0.657), with all other levels insignificant. Relatively high δ 13 C values were observed in millipedes from both watersheds in salmon and non-salmon sites, most likely a reflection of inorganic carbon content. Multiple comparisons revealed trophic separation for spiders over carabid beetles in all sites (Table 2). Spiders were enriched over root feeders on the Clatse River above the falls and on the Neekas below the falls. Carabids and root feeders did not differ in their δ 13 C values. Carabid beetles collected on the Neekas River were the only group to demonstrate isotopic enrichment below the falls (p = 0.042). Spiders on the Clatse River were found to be higher in δ 13 C above the falls than below (p= 0.016).
We examined isotopic levels in relation to distance upstream from the ocean. At Clatse River, δ 15 N declined with increased distance upstream with the lowest levels occurring above the waterfalls. However, at Neekas River, δ 15 N levels were high but variable throughout the stream channel below the waterfall, above which there was a striking reduction in δ 15 N over short distance delineated by the geological barrier to salmon ( Figure 3).
In order to assess niche differences within and among groups, we examined the relationships between δ 15 N and δ 13 C. Below the falls, there were significant positive correlations between δ 15 N and δ 13 C in spiders on the Clatse (R = 0.562; p = 0.012) and on the Neekas (R = 0.741; p = 0.001), and in carabid beetles on the Clatse (R = 0.682; p = 0.005) and on the Neekas (R = 0.538; p = 0.010) ( Figure 4). None of the remaining correlations were significant in groups collected below the falls, and there were no significant correlations between δ 15 N and δ 13 C for any group collected above the falls.
We estimated contribution of marine-derived nitrogen to the total nitrogen content among invertebrate groups on both watersheds (Table 3). At Clatse River, assuming no fractionation, values ranged from 19% in millipedes to 49% in weevils (with fractionation: 28% in millipedes to 71% in weevils). At Neekas River, assuming no fractionation, values ranged from 35% in ground beetles to 51% in spiders (with fractionation 47% in ground beetles to 70% in spiders).
Discussion
We demonstrate isotopic evidence for substantive incorporation of salmon-derived nitrogen into multiple trophic levels of terrestrial litter-based invertebrates from two salmon-bearing watersheds. Enrichment in δ15N in terrestrial invertebrates occurs through two possible pathways: 1) direct consumption of salmon tissue and/or predation on direct salmon consumers such as larval blowflies; or 2) indirect enrichment through δ15N enriched soil and vegetation N pools. Here, the use of the dual isotope method provides insight into the mechanism of salmon nitrogen utilization by terrestrial invertebrates. Direct consumption of salmon, with approximate δ15N and δ13C values of +11.2‰ [21] and -21‰ [9] respectively, would lead to enriched signatures of δ15N and δ13C in animal tissues. For example, consumption of salmon carcasses by larval blowflies (Calliphoridae) has been documented through the dual isotope method [7]. However, terrestrially derived carbon from C3 photosynthesis dominates δ13C pools in coniferous forest soils, and salmon-derived carbon is assumed to contribute little to total carbon in litter and soil. Indirect utilization of salmon-derived nitrogen by animals has been observed previously in small mammals [11], whereby individuals were enriched in δ15N but not δ13C. Because we found little difference in δ13C between trophic groups collected above versus below the waterfalls, the primary mechanism of δ15N enrichment appears to be indirect, through salmon-derived nitrogen subsidies to soil and vegetation N pools.
δ15N values of forest nitrogen pools are influenced by the isotopic values of nitrogen inputs and outputs and by fractionation that occurs during nitrogen transformations within ecosystems [22]. Nitrogen inputs to typical Pacific coast forest ecosystems include atmospheric deposition and biological nitrogen fixation. In the case of forests adjacent to salmon streams, there is substantial evidence that marine-derived nitrogen from salmon is transferred to forest ecosystems through predator activity [11,12,[14][15][16], flooding events [11] and hyporheic zone transfer [5], and is incorporated into soil N pools through uptake by vegetation [6,7,[11][12][13].
Vegetation δ 15 N values tend to parallel those in the soil and litter across multiple sites and are typically slightly depleted in δ 15 N relative to the soil source [22,23]. Recent estimates for the contribution of marine-derived nitrogen from salmon in riparian ecosystems to total plant nitrogen have ranged from 15.5-24% [6,12,13]. These values may be conservative as they are based on the assumption of no plant fractionation from the original source nitrogen. In the case of high nitrogen inputs from salmon, vegetation may preferentially assimilate isotopically light nitrogen (even though it is also originally from salmon). However, in nutrient rich habitats fractionation from the source is potentially not as marked compared with nutrient poor soils [23,24], making %MDN estimates challenging. %MDN estimates from hemlock (Mathewson & Reimchen unpublished data), possibly constituting a large percentage of litter biomass, vary from 23-34% on the Clatse River and 49-66% on the Neekas River depending on degree of fractionation from the source. These esti- [25,26]. We suspect that because vegetation and all invertebrates collected below the waterfall barrier to salmon migration are enriched in δ 15 N, that soil and litter δ 15 N are also enriched at these sites. Our data demonstrates that terrestrial invertebrates exhibit a substantial shift in δ 15 N over a sharp ecological discontinuity (ca. 250 m) in the source of nitrogen to the forest community, as a consequence of a distinct salmon-derived nitrogen subsidy to litter, soil and vegetation N pools. We estimate that %MDN to multiple trophic levels of litter-based invertebrates ranges from 19-71% on the Clatse River and 34-70% on the Neekas River depending on trophic grouping, and on the extent of fractionation from the original source nitrogen. These values are similar to %MDN estimates of hemlock and indicate that salmon-derived nitrogen is cycled from primary producers through multiple trophic levels of litter-based terrestrial invertebrates.
Grouping all invertebrate samples over the entire 100 m riparian zone may have reduced the extent of statistical differences for δ15N in our comparisons above and below the falls. This occurs because of a potential isotopic gradient of decreasing δ15N from salmon in terrestrial vegetation with increasing distance from the stream over a relatively small scale (< 100 meters) [11][12][13]. Nevertheless, our %MDN estimates are higher than those of any other study investigating salmon nutrient transfer into terrestrial ecosystems and emphasize the magnitude of the discontinuity that occurs across the waterfall barrier to salmon migration in these watersheds.
These %MDN estimates assume salmon tissue δ15N as the marine end-member in the model. However, other factors can influence these estimates. Vertebrate urine, particularly from bears (Ursus spp.) [12], faeces and guano deposition may contribute substantially to nitrogen inputs during the salmon spawning season. Despite the fact that these inputs are ultimately from salmon tissue consumption, high fractionation during multiple transformation steps prior to nitrogen availability, such as ammonia volatilization [22], may lead to unknown shifts in the δ15N levels of the source nitrogen. This may increase the microspatial variability in δ15N in litter, soil, and vegetation, and subsequently invertebrates, along the salmon spawning channel.
Variation in δ 15 N in carabid beetles and spiders collected below the waterfall barrier was substantially greater than above the falls. It was only marginally higher (non-significant) in root feeding weevils and millipede detritivores, possibly due to low sample sizes. This may indicate higher microspatial variability in δ 15 N in soil, litter and vegetation N pools, increased range of prey resources below the falls, and/ or invertebrate dispersal from other habitats into the zone of substantial salmon transfer.
We detected variation in δ15N at different stream reaches, most likely as a function of the abundance and species of spawning salmon. On the Clatse River, δ15N values decreased with increasing distance upstream. Potentially, this might result from a gradient in marine subsidies other than salmon as a function of distance from the estuary [27]. However, this trend was not observed on the Neekas River, where δ15N values remained high, even at 2 km upstream. The difference between these two watersheds in the distribution of marine-derived nitrogen appears to be due to topography and the species and distribution of spawning salmon. The Clatse River is pink salmon dominated, with the majority of spawning, and subsequent predator activity, occurring in the lower 500 meters of the spawning channel [28] (personal observations). Above 600 meters the stream narrows and the riparian profile becomes increasingly steep on both sides. The Neekas River has high-density chum spawning to the base of the falls, with high salmon nutrient transfer and predator activity occurring in this region [28] (personal observations). Chum salmon contain twice the nitrogen biomass of pink salmon, and this may partly explain the higher %MDN estimates obtained on the Neekas River compared to the Clatse. The distribution of δ15N in these terrestrial invertebrate groups thus appears to be directly correlated with salmon spawning density and biomass, and subsequent predator activity, a pattern that has been observed for δ15N in ground beetles (Carabidae) occurring between watersheds on Vancouver Island [7].
Differences in the variance of isotopic signatures within a population provide insight as to the range of diet available to the individual. For example, this has been found in stable isotope studies of marine mammals and chimpanzees [29,30]. In the case of carabid beetles and spiders, high variability in δ 15 N along the salmon-spawning channel compared to above the falls, may indicate higher prey variability in this region. Variance in isotopic signatures can also indicate mobility between habitats [31,32]. Carabid beetles, particularly on the Neekas River, exhibited high variance in signatures. The carabid beetle species collected, although brachypterous, can move freely between habitats [33], and captured individuals may not have obtained their nutrition along the salmon spawning channel for their entire life history.
Correlations between δ 15 N and δ 13 C values provide further resolution into individual niche variability. We observed a significant positive correlation between δ 15 N and δ 13 C values in carabid beetles and spiders below waterfalls, with access to salmon nutrients, but not above falls. Both groups feed on a diverse array of prey including primary and secondary consumers, and in the case of the ground beetles, vegetative matter as well. Individuals within each group that fed at a higher average trophic level would be expected to exhibit more enrichment for δ 15 N and δ 13 C [34,35]. Alternatively, individuals that fed on salmon directly or on prey that fed on salmon would also demonstrate isotopic enrichment in both isotopes [3][4][5][6][7]. Positive relationships in δ 15 N and δ 13 C below the falls and the absence of that relationship above the falls hints that direct consumption of salmon or salmon consumers below the falls may be a factor for some individuals of these species. However, increased range of food resources below the falls would also be consistent with this finding. Furthermore, smaller sample sizes above the falls may have reduced our ability to detect relationships. For the majority of the spiders and ground beetles, direct uptake of the marine isotopes most likely contributes only a minor component to yearly protein intake, as uptake of marine-derived nitrogen occurs by indirect means. The use of dual isotope model becomes most relevant when investigating terrestrial organisms that use salmon protein as a major contributor to diet. This is the case for several terrestrial necrophages including flies (Diptera: Calliphoridae, Scathophagidae, Anthomyiidae), and beetles (Coleoptera: Silphidae, Leiodidae, Staphylinidae) [7] (Hocking unpublished data).
Animals are isotopically enriched in δ15N and δ13C relative to their dietary intake as a consequence of preferential excretion of the lighter isotope in metabolism [36], and this allows insight into relative trophic position within a community. Isotopic enrichment varies widely by body tissue, but there is an approximate stepwise enrichment of 3.4‰ with each trophic level. Isotopic studies of soil invertebrate communities [25] suggest that there are on average two trophic levels within litter-based invertebrate communities. We also find general evidence for two general trophic levels within the litter-based community at Clatse and Neekas Rivers, usually consisting of: 1) root feeders and detritivores (weevils and millipedes) as primary consumers of plant material, and 2) predators (carabid beetles and spiders) that feed on these and other presumed plant feeders within the litter community. Our data, however, provide substantial evidence for a gradient in trophic level among our litter-based invertebrates rather than two distinct trophic groupings, a finding that coincides with that of Scheu & Falca [26]. Millipedes, for instance, were often found to be enriched in δ15N compared to root feeders, a finding that suggests that either weevils (Curculionidae) feed on roots that are somewhat depleted in δ15N compared to litter, or that millipede detritivores utilize some δ15N enriched protein food sources such as bacteria in their guts, or both [25]. Spiders were enriched in δ15N over carabid beetles in all cases, and below the falls on the Neekas this constituted a mean difference greater than a single trophic level. Evidence for omnivory is emerging in the carabid beetles [33,[39][40][41][42] and the observed discrepancy between spiders and carabid beetles is most likely a result of the purely predaceous versus omnivorous life histories of these groups. Spiders also demonstrated trophic enrichment in δ13C over carabid beetles at all sites. However, spiders were not consistently enriched over root feeders at each site and carabid beetles exhibited the lowest δ13C values. We conclude that, in general, carbon is a poor trophic level indicator [25]. Overall, this suggests that increased trophic and individual niche resolution in stable isotope studies will more likely come from detailed taxonomic separation rather than from guild analyses [26].
Implications
With the use of stable isotopes (δ 15 N and δ 13 C), spawning salmon have been shown to provide substantial nutrient inputs to limnetic food webs [6,[8][9][10], with implications for stream primary productivity and subsequent juvenile salmonid survivorship. Young salmon may in fact derive a large proportion of their required nitrogen and carbon from the death and decomposition of their parents, through food web utilization of salmon nutrients by algae and aquatic invertebrates.
Other than inputs to terrestrial vegetation, salmon nutrient effects in forest food webs are poorly known. Input of salmon-derived nitrogen contributes to total available N in the soil and thereby increases forest primary productivity and vegetation and litter quality [11][12][13]. Nutrient subsidies (other than salmon) to terrestrial invertebrate communities can result in shifts in invertebrate community structure and abundance as a consequence of bottom-up ecosystem effects [27,43,44]. Soils in coniferous forests of low nutrient status are typically dominated by fungi as the primary decomposers of organic material, and thick humus layers quickly accumulate due to slow rates of nutrient turnover [45]. In nutrient-rich conditions, fungi are replaced by bacteria and invertebrates as the dominant decomposers, resulting in higher net rates of nitrogen mineralization and total available nitrogen [44,45]. Shifts in invertebrate community structure and abundance due to a nutrient subsidy may have further implications for higher invertebrate and vertebrate consumers such as predaceous beetles, spiders, hymenopteran parasitoids, small mammals, amphibians and passerines. For example, in another form of marine subsidy, spider densities have been reported to be 4-5 times higher on islands with marine bird colonies than those without [46]. Furthermore, avian populations in boreal forests have been observed to respond to experimental nitrogen fertilization [47], a pattern that also may well be true in the case of nutrient inputs to forest communities along salmon streams [48]. Shifts in litter-based invertebrate community structure and abundance could have particular benefits for ground foraging birds such as the resident and migratory sparrows, thrushes and wrens. The widespread enrichment in salmon-derived nitrogen among multiple trophic levels also hints at an ecosystem level effect that has further implications for shrub and canopy level invertebrate communities and their various vertebrate consumers [1,5,48].
Conclusions
The increasing evidence for the coast-wide decline in salmon abundance on the Pacific coast of North America [49] may have substantially more ecological implications to terrestrial forest food webs than previously recognized [5]. We present evidence for major uptake of salmon-derived nitrogen into a terrestrial invertebrate food web, with a sharp reduction in uptake across a waterfall barrier to salmon migration. These results supplement the conclusions of a diversity of recent contributions that have focused on the ecological consequences of the decline of salmon on the west coast of North America [1,2,[5][6][7][8][9][10][11][12][13]48].
Study site
Both the Clatse and Neekas watersheds are dominated by high-density returns of pink (Oncorhynchus gorbuscha) and chum (O. keta) salmon, with minor runs of coho (O. kisutch) and the occasional sockeye (O. nerka). In the last ten years, pink and chum salmon returns on the Clatse River have averaged 17000 and 5000 individuals respectively. Chum salmon constitute the majority of spawning biomass on the Neekas (mean = 30000). Mean pink salmon returns on the Neekas River vary from an average of 33000 on even years to an average of 2700 on odd years (Department of Fisheries and Oceans Escapement data: 1990-1999). Suitable spawning habitat extends for 2.1 km on the Neekas River, roughly twice that of the Clatse (1 km), and both rivers are interrupted by waterfalls that act as a barrier to salmon migration [28].
Invertebrate samples
In August of 2000, terrestrial macro-invertebrates were collected in each watershed through passive pitfall trapping and hand collection from the soil and coarse woody debris. Invertebrate sampling occurred above and below the waterfall barrier and up to 100 meters from the stream. On the Clatse River, main invertebrate sampling occurred from 200 to 800 meters upstream from the mouth, and again above the falls at 1200 and 1600 meters. The majority of invertebrate trapping on the Neekas occurred at 1 km, and again at 2 km, just below the falls. Control samples from the Neekas were collected just above the falls, from 2250 to 2400 meters upstream from the mouth.
Pitfall arrays were arranged in a three-way branching fashion. Each array included a central 10 cm diameter pitfall connected via three 24-inch by 6-inch aluminium drift fences (separated by 120°) to a perimeter pitfall at the end of each fence [7]. Pitfall arrays were cleared four to five days after initial set-up, and, to prevent rotting of invertebrate tissue, 70% ethanol was used as a field preservative within each pitfall cup. Hand collection of invertebrates occurred opportunistically as individuals were discovered in the riparian area. All specimens were stored in 70% ethanol prior to identification and isotopic analysis.
Stable Isotope Analysis
Whole invertebrate specimens were dried at 60°C for at least 48 hours and ground into a fine powder with a Wig-L-Bug grinder (Crescent Dental Co., Chicago, Ill.). Approximately 1 mg dry weight per ground specimen was then sub-sampled for continuous-flow isotope ratio mass spectrometry (CF-IRMS) analysis of nitrogen and carbon. Mass spectrometry analysis of δ15N and δ13C was conducted at the stable isotope facility, University of Saskatchewan, Saskatoon, Canada, using a Europa Scientific ANCA NT gas/solid/liquid preparation module coupled to a Europa Scientific Tracer/20 mass spectrometer.
Isotopic contents are expressed in 'δ' (delta) notation, representing the difference between the isotopic content of the sample and known isotopic standards (atmospheric N2 for nitrogen and PeeDee Belemnite (PDB) limestone for carbon). This is expressed in parts per thousand (‰) according to formula (1):

1) δ15N or δ13C (‰) = (Rsample / Rstandard − 1) × 1000

where R is the ratio of the heavy isotope (15N or 13C) to the light isotope (14N or 12C).
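For readers who want to reproduce this conversion, the short sketch below implements formula (1). It is only an illustration: the standard ratios are commonly cited literature values for atmospheric N2 and PDB (not values reported in this study), and the sample ratio is hypothetical.

# A minimal sketch of formula (1): converting a measured isotope ratio to delta notation.
R_STD_N15 = 0.0036765   # approximate 15N/14N of atmospheric N2 (literature value)
R_STD_C13 = 0.0112372   # approximate 13C/12C of PDB limestone (literature value)

def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """delta (per mil) = (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

if __name__ == "__main__":
    # Hypothetical measured 15N/14N ratio for an invertebrate sample.
    print(round(delta_per_mil(0.0037, R_STD_N15), 2))   # delta15N in per mil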
Curculionid beetles of the genus Steremnius feed as larvae and adults on the roots and slash of conifers and are assigned the lowest trophic rank, as there is no current evidence that these beetles utilize animal protein [18,51]. Millipedes are detritivores, feeding primarily on dead plant material and fragments of organic matter. This potentially includes small amounts of animal protein from faeces, dead animals or microorganisms that occur on the litter material [25,52]. The Parajulidae are indigenous to the forest ecosystems of the Pacific Northwest but are poorly known at the species level [55]. A priori, we assume here that the parajulid millipedes include minor contributions of organic matter derived from animal protein in diet. Carabid beetles of the genera Pterostichus, Scaphinotus and Zacotus are generalist forest floor predators on a variety of soil invertebrates including snails and slugs (Gastropoda), millipedes (Diplopoda), isopods (Isopoda), worms (Oligochaeta) and springtails (Collembola) [33,40,56]. However, documented observations of carabids feeding on plant material including seeds and fruit suggest that these beetles may be omnivorous rather than purely predaceous [39,41,42]. Arachnids of the genera Cybaeus and Antrodiaetus are known to be funnel-web [54] and trap-door spiders [53] respectively, feeding exclusively on animals including various insects, myriapods, isopods, other spiders and even small vertebrates [57].
Independent sample t-tests (two-tailed) were used to test for differences between invertebrate groups collected above and below the falls for δ15N and δ13C on each watershed (equal variances not assumed in all tests). All invertebrates collected within 100 meters of the stream were pooled for the analysis, and those collected less than 200 meters from the estuary were removed, since these were assumed to possess ambiguous isotopic signatures where marine incursions other than salmon input may particularly obscure soil N pools [27]. F-ratio tests (two-tailed) were conducted for δ15N between invertebrate groups collected above versus below the falls under the null hypothesis of equal variances. We also performed separate nested ANOVAs on δ15N and δ13C to examine the effects of trophic group, distance from the stream, above versus below falls, and watershed [model: watershed, watershed(falls), watershed(falls(distance)), watershed(falls(distance(invertebrate group)))]. However, assumptions of normality and homoscedasticity were not met and, as such, we place more emphasis on the t-test comparisons. Tukey HSD multiple comparison post hoc tests were performed for δ15N and δ13C within sites under the null hypothesis that all invertebrate groups were isotopically indistinct. Since inorganic carbon in the form of CaCO3, present in the exoskeleton of our millipedes [52], is enriched in δ13C relative to organic forms [25], we removed millipedes from the post hoc analysis of δ13C among feeding groups. Pearson's correlation coefficients were used to examine the relationships between δ15N and δ13C within trophic groups at different sites to investigate individual niche variability.
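As a rough illustration of these univariate comparisons, the sketch below uses SciPy to run a Welch t-test, a two-tailed variance-ratio (F) test built from the F distribution, and a Pearson correlation. It is not the analysis code used in the study, and all δ15N and δ13C values are hypothetical.

# A minimal sketch (not the authors' code) of the comparisons described above.
import numpy as np
from scipy import stats

below = np.array([8.1, 9.4, 7.8, 10.2, 9.0, 8.7])   # delta15N below the falls (hypothetical)
above = np.array([3.2, 2.9, 3.8, 3.1])               # delta15N above the falls (hypothetical)

# Two-tailed t-test without assuming equal variances (Welch's test).
t, p_t = stats.ttest_ind(below, above, equal_var=False)

# Two-tailed F-ratio test for equality of variances.
f = np.var(below, ddof=1) / np.var(above, ddof=1)
dfn, dfd = below.size - 1, above.size - 1
p_one = stats.f.sf(f, dfn, dfd) if f > 1 else stats.f.cdf(f, dfn, dfd)
p_f = min(1.0, 2 * p_one)

# Pearson correlation between delta15N and delta13C within one group (niche variability).
d13c = np.array([-25.1, -24.3, -25.6, -23.9, -24.8, -25.0])   # hypothetical
r, p_r = stats.pearsonr(below, d13c)

print(f"t = {t:.2f} (p = {p_t:.3f}); F = {f:.2f} (p = {p_f:.3f}); r = {r:.2f} (p = {p_r:.3f})")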
Estimating %MDN
δ15N values in animals are influenced by the δ15N value of the principal N sources and by fractionation during nitrogen transformations within ecosystems. Principal N sources to riparian ecosystems include atmospheric N2, with a δ15N value of 0‰ [36], and salmon N, with a δ15N value of approximately 11.2 ± 1.0‰ [21]. Variations in δ15N with trophic level appear to be relatively predictable, such that biota are enriched by 3.4 ± 1.1‰ more than their food [37], a pattern that seems to hold true for soil macro-invertebrates [25,26]. Estimates of % marine-derived nitrogen (MDN) in our litter-based macro-invertebrate food chain were obtained based on a combination of the limnetic trophic model proposed by Kline et al. [8] and the terrestrial vegetation model utilized by Helfield and Naiman [13], and are expressed mathematically by (2):
2) %MDN = [(Obs − TEM) / (MEM_TL − TEM)] × 100%
where Obs is the observed δ15N value of a particular taxon below the waterfall barrier to salmon, TEM is the terrestrial end-member (the isotopic value obtained for the same taxon above the falls in the absence of salmon input), MEM is the marine end-member (the δ15N value of salmon, 11.2‰ [21], which should equal maximum vegetation δ15N values), and TL refers to the trophic level correction factor that applies to the marine end-member in the model. Since variability in utilization of MDN by the various invertebrate groups below the falls might obscure relative trophic level, we estimated the trophic level correction applied to the marine end-member from invertebrate δ15N values above the falls relative to vegetation, where MEM equals salmon tissue [21] and VEGabv equals mean vegetation δ15N values above the falls. We also calculated %MDN for vegetation below the falls (Mathewson & Reimchen unpublished data: Clatse mean δ15N = +1.43‰; Neekas mean δ15N = +3.44‰) as a benchmark comparison to our invertebrate estimates. We were not able to assess the extent of fractionation occurring in the situation of 100% MDN at the level of primary producers (see assumptions in [13]). As such, we calculated two %MDN estimates based on no fractionation (MEM = 11.2‰) and maximum fractionation of 4‰ (MEM = 7.2‰), which is a typical maximum level of fractionation in vegetation from atmospheric N2 observed in the Clatse-Neekas non-salmon habitats (Mathewson & Reimchen unpublished data). This model assumes that invertebrate trophic level does not differ above and below the falls and that the marine end-member for vegetation δ15N values is represented by salmon tissue.
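A minimal numeric sketch of equation (2) follows, assuming the two marine end-member values stated above (11.2‰ with no fractionation, 7.2‰ with 4‰ fractionation). The observed and terrestrial end-member values are hypothetical, and the trophic-level correction of the marine end-member is omitted for simplicity.

# A minimal sketch of equation (2); not the values reported for any taxon in this study.
def percent_mdn(obs: float, tem: float, mem_tl: float) -> float:
    """%MDN = (Obs - TEM) / (MEM_TL - TEM) * 100."""
    return (obs - tem) / (mem_tl - tem) * 100.0

obs_below = 9.0    # delta15N of a taxon below the falls (hypothetical)
tem_above = 3.0    # same taxon above the falls (terrestrial end-member, hypothetical)

# Note: the trophic-level correction of the marine end-member (MEM_TL) is omitted here.
for mem in (11.2, 7.2):   # no fractionation vs. 4 per mil fractionation of the source
    print(f"MEM = {mem}: %MDN = {percent_mdn(obs_below, tem_above, mem):.1f}")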
Author's contributions
MDH conducted the field research, sorted and processed the invertebrate samples, performed the statistical analyses, and drafted the manuscript. TER conceived of the study, participated in its design and coordination, and contributed to the manuscript preparation. All authors read and approved the final draft. | 2014-10-01T00:00:00.000Z | 2002-03-19T00:00:00.000 | {
"year": 2002,
"sha1": "5ac90a2ecc2b5e55ef0518341f0957c7f4a34aea",
"oa_license": "CCBY",
"oa_url": "https://bmcecol.biomedcentral.com/track/pdf/10.1186/1472-6785-2-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5886fab2b4f7f0d89ea698423a5f6d77584ed591",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
258727971 | pes2o/s2orc | v3-fos-license | LC-MS Based Metabolomic Profiling of Largehead Hairtail ( Trichiurus japonicus ) Ovary Reveals Metabolic Signatures of Ovarian Developmental Process (II–IV)
Trichiurus japonicus is an economically important fish that ranks 11th in global marine fish capture production. However, the reproductive characteristics of this fish have undergone notable changes in recent decades, potentially affecting the quality of offspring and sustainable utilization. To improve our understanding of the physiological regulation of maturation in T. japonicus, untargeted liquid chromatography mass spectrometry was utilized to identify the small molecules that characterize the comprehensive metabolic profiles of ovaries during ovary development from stage II to stage IV. According to the results of OPLS-DA, the ovarian metabolite profiles of the three developmental stages were separated. The concentrations of 124 and 100 metabolites were significantly altered between stages II vs. III and III vs. IV, respectively. Lipids and lipid-like molecules accounted for the largest proportion of the altered metabolites, followed by amino acids, peptides, and analogues. The pathways enriched in significantly altered metabolites differed slightly between stages II and III and stages III and IV. Steroid-related pathways were heavily affected during stages II to III, while significantly altered metabolites from stages III to IV were involved in oocyte-maturation-related pathways. Through metabonomics analysis, potentially important metabolic pathways and metabolites between different ovarian stages were detected, providing basic information for further investigation of maturation mechanisms in wild fish.
Introduction
The largehead hairtail (Trichiurus japonicus) is a member of the Trichiuridae family, within the order Scombriformes. It is a warm-temperature demersal species and is distributed in the tropical and temperate continental shelves and slopes worldwide [1]. According to the FAO report [2], worldwide hairtail production in 2020 was 1.144 million tons, ranking it 11th in global marine fish capture production. It is also one of China's most important economic fish, with its yield ranking first in China's marine fish capture production for many years [3]. T. japonicus is the dominant species among the hairtail yield in China [4]. However, due to fishing pressure and various environmental factors, hairtail resources have declined in recent decades. In 2019, hairtail capture was only 70% of what it was in 1999 [3]. Additionally, various reproductive characteristics have changed, including mature anal length and egg diameter. The East China Sea (ECS) is the primary spawning ground for T. japonicus in China; T. japonicus spawns almost year-round with multiple batches and two dominant spawning periods in the ECS [5][6][7]. It has been reported that T. japonicus follows a group-synchronous oocyte development pattern [8]. In the 1960s, the first sexually mature age of T. japonicus in the ECS was one year, and the minimum anal length of mature individuals was 200-210 mm. However, in the 1990s, the minimum anal length of mature individuals decreased to 140-150 mm. The 50% mature anal length (L50) of the female population decreased to 164.65 mm, and the L50 of the male population decreased to 171.65 mm. When the anal length reached about 210 mm, individuals were almost 100% mature. Not only did the anal length at maturation decrease, but the diameter of mature eggs also significantly decreased, from 1.525-1.825 mm in 1963-1964 to 0.9-1.5 mm in 1993-1994 [9,10]. Apart from T. japonicus, early maturation is also common in other major economic fish species in the ECS. This phenomenon is accompanied by a series of problems such as the miniaturization of parent fish, the reduction of egg diameter, and the decline of germplasm of larvae and juveniles, which seriously affects the sustainable utilization of fishery resources and food security [11,12]. Therefore, it is urgent to understand why gonadal development is accelerated in wild fish.
Numerous studies have investigated the possible factors, and their fluctuations, that affect the development of gonads in wild fish, including temperature [13][14][15], salinity [16,17], photoperiod [18,19], baits [20,21], and fishing [22]. In addition to external environmental factors, the timing and process of gonadal development are also regulated by the internal hypothalamic-pituitary-gonad axis, which involves the balance of various central neurotransmitters and hormones [23]. Gonadotropin-releasing hormone (GnRH) secreted by the hypothalamus is a key factor in initiating gonadal development [24,25]. GnRH promotes the pituitary to release gonadotropin (GTH), which stimulates the synthesis and secretion of steroid hormones in the gonad, such as estradiol and progesterone, ultimately promoting the development of gonads [26]. Additionally, numerous signaling pathways and metabolites related to the metabolism of hormones, amino acids, lipids, and energy play important roles in this process [27][28][29].
To better understand gonadal development, metabolomics has been introduced to the reproduction research of aquatic animals [27][28][29]. Metabolomics is a powerful tool that can comprehensively analyze endogenous metabolites in biological systems, providing a better understanding of their metabolic functional states [30]. This approach is characterized by high sensitivity, high throughput, and rapidity [31]. Untargeted liquid chromatography (LC)-mass spectrometry (MS)-based metabolomics has successfully investigated the metabolic differences between sexes and maturation states of aquatic animals, including blunt snout bream (Megalobrama amblycephala) [27], Chinese sturgeon (Acipenser sinensis) [28], and sea lamprey (Petromyzon marinus) [29].
To improve our understanding of the physiological regulation of fish maturation, untargeted LC-MS was used in the present study to identify the small molecules that characterize the comprehensive metabolic profiles of ovaries in T. japonicus during ovary development from stage II to stage IV. The results may provide valuable basic information regarding the reasons for the early maturity of this economically important fish.
Sample Collection and Preparation
As the spawning peak period of T. japonicus in the southern offshore area of Zhejiang, China, is from May to July [32], young-of-the-year T. japonicus were sampled during May 2021 in the Wentai fishing ground (27°00'~28°00' N, west of 125°00' E). The T. japonicus were frozen and transported back to the laboratory. Based on the growth curve of T. japonicus fitted by the Walford growth transformation method, which showed that the anal length of one-year-old fish was less than 190 mm [33], individuals with an anal length of less than 190 mm were chosen for the later analysis. Basic biological information, including anal length (AL), body weight, sex, gonad developmental stage, and gonad-somatic index (GSI [34]), was measured and calculated for each T. japonicus. The ovaries were immediately frozen and stored after weighing. The ovarian developmental stages of T. japonicus were determined based mainly on macroscopic examination with six development classes, defined as follows: I = immature, II = developing, III = maturing, IV = mature, V = ripe, and VI = spent [5,35].
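The GSI referenced here is commonly computed as gonad weight relative to body weight; the short sketch below shows that standard formulation with hypothetical weights, since the exact variant used in [34] is not reproduced in this text.

# A minimal sketch of the commonly used gonado-somatic index; weights are hypothetical.
def gsi(gonad_weight_g: float, body_weight_g: float) -> float:
    """GSI (%) = gonad weight / body weight * 100."""
    return gonad_weight_g / body_weight_g * 100.0

print(round(gsi(gonad_weight_g=3.2, body_weight_g=85.0), 2))  # -> 3.76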
Six fish ovaries were randomly selected at each developmental stage (II, III, and IV) for LC-MS analysis. An amount of 50 mg of ovary tissue was accurately weighed for each sample for subsequent processing [36], which is described in the Supplementary Information. To ensure the stability of the analysis and the quality of the metabolomic data, pooled quality control (QC) samples were prepared by mixing portions of all the samples. The five QC samples were tested in the same way as the analytic samples.
LC-MS Analysis and Data Processing
Untargeted LC-MS was conducted with the UHPLC-Q Exactive system of Thermo Fisher Scientific. Detailed information on metabolome analysis conditions is presented in the Supplementary Materials. After the mass spectrometry detection was completed, the raw LC/MS data were preprocessed with Progenesis QI (Waters Corporation, Milford, CT, USA) software to generate a data matrix consisting of the retention time (RT), mass-to-charge ratio (m/z) values, and peak intensity. Metabolic features detected in at least 80% of the samples in any set of samples were retained. At the same time, variables with a relative standard deviation (RSD) > 30% in QC samples were removed, and log10 transformation was performed to obtain the final data matrix for subsequent analysis [37]. The normalized data were matched against available databases to identify metabolites, especially the HMDB (http://www.hmdb.ca/, accessed on 1 January 2021) and KEGG (http://www.kegg.com, accessed on 2 January 2021) public databases [31,38]. The chemical classification of metabolites in the HMDB can provide better insights into the biological significance of metabolites.
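A minimal pandas sketch of this filtering step is given below. It is not the authors' pipeline: the grouping of samples, the column names, and the handling of zero intensities before the log transform are all assumptions made only for illustration.

# A minimal sketch of the described feature filtering: keep features detected in at least
# 80% of samples in any group, drop features with RSD > 30% in QC samples, then log10.
import numpy as np
import pandas as pd

def filter_features(intensity: pd.DataFrame, groups: dict, qc_cols: list) -> pd.DataFrame:
    # Detection rate per group: fraction of non-missing, positive intensities per feature.
    detected = intensity.notna() & (intensity > 0)
    keep_rate = pd.concat(
        [detected[cols].mean(axis=1) for cols in groups.values()], axis=1
    ).max(axis=1) >= 0.80

    # Relative standard deviation across QC injections.
    qc = intensity[qc_cols]
    rsd = qc.std(axis=1, ddof=1) / qc.mean(axis=1)
    keep_rsd = rsd <= 0.30

    kept = intensity.loc[keep_rate & keep_rsd]
    return np.log10(kept)   # assumes all retained intensities are > 0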
Statistics
The significance of differences in AL, body weight, and GSI among the three developmental stages was tested by one-way ANOVA with Duncan's multiple range tests [39] in SPSS (Version 21.0, SPSS Inc., Chicago, IL, USA). The data are presented as mean ± SEM, and p < 0.05 was considered significant [40].
The positive and negative mode data obtained in the LC-MS analysis were analyzed using the free online Majorbio Cloud Platform (https://cloud.majorbio.com, accessed on 1 January 2021; Shanghai Majorbio Bio-pharm Technology Co., Ltd.). Principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA) were used to determine global metabolic changes between stages II vs. III and III vs. IV [40]. Significantly different metabolites were selected based on the variable importance in projection (VIP) obtained from the OPLS-DA model and the p-value of Student's t-test; metabolites with VIP > 1 and p < 0.05 were considered significantly different [37]. To compare the variation of metabolites among different groups, a Venn plot and heat maps were generated. Differential metabolites between the two groups were summarized and mapped onto their biochemical pathways through metabolic enrichment and pathway analysis based on KEGG. Statistically significantly enriched pathways were identified using Fisher's exact test with a p-value of less than 0.05, using the Python package scipy.stats (https://docs.scipy.org/doc/scipy/, accessed on 1 January 2020) [36,37].
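The sketch below illustrates the two selection and enrichment steps just described. It assumes VIP scores have already been computed elsewhere (OPLS-DA itself is not re-implemented), and the metabolite and pathway identifiers are hypothetical placeholders rather than the study's actual feature tables.

# A minimal sketch of differential-metabolite selection and Fisher's exact pathway enrichment.
import numpy as np
from scipy import stats

def significant_metabolites(x_a, x_b, vip, alpha=0.05, vip_cut=1.0):
    """Indices of metabolites with VIP > 1 and Student's t-test p < 0.05 between two stages.
    x_a, x_b: samples x metabolites arrays; vip: per-metabolite VIP scores from OPLS-DA."""
    _, p = stats.ttest_ind(x_a, x_b, axis=0)          # per-metabolite t-test
    return np.where((vip > vip_cut) & (p < alpha))[0]

def pathway_enrichment(hits, background, pathway_members):
    """Fisher's exact test for over-representation of hit metabolites in one pathway.
    All arguments are sets of metabolite identifiers; hits must be a subset of background."""
    in_path_hit = len(hits & pathway_members)
    in_path_only = len(pathway_members & background) - in_path_hit
    hit_only = len(hits) - in_path_hit
    neither = len(background) - in_path_hit - in_path_only - hit_only
    _, p = stats.fisher_exact([[in_path_hit, hit_only], [in_path_only, neither]],
                              alternative="greater")
    return p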
Basic Information about T. japonicus
As shown in Figure 1, the body weight and anal length of T. japonicus in stage II were significantly lower than those of T. japonicus in stages III and IV, but there were no significant differences in body weight and anal length between stages III and IV. The GSI differed clearly among stages, consistent with ovarian development.
Metabolite Profiles of the Ovaries at Different Developmental Stages
In total, 6042 variables (3395 peaks in ESI+ mode and 2647 peaks in ESI− mode) were identified in ovaries from stages II, III, and IV for subsequent analyses. Under positive mode, a total of 745, 721, and 595 annotated metabolites were identified in stages II, III, and IV, respectively. Under negative mode, a total of 756, 668, and 524 annotated metabolites were identified in stages II, III, and IV, respectively. The three stages shared 558 and 503 metabolites under positive and negative modes, respectively (Figure 2a,b). The PCA performed on all samples revealed that the QC samples were tightly clustered in the PCA score plots (Figure 3a,b), indicating that the system stability was adequate for this metabonomic study. PCA results showed that the first components accounted for 44.30% and 34.40% of the variance in positive and negative ion modes, respectively, but the discrimination of the three groups was not very distinct. Furthermore, OPLS-DA was performed to maximize the distinction between groups and obtain a global overview of the differences in metabolites between III vs. II and IV vs. III. Positive and negative data revealed clear separation and discrimination between the different developmental stages, indicating the significantly different metabolomic profiles of the three stages (Figure 3c-f). According to the results of OPLS-DA, a total of 124 potential biomarkers between III vs. II and 100 potential biomarkers between IV vs. III were finally screened out based on VIP > 1 and p < 0.05. Heat maps (Figure 4) showed clear differences in the ovarian metabolic profiles of the three developmental stages. Compared with stage II, 20 metabolites were upregulated and 104 metabolites were down-regulated in stage III, while 29 metabolites were upregulated and 71 metabolites were down-regulated in stage IV compared with stage III (Figure 4). The significantly changed metabolites were classified into different classes according to the HMDB. Lipids and lipid-like molecules accounted for the largest proportion of the total, among which glycerophospholipids, steroids and steroid derivatives, prenol lipids, and fatty acyls formed the majority of the metabolites. The second largest group comprised amino acids, peptides, and analogues (Figure 5).
Metabolic Pathways Analysis
The potentially important metabolic pathways during the period of rapid ovarian development were identified using the KEGG database. KEGG enrichment analysis indicated that significantly altered metabolites from stages II to III were enriched in the neurotrophin signaling pathway, adipocytokine signaling pathway, sphingolipid signaling pathway, tryptophan metabolism, steroid hormone biosynthesis, riboflavin metabolism, etc. (Figure 6a); the most heavily affected pathway (p < 0.05) was the neurotrophin signaling pathway. The significantly altered metabolites from stages III to IV were involved in growth hormone synthesis, oocyte meiosis, progesterone-mediated oocyte maturation, the neurotrophin signaling pathway, the MAPK signaling pathway, the GnRH signaling pathway, the adipocytokine signaling pathway, etc. (Figure 6b); the most heavily affected pathways (p < 0.05) were progesterone-mediated oocyte maturation, oocyte meiosis, the neurotrophin signaling pathway, and growth hormone synthesis. The pathways that played roles throughout ovarian development from stages II to IV were the sphingolipid signaling pathway, adipocytokine signaling pathway, neurotrophin signaling pathway, etc.
Discussion
Fish gonad development is a critical biological process that directly affects fish reproduction and population sustainability. The metabolic profiles of T. japonicus ovaries at different developmental stages (II-IV) reflect the physiological status of the ovaries before spawning. Metabonomics analysis was performed to detect the potentially important metabolic pathways and metabolites between different ovarian stages, providing fundamental information for further investigations into early maturation mechanisms in wild fish. In the present study, the ovarian metabolite profiles of the three stages were separated. The metabolites significantly altered at each stage were principally lipids and lipid-like molecules and amino acid metabolites.
The lipids and lipid-like molecules formed the majority of significantly altered metabolites during ovarian developmental stages from II to IV in T. japonicus (Figure 5). This result is consistent with other aquatic animal studies that also used metabonomics analysis to investigate ovarian development [30,38,41]. In teleosts, ovarian development is a process of nutrient storage, where all material deposited in an oocyte serves as nutrients for the embryo, primarily consisting of lipoproteins, phosphoproteins, and discrete lipids [42]. Studies of biochemical composition in aquatic animals showed that during the maturation process, the fat content in gonads decreased while the percentage of monounsaturated fatty acids (MUFAs) increased, and fats were actively transferred from muscles and incorporated into gonads [43]. Evidence also implied that part of the deposited fatty acids in the ovary might come from diet, as the effect of dietary fatty acids on reproductive performance and egg quality has been reported in several fishes, such as Siberian sturgeon (Acipenser baeri) [44], tongue sole (Cynoglossus semilaevis) [45], and Atlantic halibut (Hippoglossus hippoglossus) [46]. However, the specific metabolites changed during the ovarian developmental period are stage- and species-specific. For example, significantly changed metabolites during ovarian development from stages III to IV in Coilia nasus were related to the synthesis pathways of steroids, steroid hormones, and arachidonic acid [41]. Serum metabolites of female Chinese sturgeon also indicated that the metabolic pathways related to linoleic acid, α-linolenic acid, and ARAs significantly changed during ovarian development from stages II to IV [30]. In the present study, the dominant altered metabolites belonged to glycerophospholipids, steroids and steroid derivatives, prenol lipids, and fatty acyls. As whole ovarian samples were used, the ovarian stromal tissue, follicle cells, and non-vitellogenic follicles would contribute some portion of the lipid composition. The quantitative significance of this would vary depending on the degree of maturity of the dominant oocytes [8,47].
The second largest group of significantly altered metabolites comprised amino acids, peptides, and analogues (Figure 5). The study on the changes in serum metabolites during stages II to IV of female Chinese sturgeon also found significant changes in the metabolites related to a large number of different amino acid metabolic pathways [30]. Protein has an important effect on the reproductive performance of aquatic animals, promoting the growth and maturation of ovarian cells, affecting precocious puberty, and benefiting the gonad index, fertility, and larval production (for details, see the review by Shi [48]). Several amino acids, such as tryptophan (Trp), phenylalanine (Phe), lysine (Lys), leucine (Leu), valine (Val), alanine (Ala), serine (Ser), glutamic acid (Glu), arginine (Arg), and aspartic acid (Asp), have been predicted to be important in the reproductive performance of fish [29,38,[49][50][51][52]. In the present study, a large number of different incomplete breakdown products of protein catabolism were found in the ovaries of T. japonicus, including oligopeptides, dipeptides, etc. These metabolites consisted of Arg, Val, Glu, Trp, Phe, Lys, etc. (Figure 4). Some dipeptides are known to have physiological or cell-signaling effects, although most are simply short-lived intermediates on their way to specific amino acid degradation pathways following further proteolysis [53], which may reveal the important role of amino acids and their related metabolites in ovarian development. The exact functions of these amino acids in species-specific reproduction deserve further investigation. For example, arginine is one of the most versatile amino acids in animal cells and plays a variety of physiological roles, serving as a precursor for the synthesis not only of proteins but also of nitric oxide, polyamines, proline, glutamate, creatine, and agmatine [54]. It has been shown in mammals that arginine has a strong relationship with reproductive performance [55], but studies on the effects of arginine on gonad development and reproduction of aquatic animals are still limited; only in crustaceans was it reported that arginine could affect the synthesis and secretion of the vitellogenin-inhibiting hormone, and also could increase the expression of the vitellogenin receptor gene in the ovary and promote the deposition of vitellogenin in ootids [56].
Different hormonal signaling initiated by hormones as well as environmental factors plays crucial roles in the various reproductive processes by modulating different signaling pathways [57]. In the present study, a significant decrease was observed in the concentration of estrone during ovarian development from stages II to III (Figure 4). Estrone is a major estrogen and can be converted to estradiol, which has potent estrogenic properties. In teleosts, estradiol is a key hormone in oocyte growth [58], and in female trout, the process of exogenous vitellogenesis is primarily regulated by estradiol [59]. The cytochrome P450 family, including cholesterol side chain cleavage (P450scc), 17α-hydroxylase/lyase (P450c17), and aromatase (P450arom), is expressed in many tissues, including the ovary, and comprises the key enzymes in the synthesis of estradiol [60][61][62]. Studies on channel catfish (Ictalurus punctatus) have shown that the transcript abundance for P450c17, P450scc, and P450arom was increased during early vitellogenic growth of the oocytes, and decreased precipitously with the completion of vitellogenesis [60]. In the present study, the cytochrome P450 metabolic pathway was affected during ovarian development from stages III to IV (Figure 6), which is consistent with the observation of fish oocyte growth by transmission electron microscopy [63].
In teleosts, cyclic adenosine monophosphate (cAMP) is considered an important second messenger for estradiol [64]. However, in the present study, the concentration of cAMP decreased significantly from stages III to IV (Figure 4). cAMP is a key regulator of oocyte maturation and plays paradoxical roles within the oocyte and cumulus cells to orchestrate oocyte meiotic arrest and resumption [65]. cAMP is acutely and transiently upregulated in the oocyte in response to the luteinizing hormone (LH) surge [66], initiating signaling events that promote oocyte meiotic resumption. cAMP elevation alters adenine nucleotide metabolism, and cAMP is hydrolyzed to AMP by phosphodiesterases, which increases the AMP/ATP ratio [67]. As 5′-AMP-activated protein kinase (AMPK) is sensitive to the AMP-to-ATP ratio, its activation is triggered by an increasing AMP level. This, in turn, leads to the activation of several pathways involved in gonadal steroidogenesis, the proliferation and survival of gonadal cells, and the maturation of oocytes [68].
In addition to hormonal signaling, neuronal signaling has been shown to play a role in oocyte maturation in several aquatic animals [69]. The present study identified the involvement of the tryptophan-serotonin metabolic pathway in ovarian development in T. japonicus (Figure 6). Specifically, the concentration of 5-methoxytryptamine (5-MT), a nonselective serotonin (5-HT) receptor agonist [70,71], significantly increased from stages II to III (Figure 4). In the hypothalamus, the binding of 5-HT to its receptor stimulates the secretion of gonadotropin-releasing hormone (GnRH), follicle-stimulating hormone (FSH), and LH, which regulate the onset of puberty [72,73]. 5-HT has also been detected in the ovaries [74] and is associated with steroidogenesis, oocyte meiosis, oocyte maturation, and ovulation [75][76][77].
Additionally, the concentration of ceramide increased gradually during gonadal development from stages II to IV (Figure 4). Ceramides are a large family of lipid-signaling molecules that are associated with several biological processes, including cell growth, differentiation, and apoptosis [78,79]. Recent studies have shown that ceramides have prominent metabolic roles as transmitters for the central actions of leptin and ghrelin [80,81], which could regulate puberty onset. An increment in hypothalamic ceramide content could advance puberty [82]. Ceramide has also been detected in the ovary and is speculated to play roles in ovarian development [83]. Considering the precocious puberty observed in T. japonicus, further investigation is needed to explore the relationship between precocious puberty and the activation of ceramide-related metabolic pathways.
Conclusions
To better understand the physiological status of the ovary before spawning in T. japonicus, untargeted LC-MS was used to analyze the metabolic profiles of ovaries at different developmental stages (II-IV). The findings suggest that the ovarian metabolic profiles are maturation-dependent and reflect the special metabolic demands at each developmental stage. The pathways enriched in significantly altered metabolites showed that steroid-related pathways were heavily affected during stages II to III, while oocyte-maturation-related pathways played roles from stages III to IV. The results provide basic information at the metabolomics level for further investigation of maturation mechanisms in wild fish.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/fishes8050262/s1.
Institutional Review Board Statement: Ethical review and approval were waived for this study because the samples involved in this study were from fishing operations and did not involve animal protection requirements.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data used to generate the figures of our study are available via https://github.com/fengliuying/LC-MS-Data (accessed on 1 January 2021). Other datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Figure 2.
Figure 2. Venn diagram for the number of annotated metabolites in the ovaries of T. japonicus from developmental stages II, III, and IV using both positive (a) and negative (b) ion modes. II = developing, III = maturing, IV = mature.
Figure 3 .
Figure 3. Global metabolomics profile analysis. PCA score plots of ovaries of T. japonicus at different developmental stages based on the metabolomics data in positive ion mode (a) and negative ion mode (b). QC samples (purple triangles) clustered together tightly in both modes, indicating good QC repeatability and analysis system stability. OPLS-DA score plots of ovaries at different development stages based on the metabolomics data in positive ion mode: (c) II vs. III, R2 = 0.996,
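As a point of reference for how score plots of this kind are generated, here is a minimal sketch of PCA on a samples × metabolites intensity matrix; the matrix, group sizes, and scaling choice are placeholder assumptions and do not reproduce the study's processing.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Hedged sketch: PCA score plot from an LC-MS style peak table.
# X is a random placeholder (samples x metabolites); real data would be the peak intensities.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=5.0, sigma=1.0, size=(24, 500))
stages = ["II"] * 8 + ["III"] * 8 + ["IV"] * 8      # illustrative stage labels

Xs = np.log10(X)                                    # log transform
Xs = (Xs - Xs.mean(axis=0)) / Xs.std(axis=0)        # unit-variance (auto) scaling

scores = PCA(n_components=2).fit_transform(Xs)
for stage in ("II", "III", "IV"):
    idx = [i for i, s in enumerate(stages) if s == stage]
    plt.scatter(scores[idx, 0], scores[idx, 1], label=f"stage {stage}")
plt.xlabel("PC1"); plt.ylabel("PC2"); plt.legend(); plt.show()
```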
Figure 4.
Figure 4. Heat map of the differential metabolites in the ovaries of T. japonicus. (a,b) Heat map visualization of differential metabolites in stage III relative to II, and IV relative to III, for combined positive and negative ion modes, respectively. II = developing, III = maturing, IV = mature. The color scale (right) illustrates the relative expression levels of metabolites across all samples: red represents an expression level above the mean, and blue represents an expression level below the mean.
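For context, the "above/below the mean" coloring described in the caption corresponds to row-wise standardization of each metabolite before plotting; the sketch below illustrates this with a random placeholder matrix rather than the published data.

```python
import numpy as np
import pandas as pd
import seaborn as sns

# Hedged sketch: clustered heat map of differential metabolites.
# Row-wise z-scoring makes red/blue indicate levels above/below each metabolite's mean.
rng = np.random.default_rng(1)
data = pd.DataFrame(
    rng.normal(size=(30, 12)),
    index=[f"metabolite_{i}" for i in range(30)],
    columns=[f"stageII_{j}" for j in range(6)] + [f"stageIII_{j}" for j in range(6)],
)
sns.clustermap(data, z_score=0, cmap="bwr", center=0)   # z_score=0 scales each row
```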
Figure 5 .
Figure 5. Classification analysis of significantly altered metabolites in the ovaries of T. japonicus for combined positive and negative ion modes between stage III vs. II (a) and stage IV vs. III (b), respectively. II = developing, III = maturing, IV = mature.
Figure 6 .
Figure 6. Pathway enrichment analysis of significantly altered metabolites in the ovaries of T. japonicus among different developmental stages according to the KEGG pathway: (a) III vs. II, and (b) IV vs. III. II = developing, III = maturing, IV = mature. The pathways are plotted according to p-values from pathway enrichment analysis and pathway impact values from pathway topology analysis. Color gradient and circle size indicate the significance of the pathway ranked by p-values (blue: higher p-values and red: lower p-values) and the number of altered metabolites in the pathway.
Author
Contributions: J.-H.C. and Y.J. conceived and designed the research. L.-Y.F., L.-P.Y., and R.-W.L. conducted experiments. S.-F.L. contributed reagents or analytical tools. L.-Y.F. and Y.J. analyzed data. L.-Y.F. wrote the manuscript. All authors have read and agreed to the published version of the manuscript. Funding: This study was funded by the Shanghai Sailing Program (18YF1429800), Central Public-Interest Scientific Institution Basal Research Fund, East China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences (L32201921860). The APC was funded by 18YF1429800. | 2023-05-17T15:06:52.973Z | 2023-05-14T00:00:00.000 | {
"year": 2023,
"sha1": "ba44fe534d5c19f8edebc3a985e7ab074656ebe8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2410-3888/8/5/262/pdf?version=1684058066",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b582b925d6163db583db52bdeb16d22b9bd00bbb",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": []
} |
54729774 | pes2o/s2orc | v3-fos-license | Expression Analysis of Cell Wall-Related Genes in Cannabis sativa: The "Ins and Outs" of Hemp Stem Tissue Development
Textile hemp (Cannabis sativa L.) is a multipurpose crop producing biomass with uses in e.g., the textile, biocomposite, and construction sectors. It was previously shown that the hypocotyl of hemp is useful to study the kinetics of secondary tissue development, where primary and secondary growths are temporally uncoupled. We here sought to demonstrate that the stem of adult hemp plants is an additional suitable model to study the heterogeneous lignification of the tissues and the mechanisms underlying secondary cell wall formation in bast fibres. A targeted quantitative PCR analysis carried out on a set of twenty genes involved in cell wall biosynthesis clearly showed differences in expression in the core and cortical tissues along four stem regions spanning from elongation to cell wall thickening. Genes involved in phenylpropanoid biosynthesis and secondary cell wall cellulose synthases were expressed at higher levels in core tissues at the bottom, while specific genes, notably a class III peroxidase and a gene partaking in lignan biosynthesis, were highly expressed in the cortex of elongating internodes. The two systems, the hypocotyl and the adult stem of textile hemp, are equally valid and complementary to address questions related to lignification and secondary cell wall deposition.
Introduction
Industrial hemp (Cannabis sativa L.), which has a tetrahydrocannabinol (THC) content <0.3%, is a fibre crop historically used for textiles [1], and is currently considered a renewable resource for the provision of fibres substituting synthetic ones in composites [2,3]. Besides the application-oriented aspects of this crop, hemp is an interesting model to study questions related to lignification and the development of cellulose-rich (i.e., gelatinous) secondary cell walls (SCWs). Its stem indeed comprises a hollow lignified core (also known as hurd/shiv) and a cortex harbouring cellulosic phloem-supporting bast fibres. The crop can therefore produce inner lignified and peripheral cellulosic fibres. This feature is very interesting, as it enables researchers to investigate two aspects, lignification and cellulose-rich cell wall formation, in the same model [4,5].
It was previously demonstrated that the hemp hypocotyl can provide valuable information relative to the transition from primary to secondary growth [6] and, more recently, lignification [7]. Additionally, the adult hemp stem was shown to provide detailed molecular data relative to the sequential developmental stages of bast fibres [8]. In particular, it was demonstrated that the snap point is a key region in the transition from elongation to bast fibre thickening, since the cell wall-related genes show major changes in expression [3,9].
With the goal of confirming the validity of the adult hemp stem as a model for cell wall studies, we have here undertaken a targeted qPCR analysis focused on twenty cell wall-related genes. We provide evidence for the differential expression of the genes in the inner/outer stem tissues along four stem regions and we propose a role in bast fibre development for some of the analysed genes.
Plant Material and Growth Conditions
C. sativa cv. Santhica 27 was used in this experiment. Plants were grown for six weeks in controlled conditions according to [8]. Eight different tissue samples, taken from four different heights, were collected. The heights were determined relative to the snap point, which marks the transition between elongation and thickening of primary bast fibres [10]: above the snap point (ASP), the internode containing the snap point (SP), below the snap point (BSP), and two internodes below the snap point (BBSP). The snap point was determined as previously described [9]. Each segment was separated into cortical tissues (harbouring the bast fibres, annotated as OUT) and core tissues (containing the xylem with its associated fibres and the pith, annotated as IN), according to [9,11]. To avoid excessive variation in gene expression, a segment of ca. 2 cm was collected in the middle of each internode. The samples were directly frozen in liquid nitrogen and stored at −80 °C until RNA extraction. Four biological replicates, each consisting of five plants, were used in this experiment.
RNA Extraction and RT-qPCR
Total RNA was extracted using the RNeasy Plant Mini Kit (Qiagen, Leusden, The Netherlands), with the on-column DNase treatment, following the instructions of the manufacturer with one modification: the 500 µL of RLC buffer were replaced by 450 µL of this buffer with 50 µL of 20% PEG (MW 20,000, Sigma, St. Louis, MO, USA) to maximise the extraction of total RNA [12]. The quantity and quality of the total RNAs were assessed spectrophotometrically and with a BioAnalyzer (all RINs > 7.5). Reverse transcription and RT-qPCR analysis were performed as described in [9]. The gene expression was normalised using eTIF3H and eTIF4, whose stability was determined with respect to previously reported reference genes (eTIF3E and Cyclophilin; [13]). A melt curve was performed at the end of each run to check the specificity of the PCR products. The characteristics of the primers are listed in Table S1. The target genes originate from several databases: our previously published hemp transcriptomes [6,8], a microarray-based experiment [14], or a BLAST search of orthologous Arabidopsis thaliana genes at the Medicinal Plant Genomics Resource [15]. The primers were validated via qPCR using a standard curve with a serial five-fold dilution of cDNA (20, 4, 0.8, 0.16, 0.032, and 0.0064 ng/µL). The normalised expression values were calculated in qBasePLUS, and the hierarchical clustering of expression values was obtained with the software Cluster 3.0 [16].
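For reference, the dilution-series validation described above is usually summarized by an amplification efficiency derived from the slope of the Cq-versus-log(input) standard curve; the sketch below uses invented Cq values purely to illustrate the calculation.

```python
import numpy as np

# Hedged sketch: primer efficiency from a five-fold serial dilution standard curve.
# The Cq values are illustrative placeholders, not measurements from this study.
conc = np.array([20, 4, 0.8, 0.16, 0.032, 0.0064])      # cDNA input (ng/uL), as in the dilution series
cq   = np.array([18.1, 20.5, 22.9, 25.2, 27.6, 30.0])   # hypothetical quantification cycles

slope, intercept = np.polyfit(np.log10(conc), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0                  # 1.0 corresponds to perfect doubling per cycle
r2 = np.corrcoef(np.log10(conc), cq)[0, 1] ** 2
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}, R^2 = {r2:.3f}")
```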
Results
The hierarchical clustering of the expression values shows four main patterns (Figure 1). In group A, the genes associated with primary cell wall cellulose deposition are found (CesA1A, CesA3, CesA6A and CesA6B). These genes are ubiquitously expressed within the targeted tissues.
In group B, the orthologous genes known to be involved in SCW biosynthesis are found: the master transcriptional regulator of SCW deposition, NAC secondary wall thickening promoting factor 1 (NST1) [17], the three cellulose synthases associated with SCW biogenesis, CesA4, CesA7, and CesA8 [18], as well as the class III peroxidase orthologous to Arabidopsis PRX52 [19]. These genes are more expressed at the SP and in the bottom internodes and are slightly upregulated in the inner tissues as compared to the outer tissues (Table S1). β-galactosidase 2 (BGAL2) is also found in this group, together with the gene Walls are thin 1 (WAT1). The expression pattern of WAT1 is somewhat different from the other genes of this group, as it is slightly more expressed in the outer tissues from the SP downwards.
Group C gathers some major genes involved in lignification: Phenylalanine ammonia lyase (PAL), Cinnamyl alcohol dehydrogenase 4 (CAD4) and 4-hydroxycinnamoyl-CoA ligase 1 (4CL1) are part of the monolignol pathway, Methionine synthase 1 (MET1) and S-adenosylmethionine synthetase (SAM) are involved in monolignol methylation [20], and Laccase 4 (LAC4) is one of the enzymes partaking in lignin polymerisation [21]. These genes are in general more expressed in the core than in the cortical tissues.
Finally, the similar expression patterns of CesA1B, Pinoresinol lariciresinol reductase (PLR) and PRX72 form group D. These genes are more expressed above the SP. By contrast with the three other groups, PLR and PRX72 are upregulated in the outer tissues (Table S1), which may point to an important role in bast fibre development.
The expression of CesA1B strongly drops in the internodes below the snap point.
Discussion
In order to explain the molecular regulation leading to the different cell wall composition of core and cortical tissues, a targeted gene expression analysis was performed (Figure 1). It shows that the higher lignin content reported in xylem tissues (15% vs. 4% in the bast fibres [14]) is correlated with an upregulation of the genes of the phenylpropanoid/monolignol pathway, in accordance with previous results [14]. Those genes are found in clusters C (PAL, 4CL1, CAD4, LAC4, MET1, and SAM) and B (PRX52).
The CesAs associated with the cellulose synthase complex of the SCW (CesA4, CesA7 and CesA8; [18]) are grouped in cluster B. They are slightly more expressed in the inner tissues and only weakly in the elongating stem (Figure 1). In the elongating internode (ASP), the deposition of SCW is restricted to the metaxylem and protoxylem [11]. In the secondary xylem, the SCW is deposited in a reticulated or pitted pattern, characterised by a massive deposition of cellulose [22]. Fibres and tracheary elements of the xylem have a xylan-type SCW (i.e., organised in S1, S2, and S3 sublayers), while bast fibres have a gelatinous-type SCW (S1, S2, and G-layer) [23]. Generally, there are, if any, only slight differences in the CesA expression patterns between xylan-type and gelatinous-type SCWs [24]. For instance, CesA4 and CesA7 are upregulated in the xylan-type bast fibres of jute [25], as also observed in hemp in our results (Figure 1). However, a recent study performed on flax has highlighted a higher expression of both primary and SCW-related CesAs in phloem fibres depositing their G-layer [26]. In addition, a strong bast fibre phenotype (reduced number and irregular cell shape associated with altered cell wall composition) has been observed in flax plants with virus-induced gene silencing of CesA genes (CesA1 or CesA6) usually acting in primary cell wall biogenesis [27]. The data presented in Figure 1 do not allow us to confirm nor refute this hypothesis; however, it is noteworthy that several CesA isoforms may be missing from our analysis.
The CesA genes analysed in this article were obtained by mining two resources: the Medicinal Plant Genomics Resource database (MPGR) [15] and our in-house hemp transcriptome assembly (originating from [6,8]). In this assembly, the contigs annotated as CesAs or cellulose synthase-like (Csl) were individually checked and used for a BLAST analysis against the hemp genome deposited in MPGR. Using this method, eight different CesA genes were retrieved. In the flax genome, between fifteen and sixteen predicted CesAs were found [27,28]. We may thus anticipate additional CesA isoforms in hemp, whose identification relies on a robust annotation of the genomic resources available so far [29]. In agreement with the data from [27,28], CesA4, CesA7 and CesA8 were more expressed in tissues undergoing SCW formation, and may thus be considered functional orthologues of AtCesA4, AtCesA7 and AtCesA8.
The master transcription factor of SCW deposition, NST1, shows a trend similar to the CesAs. As it is highly expressed both in inner and outer tissues undergoing SCW formation, this transcription factor may be involved in the development of xylem and bast fibres. This is in contrast with the data obtained in flax [30], where NST1 is less expressed in thickening bast fibres (which in this study correspond to the BSP and BBSP samples), as compared to the top region above the SP (corresponding here to the ASP sample). The presence of secondary bast fibres originating from high cambial activity in hemp may explain this difference. Indeed, several transcription factors from the NST family (such as PtrWND1B) are suggested to contribute to the formation of bast fibres in poplar, based on their expression in this tissue [31]. We may thus invoke two scenarios, which are not mutually exclusive, to explain this particular gene expression. In the first one, NST1 is linked to the differentiation of secondary bast fibres, while in the second its expression leads to the regulation of the genes involved in the formation of the SCW (cellulose, xylan, and lignin biosyntheses; [32]). Assuming that hemp bast fibres are hypolignified, a molecular mechanism specifically downregulating the biosynthesis of monolignols and/or lignin should be present in hemp bast fibres. This control may take place at the transcriptional level (through negative regulation of the expression of these genes) and/or at the post-transcriptional level. In flax bast fibres (which are also hypolignified), this regulation may be (partially) achieved through the degradation of transcripts of laccases involved in lignin polymerisation by microRNA397 [33].
The regulation of the biosynthesis of the non-cellulosic polysaccharides in gelatinous fibres, such as rhamnogalacturonan-I (RG-I), is still not well understood, but several NAC transcription factors are upregulated in poplar tension wood developing gelatinous fibres [34]. We may speculate that hemp NST1 also regulates the biosynthesis of such polysaccharides; however, this remains to be confirmed.
Auxin homeostasis is a central element for SCW formation in fibres [35,36]. The tonoplast-localised auxin efflux protein WAT1 plays a key role in this respect. The Arabidopsis mutant line wat1-1 accumulates indole acetic acid (IAA), the main form of bioactive auxin, in the tonoplast, preventing its binding to nuclear targets (auxin-regulated genes), or endoplasmic reticulum-based auxin receptors [36]. This mutation has important transcriptomic consequences: NST1 is significantly downregulated, leading to lower expression of CesA4, CesA7, CesA8 and most of the genes of the lignin biosynthetic pathway [35]. The expression profile of the hemp ortholog of WAT1 points to an important role of auxin in the development and maturation of xylem and bast fibres. It suggests that the expression of NST1 may be driven by the pool of auxin present in the nucleus in a WAT1-dependent manner. Similarly, it was suggested that the auxin-induced expression of NST genes for the promotion of SCW formation in Arabidopsis fibres is mediated by the gene REVOLUTA [32], whose null mutant displays a significantly decreased expression of two putative auxin efflux carriers, PIN3 and PIN4 [37]. The expression profile of WAT1 in the present study is similar to the observations previously made in two other biological systems, namely, the developing hemp hypocotyl [6] and the developing hemp bast fibres [8]. This gene was highly expressed in hypocotyls undergoing secondary growth and cell wall thickening, as well as in bast fibres depositing their gelatinous layer. All these data suggest that WAT1 and auxin play a significant role in SCW deposition in hemp bast fibres.
The Arabidopsis BGAL2 specifically hydrolyses β-(1-3) and β-(1-4) linkages in galacto-oligosaccharides and the β-(1-4) linkage in lupin galactan [38]. These two linkages are found in RG-I. This complex pectin is present, with different structures, in elongating tissues, in dividing regions such as the cambium, and in the pectic matrix enrobing cellulose in flax and hemp bast fibres [39]. The expression profile of BGAL2 in hemp suggests a role in these events. According to previous microscopic observations [8], secondary growth occurs from the SP downwards (i.e., in the samples SP, BSP, and BBSP). Later in development, secondary tissues originating from the cambium undergo intrusive growth [24,40]. Finally, the pectic matrix of the gelatinous layer of the bast fibres is enzymatically modified [41]. These three distinct processes require extensive modifications of the extracellular matrix, among them the BGAL2-driven RG-I degradation. The role of a specific BGAL in flax bast fibre maturation was already demonstrated [42].
The expression profiles of CesA1B, PLR, and PRX72 are in sharp contrast with those of the genes from clusters B and C. PLR and PRX72 are more expressed in the elongating internode (ASP) and in the outer tissues. PLR is an entry enzyme for the biosynthesis of lignans. Lignans are formed by enantioselective coupling of two monolignol units [43]. This family of molecules is involved in plant growth [44], lignin distribution during SCW biosynthesis [45], and redox homeostasis during lignification [46]. From the data reported here, it is possible to propose that PLR regulates stem elongation via the biosynthesis of specific lignans [47]. The expression of PLR was also higher in elongating hypocotyls (6 and 9 days after sowing; [7]). A functional analysis of this gene, as well as a detailed chemical characterisation of the lignans present in elongating and non-elongating tissues, will validate this hypothesis. The expression of PLR and PRX72 is higher in the cortical tissue, and may thus be important for the development of bast fibres. The Arabidopsis ortholog of PRX72 is involved in lignin biosynthesis [48]. A mutant defective in AtPRX72 shows thinner SCWs only in interfascicular fibres and a lower lignin S/G ratio. Based on its expression profile in hemp (Figure 1), a role in bast fibre lignification is, however, unlikely. Indeed, bast fibres lignify mostly after they reach their final size, and S-lignin is deposited at the latest stage of lignification [48].
CesA1B is strongly expressed in the internode ASP, both in inner and outer tissues, consistent with its role in the deposition of cellulose in the primary cell wall. In this region of the stem, the bast and xylem fibres grow intrusively, while the cells of the other tissues may eventually end their symplastic elongation [49]. The upregulation of CesA1B in the ASP region suggests a role in intrusive growth. The expression pattern of CesA1B shows a clustering that does not coincide with that of the other primary CesAs; perhaps its function is different and associated with the early stages of xylem cell development. It would be interesting to determine whether this gene is specifically induced upon gravistimulation in the xylem cells of the stem.
Conclusions
In this article, a gene expression analysis was performed, aimed at analysing key actors involved in the biosynthesis of the cell wall in hemp stem tissues. The differential cell wall composition of inner and outer stem tissues is at least partially regulated at the gene expression level, especially for lignin. The differential expression of genes controlling the cell wall composition of several types of tissues is a promising result for new research lines.
Figure 1 .
Figure 1. Gene expression analysis targeting processes related to cell wall deposition. Heat map hierarchical clustering of the expression profiles of twenty genes at four stem regions, in inner (-IN) and outer (-OUT) tissues. For each group, the Pearson correlation coefficient is provided. Abbreviations are as in the text.
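As an illustration of the clustering behind a figure of this kind, the sketch below groups gene expression profiles with a 1 − Pearson correlation distance; the expression matrix is randomly generated and the tissue labels merely mirror the sampling scheme, so nothing here reproduces the actual values.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, dendrogram

# Hedged sketch: hierarchical clustering of normalised expression profiles,
# using 1 - Pearson correlation as the distance between genes.
rng = np.random.default_rng(2)
genes = [f"gene_{i}" for i in range(20)]
samples = [f"{h}-{t}" for h in ("ASP", "SP", "BSP", "BBSP") for t in ("IN", "OUT")]
expr = pd.DataFrame(rng.normal(size=(len(genes), len(samples))), index=genes, columns=samples)

corr_dist = 1.0 - np.corrcoef(expr.values)                 # gene-by-gene distance matrix
condensed = corr_dist[np.triu_indices(len(genes), k=1)]    # condensed form for linkage()
tree = linkage(condensed, method="average")
dendrogram(tree, labels=genes)                             # groups genes with similar profiles
```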
| 2018-12-14T14:04:58.629Z | 2018-05-01T00:00:00.000 | {
"year": 2018,
"sha1": "a291e5839c88f891f36d65b82708b4b1e1d5f597",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6439/6/2/27/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a291e5839c88f891f36d65b82708b4b1e1d5f597",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Engineering"
]
} |
73617848 | pes2o/s2orc | v3-fos-license | A road to hydrogenating graphene by a reactive ion etching plasma
We report the hydrogenation of single and bilayer graphene by an argon-hydrogen plasma produced in a reactive ion etching (RIE) system. Electronic transport measurements in combination with Raman spectroscopy are used to link the electronic mean free path to the optically extracted defect concentration. We emphasize the role of the self-bias of the graphene in suppressing the erosion of the flakes during plasma processing. We show that under the chosen plasma conditions the process does not introduce considerable damage to the graphene sheet and that hydrogenation occurs primarily due to the hydrogen ions from the plasma and not due to fragmentation of water adsorbates on the graphene surface by highly accelerated plasma electrons. For this reason the hydrogenation level can be precisely controlled. The hydrogenation process presented here can be easily implemented in any RIE plasma system.
I. INTRODUCTION
Hydrogenation of carbon materials, e.g., graphite, carbon nanotubes or carbon foams, has triggered large technological and scientific interest, with its main focus on hydrogen physisorption in hydrogen storage systems [1]. However, for electronic applications the chemisorption of hydrogen is even more interesting, as it allows for tuning of electronic properties in carbon conjugated systems. An excellent candidate for such manipulation is graphene, a single layer of graphite, built only from sp2 carbons and demonstrating high carrier mobilities [2]. Similarly to single wall nanotubes [3], the small volume and large contact area of graphene make chemisorption of hydrogen an efficient way to modify its electronic properties [4,5]. Depending on the H coverage one can tune the transport properties of graphene from metallic to semiconducting, and ultimately to an insulating state for its fully hydrogenated derivative graphane [6]. Opening of a bandgap by hydrogenation in otherwise gapless graphene can also be an elegant way to fabricate a circuit consisting of a single material, graphene, with both metallic and semiconducting parts.
Apart from microelectronic applications, the influence of hydrogen on electronic transport in graphene has great scientific relevance as well.In particular to understand the role of localized defects as scattering centers limiting carrier mobility [7], the transition in charge transport from the Drude type (in pristine graphene) to the variable range hopping type (in strongly hydrogenated graphene) [6], or in predictions of magnetism originating from hydrogen defects [8,9].
Chemisorption of hydrogen on a graphene surface changes the carbon electronic orbitals from sp2 to sp3 hybridization and results in a localized state. The potential barrier for hydrogen adsorption to the surface of graphene is about 0.2 eV [4,10]. Part of this energy is consumed by the displacement of the carbon out of the graphene plane to obtain the tetragonal sp3 geometry. This adsorption barrier is lower in initially curved or protruding structures (at grain boundaries, lattice defects or on ripples), where structural deformation is already present [10]. For effective and controllable hydrogenation of graphene several techniques have been explored so far, including exposure to an atomic hydrogen source [11][12][13], electron beam (e-beam) exposure of the highly hydrated lithography resist HSQ [14] and e-beam exposure of a water adhesive layer on graphene [15,16]. Among these techniques, exposure to an argon-hydrogen plasma produced in a DC [6] or RF source [17] seems a promising alternative due to the high energy and reactivity of the incident hydrogen ions, which enables their chemisorption even on the flat surface of graphene. Hydrogenation by an Ar/H2 plasma can lead to a high and fast hydrogen uptake, it does not require special sample preparation, and it is compatible with microfabrication techniques.
Estimation of the H content in micromechanically cleaved graphene flakes after hydrogen treatment is very difficult. The standard methods known from graphite, like thermal programmed desorption (TPD) [18][19][20], are insensitive to the small amounts of hydrogen desorbed from a micro-sized flake. Estimation of H coverage from scanning tunneling microscopy (STM) topography images carries the limitation that STM probes the surface only locally, and the measurements are time consuming and difficult when graphene is deposited on an insulating substrate. An appealing alternative is Raman spectroscopy, which is a relatively easy, non-destructive, non-contacting and quick method to probe H coverage even in micrometer-sized samples and can be carried out at room temperature and atmospheric pressure. Chemisorption of H induces Raman bands which are normally symmetry forbidden in the graphene spectrum. The assignment of these bands to hydrogen adsorbates allows an indirect estimate of the H content [21].
In this work we demonstrate the hydrogenation of graphene by an RF plasma of an argon-hydrogen gas mixture using reactive ion etching (RIE). This technique has not been explored for graphene hydrogenation so far, despite the fact that RIE is widely used for electronic device microfabrication. We characterize the hydrogenation properties of the RF plasma and its reversibility under moderate thermal annealing by means of Raman spectroscopy. Further, we present electronic transport measurements in single layer (SLG) and bilayer graphene (BLG), which enable us to relate the structural defects to graphene transport properties. In a control experiment we compare the effect of the Ar/H2 plasma with that of a pure Ar plasma in two types of samples (bare flakes on an insulating substrate and graphene devices). The observed differences highlight the role of the floating potential of the non-contacted graphene flakes in accelerating graphene erosion. As this effect is completely suppressed in graphene devices, we conclude that there graphene hydrogenation happens primarily due to the hydrogen ions and not due to highly accelerated plasma electrons fragmenting a water add-layer on the graphene surface, as suggested in Ref. [22].
A. RF plasma conditions
The hydrogenation is performed in a reactive ion etching reactor with a parallel plate geometry, schematically depicted in Fig. 1. The diameter of the bottom electrode, on which the samples are placed, is 300 mm, while the opposite top wall of the chamber serves as a grounded counter-electrode. A high frequency generator operating at 13.56 MHz is capacitively coupled to the bottom electrode, and matching of the electrical network to the plasma is accomplished by mechanical tuning of the impedance in the circuit.
In view of safety considerations, we use a gas mixture of H2 (15%) with Ar (85%) as a balance gas. The ionization energy of Ar, E_Ar = 15.76 eV, is very close to the ionization energy of H2, E_H2 = 15.42 eV; therefore, the induced plasma is composed of ions from both species. The inlet of gas is controlled by an Ar mass flow controller. In all presented plasma hydrogenation processes the gas flow is kept constant at 200 sccm and the pressure in the chamber is 0.05 mbar. To reduce the reactivity of the plasma, especially carbon sputtering by Ar ions, we use the plasma at the lowest ignition power, P = 3 W (power density ∼4 mW/cm2), and we tune the circuit impedance to reduce the built-in DC self-bias between the bottom electrode and the plasma (V_SB in Fig. 1), which limits ion acceleration and possible sputtering effects on graphene. We analyze two cases, one where the graphene flakes are electrically insulated from the chamber electrodes (by the SiO2 substrate) and one where the flake is in electric contact with the source electrode (the latter called a graphene device). In the first case, the potential of the flake is floating, which may result in negative charging of the flake before the plasma quasi-equilibrium state and V_SB = 0 are reached (in the first 3 seconds after the plasma ignition). This charging is largely suppressed for the graphene device, which is in electrical contact with the chamber electrode. On the basis of the work of Nunomura et al. [23] we estimate that we are in a collisional regime, with ion bombardment energy in the range of 5-20 eV, and that the dominant hydrogen radicals are H3+, with much smaller concentrations of H2+ and H+. We note that the processing conditions and hydrogenation speed are different from those explored in Ref. [17]. The gas pressure in that process is 2 orders of magnitude higher than it is here, and Luo et al. used a grounded bottom electrode.
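As a quick check of the quoted power density (a back-of-the-envelope estimate that assumes the 3 W is distributed uniformly over the full 300 mm diameter bottom electrode):

$$\frac{P}{A} = \frac{3\ \mathrm{W}}{\pi\,(15\ \mathrm{cm})^{2}} \approx \frac{3\ \mathrm{W}}{707\ \mathrm{cm}^{2}} \approx 4.2\ \mathrm{mW/cm^{2}},$$

which is consistent with the ∼4 mW/cm2 stated above.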
B. Raman spectroscopy of pristine and hydrogenated graphene
Information about the H content can be obtained indirectly from Raman spectra [17]. In pristine graphene only two vibrational modes are Raman active: an in-plane optical vibration of E2g symmetry at 1580 cm−1, called the G band, produced by the sp2 carbon network, and a resonantly enhanced two-phonon scattering process around 2670 cm−1, called 2D (or sometimes G'). The presence of sp3 defects breaks the translational symmetry in the graphene lattice and activates other resonant transitions. The most significant is the so-called defect band D at 1340 cm−1, forbidden in the ideal sp2 graphene lattice. The D band results from a second order process involving intervalley elastic scattering of an electron by a defect and inelastic scattering on a phonon. It is worth noting that the 2D mode is an overtone of the D peak, with the difference that in the case of the 2D band an electron is scattered by a second phonon instead of a defect. Additionally, sp3 defects induce a much weaker D' band at 1620 cm−1, coming from intravalley defect scattering, and a peak which can be assigned to the combination of the D and G modes (G+D) at ∼2940 cm−1 [21]. These properties of graphene make Raman spectroscopy a sensitive tool for detection of chemisorbed H defects. It is worth noting that physisorbed molecules do not change the hybridization of the carbons and hence do not contribute to the Raman signal of the D band. In the Ar/H2 plasma process presented here, one also has to take into account the effect of the Ar ions, which by bombarding graphene could induce other sp3-type defects: vacancies. These defects also contribute to the D band intensity in Raman spectra; therefore, care must be taken when one assigns the D band intensity solely to H adatoms. Later in this work we prove by studies of thermal desorption that the sputtering effect of Ar ions is largely suppressed in the chosen plasma conditions (considerably low RF power, high gas pressure). This assures the hydrogen origin of the sp3 defects. To quantify the level of hydrogenation we use the integrated intensity ratio I_D/I_G of the Raman bands, which relates the amount of sp3 defects in the graphene lattice to its inherent sp2 bonds. Raman spectra are obtained using a Horiba T64000 micro-Raman spectrometer with 532 nm laser excitation wavelength, spectral resolution of ∼2 cm−1, laser spot size <10 µm in diameter and laser power below 0.5 mW to avoid laser-induced heating. First, we study the evolution of the D band and its amplitude in comparison with the G band in Raman spectra at various plasma exposure times. For that purpose we select a set of graphene flakes deposited on a SiO2/Si substrate (300 nm of SiO2) by micromechanical cleavage of Kish graphite. For each flake we obtain a pristine Raman spectrum. With that we exclude the presence of initial disorder. By analyzing the shape and FWHM of the 2D band we confirm the number of layers in the chosen flakes [24,25]. Then each sample is exposed separately to the Ar/H2 plasma for a specific amount of time and immediately after that the Raman spectrum is acquired in ambient conditions.
A typical Raman spectrum of graphene before and after the plasma exposure is shown in Fig. 2a. Hydrogenation results in the activation of additional vibrational modes, two of which are depicted in Fig. 2a. The evolution of the integrated intensity ratio between the D and G bands in the Raman spectrum, I_D/I_G, after different exposure times is shown in Fig. 2b. We note its similar behavior to that presented in Ref. [17]. The increase of the exposure time results in an increase of the ratio between the D and G bands up to the point where there are so many defects in the graphene lattice that the graphene electronic band structure is degraded, reducing the possible optical transitions for both the D and G bands [26]. The initial increase and subsequent decrease of the I_D/I_G ratio with an increasing number of defects in graphene is reported irrespective of the origin of the defects [27][28][29][30]. After hydrogenation, all original Raman bands of graphene show an increase of their FWHM, which is attributed to local deformation of the lattice and a larger variation in vibrational/phonon energy.
C. Reversibility of hydrogenation under annealing
To confirm that the defects in graphene detected by Raman spectroscopy originate from H adsorbates, we study the change of the I_D/I_G ratio after heat treatment. Comparative studies of hydrogen desorption from graphite by TPD show that H starts to desorb already at moderate temperatures, >100 ℃, with desorption maxima at 175 ℃ and 290 ℃, and the estimated activation energy for desorption is 0.6 eV [20]. Note that these temperatures are too low to heal possible vacancies in graphene. We perform the heating in a nitrogen environment on a hot-plate, with temperatures ranging from 75 ℃ to 275 ℃, each time for 1 min. As can be seen in Fig. 2c, heating results in a decrease of the ratio I_D/I_G. It starts already at 75 ℃ and continues decreasing with increasing heating temperature. Desorption of hydrogen below 100 ℃, also reported in [17], can originate from the different nature of hydrogenation by plasma in comparison with an atomic hydrogen source, as energetic ions can bind to graphene in more diverse, also meta-stable, configurations of hydrogen clusters.
After heating at 275 ℃, I_D/I_G drops below 0.2 in the case of the samples exposed to plasma for less than 1 h. The samples exposed for 80 min and 2 h show a much smaller decrease of defect band intensity with temperature. This means that after prolonged exposures the D band in these flakes must originate primarily from carbon vacancies rather than H adsorbates. In a control experiment we expose the graphene flakes to a pure Ar plasma under the same RIE exposure conditions. We observe that the pure Ar plasma induces substantial etching of graphene, with complete erosion of the flake after about 30 min. The different etching rate of the Ar/H2 plasma versus the pure Ar plasma can be explained by the mass difference between H and Ar ions. Lighter H ions are accelerated faster by the bias difference between the plasma and graphene, and they reach the graphene surface sooner than Ar ions. By charge transfer, H ions effectively neutralize the negative potential of the flake, reducing the self-bias voltage between the sample and the plasma and the acceleration of the much heavier Ar ions. Although the carbon vacancies seem to contribute substantially to the D band signal after the plasma exposures with Ar, this effect is completely suppressed in graphene devices, where the flake is in electric contact with the bottom electrode. Exposure of the contacted flake to the Ar plasma did not produce any defect-related Raman bands even after prolonged exposure (>3 h), and later in the text we show no significant change in graphene electronic mobility under the Ar plasma exposure. This emphasizes the role of the floating potential of the graphene sample in amplifying the etching speed.
D. Electronic transport in hydrogenated graphene
To gain more information about the role of different H coverages on electronic transport, we perform 4-terminal resistivity measurements in single and bilayer graphene devices after sequential exposure to the Ar/H2 plasma (devices are exposed simultaneously). The measurements are done at room temperature and in vacuum shortly after the plasma exposure. The inset of Fig. 3a shows exemplary resistivity measurements at different charge carrier concentrations for the SLG device. The carrier concentration n can be extracted from the charge induced by the gate voltage V_g with respect to the voltage of the charge neutrality point (CNP) V_D (also called the Dirac point, where the valence band of graphene touches the conduction band) by using the formula n = C_g (V_D − V_g)/e, where the gate capacitance C_g = 115 aF/µm^2 for 300 nm SiO2. Upon exposure the position of the Dirac point shifts to positive voltages, indicating hole doping from H. Linking this shift directly to the amount of adsorbed H is however not appropriate here, as the measurements are done ex-situ and other dopants, like physisorbed water molecules, could screen the doping induced by H [31]. For that reason we focus on the resistivity changes at the charge neutrality point and in a high doping regime, where graphene shows metallic behavior (here arbitrarily taken at ∼2 × 10^12 cm−2). In Fig. 3a one can see that with the increase of the exposure time the SLG resistivity changes from a few kΩ to MΩ and the BLG resistivity to hundreds of kΩ. Upon hydrogenation the resistivity difference between the CNP and the high doping regime changes from ∼3 kΩ to ∼300 kΩ, and its gate voltage characteristic broadens, indicating a large amount of charge impurities/inhomogeneities. (If one defines the width of the dependence of the resistivity ρ on the charge carrier concentration as the distance between its deflection points, then upon hydrogenation this width changes in SLG from 8 × 10^11 cm−2 to >1 × 10^14 cm−2.) As one might expect, the increase of graphene resistivity with exposure time is slower for BLG than for SLG, as there the graphene layer underneath is unexposed. Moreover, BLG shows a monotonic increase of resistivity with exposure, whereas for SLG we observe a non-monotonic change in resistivity, which suggests a change in the transport mechanism for exposure times >30 min. The same behavior is reflected by the electron mean free path l, calculated here using the formula l = 2D/v_F, where v_F is the Fermi velocity of electrons in graphene, v_F = 10^6 m/s, and D is the diffusion coefficient (obtained from the Einstein relation D = σ/(e^2 ν), where ν is the density of states). In the calculation we neglect the effect of finite temperature on the density of states (DOS) and any broadening due to charge impurities; the interlayer coupling in the DOS of bilayer graphene is γ_1 = 0.4 eV, after [32]. Figure 3b shows the change of the mean free path l with the H plasma exposure. It decreases monotonically for BLG and non-monotonically for SLG. The shaded area marks mean free path distances below the length of the C-C bond (∼1.4 Å), where the diffusive transport model loses its physical meaning. The fact that the estimated mean free path for SLG after ∼2 h of exposure enters this range provides us with evidence that the transport there can no longer be described by the semi-classical Drude model. Low temperature measurements presented in Ref.
[6] show that in the heavily hydrogenated samples the transport enters a variable range hopping regime, but the full description of this transition is still lacking.
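To make the relations quoted above concrete, the sketch below evaluates the carrier density and mean free path for a single-layer device; the gate voltage, Dirac point, and sheet conductivity are invented inputs, and the density-of-states expression used for ν is the standard single-layer graphene form, which we assume matches the authors' evaluation.

```python
import numpy as np

# Hedged sketch: carrier density from gate voltage and electron mean free path from
# conductivity, following n = C_g (V_D - V_g)/e, D = sigma/(e^2 nu), l = 2 D / v_F.
# The voltages and sheet conductivity below are illustrative, not measured values.
e = 1.602e-19                  # elementary charge (C)
hbar = 1.055e-34               # reduced Planck constant (J s)
v_F = 1.0e6                    # Fermi velocity in graphene (m/s)
C_g = 115e-18 / 1e-12          # 115 aF/um^2 expressed in F/m^2

V_g, V_D = 30.0, 10.0          # hypothetical gate voltage and Dirac point (V)
n = C_g * abs(V_g - V_D) / e   # carrier density (1/m^2)

sigma = 2.0e-4                 # hypothetical sheet conductivity (S per square)
# Single-layer graphene DOS at the Fermi level: nu = 2 E_F / (pi hbar^2 v_F^2),
# with E_F = hbar v_F sqrt(pi n).
E_F = hbar * v_F * np.sqrt(np.pi * n)
nu = 2 * E_F / (np.pi * hbar**2 * v_F**2)
D = sigma / (e**2 * nu)        # diffusion coefficient (m^2/s)
l = 2 * D / v_F                # mean free path (m)
print(f"n = {n / 1e16:.2f} x 10^12 cm^-2, l = {l * 1e9:.1f} nm")
```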
Additionally, in a control experiment we perform the same electrical characterization of graphene devices exposed to the pure Ar plasma. The change of graphene resistivity upon exposure is compared with the effect of the Ar/H2 treatment in Fig. 3a. We see that after Ar exposure the graphene resistivity does not change, remaining in the kΩ regime, and also no D band could be resolved in the Raman spectra. With these two characterization techniques we measure no influence of the Ar plasma on the graphene devices, in spite of strong graphene erosion in the case of non-contacted flakes (such flakes are completely sputtered after 30 min). This confirms that with the chosen plasma conditions no detectable damage is introduced by Ar ions and that in the flakes with zero self-bias the defects detected by Raman spectroscopy come only from H adsorbates. These findings also disprove the suggestion of Ref. [22] that under exposure to the Ar/H2 plasma the observed defect band in Raman comes from the fragmentation of a water add-layer by high energy plasma electrons. If that were the case, we should see the Raman band after exposure to Ar in graphene devices even though the Ar plasma does not introduce defects itself. The highly energetic plasma electrons from Ar ions should similarly fragment a water add-layer, which is always present in the vicinity of graphene due to the substrate used (SiO2 is hydrophilic). Since no Raman band is observed in the devices after Ar exposure, we conclude that the Ar plasma does not erode the contacted graphene and that water layers do not contribute to hydrogenation in the plasma process described here.
E. Relation between the mean free path and defect density
Having ascertained that the defects characterized by the Raman spectra originate only from H, we can relate the mean free path to the defect distances L_D extracted from the I_D/I_G ratio. The commonly used Tuinstra-Koenig experimental dependence [33], which relates the I_D/I_G ratio to the size of graphite nanocrystallites and therefore to defect distances, was obtained from X-ray diffraction measurements. Estimation of the defect concentration from that relation is inappropriate here, because in the Tuinstra-Koenig experiment only the edge defects, and not the whole surface area, contribute to the Raman scattering. We therefore apply a relation established for low energy (90 eV) argon ion bombarded graphene in Ref. [27], which in the regime measured electronically here (I_D/I_G < 2.5) has the form I_D/I_G = (102 ± 2)/L_D². The proportionality coefficient was obtained experimentally for a Raman laser wavelength λ = 514.5 nm, which is close to the one used here (λ = 532 nm), and we therefore neglect its possible energy dispersion [34]. As we measure Raman spectra after 3 different exposures (see inset in Fig. 4), the I_D/I_G ratios for the exposures in between are estimated assuming a linear increase in time between the consecutive ratios. The estimated defect distance is compared to the electronic mean free path extracted from transport measurements in Fig. 4. We observe a nonlinear relation between the defect distances and the mean free path in both SLG and BLG. Assuming a parabolic dependence of the mean free path on the defect distance, l = L_D²/σ, we obtain a scattering cross section σ of 7 nm for SLG and 4 nm for BLG. This confirms that the cross-section for electron scattering on the impurity potential is larger than the size of the structural disorder caused by this impurity. The scattering cross-section is roughly the same within the first four exposures and then strongly increases, suggesting a coalescence of the hydrogenated regions. The lower scattering cross-section in BLG supports the theoretical predictions that the impurity potential is screened more effectively in BLG than in SLG [35]. After the last exposure, the H coverage determined from the defect distance L_D is ∼0.05%.
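The conversion described above can be summarized in the following sketch, which extracts L_D from the I_D/I_G ratio in the low-defect regime and the scattering cross-section from the assumed parabolic relation l = L_D²/σ; the numbers in the example are illustrative, not the reported data.

```python
# Sketch of the conversion used in this section: defect distance L_D from the
# Raman I_D/I_G ratio (valid for I_D/I_G < 2.5), and the scattering cross-section
# sigma from the assumed parabolic relation l = L_D^2 / sigma.
import math

def defect_distance_nm(id_ig_ratio, coeff=102.0):
    """L_D (nm) from I_D/I_G = coeff / L_D^2 (coeff = 102 +/- 2 near 514 nm excitation)."""
    return math.sqrt(coeff / id_ig_ratio)

def scattering_cross_section_nm(mean_free_path_nm, id_ig_ratio):
    """sigma (nm) from l = L_D^2 / sigma, i.e. sigma = L_D^2 / l."""
    L_D = defect_distance_nm(id_ig_ratio)
    return L_D**2 / mean_free_path_nm

# Placeholder example: I_D/I_G = 1.5 and a 20 nm electronic mean free path
print(f"L_D ~ {defect_distance_nm(1.5):.1f} nm")
print(f"sigma ~ {scattering_cross_section_nm(20.0, 1.5):.1f} nm")
```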
As in Ref. [17], we find that after the Ar/H2 plasma exposure the I_D/I_G ratio for the BLG device is larger than that for the SLG device (see inset in Fig. 4). This observation is in contradiction to the Raman ratios after exposure of graphene to atomic H and when other defects are introduced [14,36]. It is also counterintuitive, as in the bilayer the presence of the second graphene layer reduces the rippling imposed by the amorphous SiO2 substrate, which should increase the potential barrier for chemisorption of H. Also, the intensity of the G band in the case of BLG should be greater than in SLG, as a bilayer resting on a substrate can adsorb H only on the top layer, leaving the layer beneath intact. With the same surface disorder, the I_D/I_G ratio for BLG is estimated to be 3.5 times smaller than for SLG [14]. From that we conclude that the binding of H in our process is effectively 4 times larger for BLG than for SLG. The observed discrepancy may be inherent to the reactivity of H₃⁺ ions, the dominant hydrogen-based component in the RF plasma, and to their dissociation mechanisms at the graphene surface. Details of this process, together with the exact evolution of the I_D/I_G ratio with the number of exposed layers, need computational verification. Monte Carlo simulations of graphite bombarded with H atoms predict that the highest adsorption rate occurs for an H beam with incident energy of 5 eV; in a higher energy range (around 15 eV) the H atoms are reflected back from the surface, and at even higher energies (>30 eV) H atoms are able to penetrate through the hexagonal ring and initiate chemical sputtering [37]. Here the plasma ion kinetic energy ranges from 5 eV to 20 eV, which covers both the chemisorption and the reflection regime for H ions. This may explain the somewhat longer exposure times needed for hydrogenation levels similar to those in Ref. [17]. It also indicates that the efficiency of this process may still be improved further, for example by increasing the gas pressure or the RF power. Although the maximum hydrogenation limit is not explored here, this plasma technique is expected to allow a much higher hydrogen uptake than the one reported here (0.05%).
III. CONCLUSIONS
In this work we report for the first time the realization of graphene hydrogenation in a reactive ion etching (RIE) system. We study the evolution of the intensity ratio of the Raman bands I_D/I_G and on this basis quantify the induced disorder. With moderate heating we are able to reverse the hydrogenation almost to the initial level, which confirms that the observed disorder in the Raman spectra stems from adsorbed H. We emphasize the importance of the graphene electric potential during the plasma exposure in suppressing erosion of the flakes. We perform electrical studies of single and bilayer graphene after several plasma exposures and link them with the amount of structural disorder characterized by Raman spectroscopy. The nonlinear correspondence between the mean free path and the estimated defect distances is highlighted, from which the scattering cross-section for a hydrogen defect is obtained. We show that under the chosen plasma conditions, hydrogenation occurs primarily due to the hydrogen ions and not due to fragmentation of a water ad-layer by highly accelerated plasma electrons. We also demonstrate that by controlling the electric potential of the graphene during the plasma exposure we suppress the sputtering of carbon atoms in graphene. For that reason the hydrogenation level can be precisely controlled and reversed. The described hydrogenation process can be easily implemented in any RIE system, which we believe will stimulate research on hydrogenated and functionalized graphene.
FIG. 2: (Color online) (a) Raman spectrum of pristine single layer graphene (black) and after 20 min of exposure to the Ar/H2 plasma (blue). Exposure induces additional Raman bands: a D band around 1340 cm⁻¹ and a weaker D' band around 1620 cm⁻¹. The increase of the FWHM of the original graphene bands (G, 2D) is apparent. (b) Integrated intensity ratio between the D and G bands of SLG after different Ar/H2 plasma exposure times. The scatter of the data for different samples is attributed to the floating potential of the graphene flake during exposure. (c) The change of the ID/IG ratio of exposed flakes under annealing on a hot plate for 1 min. The plasma exposure time for each flake is indicated next to the corresponding ID/IG values. In flakes exposed for less than 1 h the D band could be almost fully suppressed (ID/IG < 0.2), which confirms the H-type origin of the defects. In longer-exposed samples (80 min and 2 h) annealing does not significantly reduce ID/IG, which suggests a different nature of the defects there, e.g. vacancies.
FIG. 3: (Color online) (a) Resistivity of single (blue dots) and bilayer graphene (black squares) after several exposures to the Ar/H2 plasma. Filled circles represent the resistivity at the Dirac point, open circles represent the resistivity in the metallic regime (at 2 × 10¹² cm⁻² carrier density). For comparison, filled and open diamonds describe the resistivity changes in SLG after the Ar plasma exposure. The inset presents an exemplary resistivity curve for SLG. (b) Mean free path of charge carriers in graphene after the exposures. The shaded area indicates values below the length of the C-C bond, where the calculation of the mean free path loses its physical meaning. | 2011-09-08T10:46:38.000Z | 2011-09-08T00:00:00.000 | {
"year": 2011,
"sha1": "c1f3bbed6ae687b3292248151e00d13a021e1795",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1109.1684",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7bbdbdcfd6e59b5dabb31b00b20d337f20e67644",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
256336272 | pes2o/s2orc | v3-fos-license | Numerical solution for a class of parabolic integro-differential equations subject to integral boundary conditions
Many physical phenomena can be modelled through nonlocal boundary value problems whose boundary conditions involve integral terms. In this work we propose a numerical algorithm, combining a second-order Crank–Nicolson scheme for the temporal discretization and a Legendre–Chebyshev pseudo-spectral method (LC–PSM) for the space discretization, to solve a class of parabolic integrodifferential equations subject to nonlocal boundary conditions. The approach proposed in this paper is based on a Galerkin formulation and Legendre polynomials. Results on stability and convergence are established. Numerical tests are presented to support the theoretical results and to demonstrate the accuracy and effectiveness of the proposed method.
Introduction
In the last decades, the theory of integrodifferential equations has been extensively investigated by many researchers, and it has become a very active research area. The study of this class of equations ranges from the theoretical aspects of solvability and well-posedness to analytic and numerical methods for obtaining solutions. A strong motivation for studying integrodifferential equations of PDE type comes from the fact that they can serve as mathematical models for many problems in physics, mechanics, biology and other fields of science.
In this work, we are concerned with the numerical solution of the following parabolic integrodifferential equation: (2.1) where the functional K(·, ·) is defined as follows: Here and in what follows, we use the notation (·, ·) to denote the L²-inner product and ‖·‖ for the induced norm on the space L²( ). Denote by H^m( ) the standard Sobolev space, with norm and semi-norm denoted by ‖·‖_m and |·|_m, respectively. Solvability of the above variational problem is addressed in the following theorem [10].
Space discretization: LC-PSM
Let P_N( ) be the space consisting of all algebraic polynomials of degree at most N, and denote by I_N^C the associated interpolation operator. Based on the above weak formulation, we pose the semi-discrete Legendre-Chebyshev Galerkin scheme as follows. Let L_k be the kth degree Legendre polynomial defined by the following three-term recurrence formula: We recall that the set of Legendre polynomials is mutually orthogonal in L²( ), namely: Let N be a positive integer; we define [5] The following lemma is the key technique in our algorithm.
Lemma 2.2 [22] For two integers j, k ∈ N, let us denote, and Thanks to linear algebra arguments one can easily prove that Consequently, the numerical solution u_N of (2.3) can be expanded in terms of (ϕ_k), k = 0, ..., N, with time-dependent coefficients, namely (2.5) Inserting (2.5) into (2.3) and taking v = ϕ_j, 0 ≤ j ≤ N, we obtain the following system of ODEs. Then, the initial value problem (2.6) and (2.7) can be written in matrix formulation as follows: The coefficients m_jk and p_jk are already determined in Lemma (2.2). For the matrix Q, one can use the values of ϕ_j(±1) to determine its entries. In fact, since ϕ_j(±1) = 0 for 0 ≤ j ≤ N − 2, Q is an almost-null matrix except for the two last rows, whose entries
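As a small aside on the Legendre basis underlying the expansion above, the sketch below implements the standard three-term recurrence (k+1)L_{k+1}(x) = (2k+1)xL_k(x) − kL_{k−1}(x) and numerically checks the mutual orthogonality of the Legendre polynomials on [−1, 1]; NumPy's Gauss–Legendre quadrature is assumed, and the sketch is illustrative rather than part of the scheme itself.

```python
# Sketch: Legendre polynomials via the standard three-term recurrence
# (k+1) L_{k+1}(x) = (2k+1) x L_k(x) - k L_{k-1}(x),
# with a quadrature check of mutual orthogonality on [-1, 1].
import numpy as np

def legendre(k, x):
    """Evaluate L_k(x) by the three-term recurrence."""
    Lm, L = np.ones_like(x), np.asarray(x, dtype=float)
    if k == 0:
        return Lm
    for j in range(1, k):
        Lm, L = L, ((2 * j + 1) * x * L - j * Lm) / (j + 1)
    return L

# Orthogonality check: (L_j, L_k) = 2/(2k+1) * delta_{jk}
nodes, weights = np.polynomial.legendre.leggauss(32)
for j in range(4):
    for k in range(4):
        ip = np.sum(weights * legendre(j, nodes) * legendre(k, nodes))
        expected = 2.0 / (2 * k + 1) if j == k else 0.0
        assert abs(ip - expected) < 1e-12
print("Legendre orthogonality verified up to degree 3")
```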
Fully discrete scheme
For time advancing, we use the second-order Crank-Nicolson scheme to discretize the differential system (2.8). For a given positive integer M, we define the time step; applying the scheme then leads to the following recurrent algebraic system: The above algebraic system can be solved easily using either direct or iterative methods. As a choice, one can use the QR factorization method, given its accurate results and ease of implementation.
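Since the matrices of system (2.8) are not reproduced here, the following is only a generic sketch of one possible realization of a second-order Crank-Nicolson step for a linear system M u'(t) = −A u(t) + f(t), with each step solved through a QR factorization as suggested above; M, A and f are placeholders, not the actual Legendre-Chebyshev matrices.

```python
# Generic sketch of Crank-Nicolson time stepping for M u'(t) = -A u(t) + f(t),
# each step solved via a QR factorization of the (constant) left-hand matrix.
import numpy as np

def crank_nicolson(M, A, f, u0, T, n_steps):
    dt = T / n_steps
    lhs = M + 0.5 * dt * A                 # (M + dt/2 A) u^{n+1}
    Qf, Rf = np.linalg.qr(lhs)             # factor once, reuse every step
    u, t = u0.copy(), 0.0
    for _ in range(n_steps):
        rhs = (M - 0.5 * dt * A) @ u + 0.5 * dt * (f(t) + f(t + dt))
        u = np.linalg.solve(Rf, Qf.T @ rhs)  # triangular solve after QR
        t += dt
    return u

# Tiny usage example with placeholder data (3 unknowns, M = identity)
M = np.eye(3)
A = np.diag([1.0, 2.0, 3.0])
f = lambda t: np.zeros(3)
u_final = crank_nicolson(M, A, f, np.ones(3), T=1.0, n_steps=100)
print(u_final)   # approximates exp(-1), exp(-2), exp(-3)
```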
Error analysis
In this section, we derive an L²-error estimate for the error e_N(t) = u_N(t) − u(t). For this purpose, we first recall, in the next subsection, a sequence of lemmas that will be needed to perform the error analysis.
Preliminaries
Now, we introduce two projection operators and their approximation properties. First, let P_N : L²( ) → P_N( ) be the L²-orthogonal projection, namely: We also define the operator P¹_N. From the definition of P¹_N, one can obtain: Next, we give the approximation properties of the projection operator P¹_N and the interpolation operator I_N^C. Lemma 3.1 [14] If v ∈ H^r( ) with r ≥ 1, then the following estimate holds, where C > 0 is a positive constant independent of N.
Lemma 3.2 [14] Let v ∈ H¹( ); there exists a positive constant C independent of N such that where C > 0 is a positive constant independent of N. Remark 3.3 Under the same assumptions as in Lemma (3.2), using approximation property (3.3) we can obtain the following inequality: Now, we derive a basic estimate that will be used later in our proofs.
Error estimates
In this subsection, we consider the stability and convergence of the semi-discrete approximation (2.3). We first state a Gronwall-type inequality that will be used in the proof of our main results.
Lemma 3.5 Let E(t) and H (t) be two non-negative integrable functions on
where C 1 , C 2 ∈ R + . Then there exists C > 0 such that Proof For a non-negative function E(t), we perform a permutation of variables to obtain: Hence, inequality (3.7) of Lemma (3.5) becomes Now, applying the standard Gronwall inequality yields the desired estimate (3.8).
Theorem 3.6 Let u_0 ∈ H¹( ) and f ∈ C¹(0, T; H¹( )). Then the solution u_N(t) of (2.3) satisfies (3.10). We have to estimate the terms on the right-hand side of (3.10). For the first term I_1, we use hypothesis (1.4) and then apply the Cauchy and Young inequalities. (3.11) Next, we combine the Cauchy and Young inequalities with approximation property (3.5) to estimate I_2. (3.12) The estimate of I_3 is an immediate consequence of Lemma (3.6), namely: Putting things together and choosing 0 < ε < 1 yields (3.14). Integrating both sides of (3.14) from 0 to t, we obtain where Thanks to the Gronwall-type inequality (3.5), we get Lemma 3.7 Assume that u ∈ C¹(0, T; H^r( )), r ≥ 2. Then the following estimate holds, where C > 0 is a positive constant independent of N.
Proof From (2.1), (2.3) and (3.1) we know that, for a fixed t ∈ J, θ_N(t) satisfies for all v ∈ P_N( ) the following error equation: Now, we estimate the terms on the right-hand side of inequality (3.17) using a standard procedure. For the term I_1, we apply the Cauchy and Young inequalities and take (1.4) into account. In a similar manner, we can obtain for I_2: By virtue of approximation property (3.2), we bound I_2 as follows: For the term I_3, (3.21). Similarly, To estimate the term I_5 we use Lemma (3.4). Setting w = θ_N(t) + ρ_N(t) and v = θ_N(t) in (3.6) yields, using the triangle inequality, hence, due to Lemma (3.1), one can obtain: In virtue of the above estimates, inequality (3.18) becomes By taking ε sufficiently small and integrating (3.26) over (0, t), we obtain where The Gronwall-type inequality (3.5) implies Taking into account that, for all 0 < t ≤ T, this gives the desired result.
Theorem 3.8 Let u(t) and u N (t)
be the solutions of (2.1) and (2.3), respectively. If u ∈ C¹(0, T; H^r( )) with r ≥ 1, then the following error estimate holds, where C > 0 is a positive constant independent of N.
Proof Using the triangle inequality, we have By the aid of Lemmas (3.2) and (3.7), for all t ∈ J we obtain This completes the proof.
Numerical experiments
In this section, we carry out several numerical experiments to verify the efficiency and accuracy of the proposed LC-PSM, and we compare our results against results obtained using other methods. Example 4.1 In this first test problem, the following parabolic integrodifferential equation is considered: The exact solution to the above integrodifferential problem is given as: Figure 1 presents the computational results obtained by applying the LC-PSM to the above test problem, where the profiles of the exact and approximate solutions as well as the absolute error are plotted. From the numerical results illustrated in Fig. 1, one can observe that the approximate solution shows excellent agreement with the exact solution, which confirms that the LC-PSM yields a very accurate and efficient numerical method for the resolution of nonlocal boundary value problems of integrodifferential parabolic type.
For comparison purposes, in Tables 1 and 2 we compare our computational results with the results obtained in [10]. Clearly, the LC-PSM proposed in this paper gives more accurate solutions with less CPU time than the finite difference scheme used in the mentioned reference.
Example 4.2
To examine the spatial discretization, we take in this example a test problem that has an analytic solution with limited regularity. Let us consider the following problem: The exact solution is given as follows: We first choose a time step small enough so that the error of the temporal discretization can be eliminated, and let the polynomial degree N vary. Table 3 shows the error in the L² and L∞ norms at the selected time t = 1; going through each line, one can observe increasing accuracy until the error of the temporal discretization becomes dominant.
To examine the theoretical result, we plot in Fig. 2 the decay of the L²-error versus N on a log scale, together with lines of decay rates N⁻² and N⁻⁴. As expected, the L²-error of the LC-PSM for the problem solved in this example has a rate of convergence between N⁻³ and N⁻⁴, which supports the results established in Theorem (3.8), since u ∈ H³( ) and u ∉ H⁴( ).
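As a small illustration of how such an observed algebraic convergence order can be read off from two error values at different polynomial degrees, assuming e(N) ≈ C N^(−p), the sketch below uses placeholder error values rather than the entries of Table 3.

```python
# Sketch: estimate the observed convergence order p from two L2-errors,
# assuming e(N) ~ C * N^{-p}. The errors below are placeholders.
import math

def observed_order(N1, e1, N2, e2):
    return math.log(e1 / e2) / math.log(N2 / N1)

print(observed_order(8, 1.2e-3, 16, 9.5e-5))   # ~3.7 for these placeholder errors
```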
Conclusions
In this paper, we have been concerned with the implementation and analysis of a spectral method to solve a class of integrodifferential parabolic equations subject to nonlocal boundary conditions of Neumann type. We combined the Legendre spectral method based on a Galerkin formulation to discretize the problem in the spatial direction with the second-order Crank-Nicolson finite difference scheme for the temporal discretization. A rigorous error analysis has been carried out in the L²-norm for the proposed method, and the computational results of the numerical examples have supported the theoretical results. Moreover, a comparison with a fully finite-difference scheme clearly shows that the presented method is computationally superior, with less required CPU time. It should be noted that other high-order methods can be used for the time integration to improve the accuracy of the full discretization. Convergence and stability of such combinations remain to be discussed.
In future work, we plan to investigate how to implement a space-time spectral method for the resolution of this class and other challenging models, such as nonlocal boundary value problems in the two-dimensional case and fractional integrodifferential problems. | 2023-01-29T15:09:31.588Z | 2022-04-11T00:00:00.000 | {
"year": 2022,
"sha1": "2db71599be337a64d1788c4bd82b2d0b010fbf54",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40065-022-00371-3.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "2db71599be337a64d1788c4bd82b2d0b010fbf54",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
235222626 | pes2o/s2orc | v3-fos-license | Utilization of a Reduced Graft from a Severely Traumatized Liver: A Case Report and Strategy to Increase Availability of Livers for Transplant
Background: Trauma victims with liver lacerations in the hilum are typically excluded from liver donation. We report a case of a successful liver transplant from a deceased donor with a grade 4 hilar liver laceration. Case Presentation: We used a liver with a high-grade laceration from a 28-year-old brain-dead traffic accident victim. The liver had grade IV lacerations in the right and caudate lobes. The in situ split liver technique was applied to control the lacerations after an intraoperative cholangiogram revealed favourable anatomy. The left hemi-liver graft was procured, retaining the entire vena cava and the full length of the main hepatic vasculature. The recipient was a 62-year-old female patient with end-stage liver disease, with a Model for End-Stage Liver Disease-Sodium score of 19. The left lobe graft was transplanted using the standard piggyback technique. The patient was discharged on postoperative day 7 after an uneventful recovery. At two-month follow-up, she continues to do well, with normal hepatic function and unremarkable imaging studies. Conclusion: This is the first reported case of a successful liver transplant of a severely lacerated liver made possible by the application of split liver techniques. In situ splitting of a severely traumatized liver could permit the utilization of a reduced graft for small recipients. © 2021 Marlon F. Levy. Hosting by Science Repository. All rights reserved.
Introduction
Multiple strategies have been employed to increase the donor pool in liver transplantation. Livers from donation after circulatory death, from donors of advanced age, and from donors with steatohepatitis are all successfully transplanted. Yet traditionally, organs from trauma victims whose cross-sectional imaging or exploratory laparotomy depicts damage to the portal hilum have been excluded from donation [1]. Trauma is a common cause of multiorgan donor death [2]. Multiple concerns exist in using livers that have grade 4 and 5 liver injuries [1]. Chief among these concerns is that damage to the portal and arterial inflow to the liver or to the hepatic outflow may lead to thrombosis and early graft failure. As such, many centers exclude donors with known grade 4 and 5 liver injuries from consideration for donation. Moreover, limited reports exist on the use of these livers [1,3-5].
Split liver transplant, while widely accepted in the pediatric population, remains underutilized in adults [6,7]. Advances in surgical techniques and donor-recipient matching, however, have allowed the use of split liver transplants in adults to expand [6,7]. In particular, transplant centers with living donor experience have shown excellent outcomes. This has further validated the feasibility of both living donor and split liver transplantation [6,7]. We report a case of a successful liver transplant (LT) performed using a liver with a high-grade laceration procured with an in situ split liver technique.
I Donor
The donor was a 28-year-old female patient who had abdominal and chest trauma and a severe head injury from a motor vehicle accident. On admission, she had a Glasgow coma scale score of 3. Her systolic blood pressure was 110 mmHg. A computed tomography scan showed a massive intracerebral hemorrhage, a grade IV liver laceration involving segments I, IV, V, VI, and VIII (Figures 1A & 1B), a grade I splenic laceration, and a left pneumo-hemothorax. The donor was subsequently declared brain dead 1 day later, and her family consented to organ donation. The donor's mean arterial blood pressure was maintained with low doses of norepinephrine. Her liver function tests prior to procurement were: aspartate aminotransferase 32 U/L, alanine aminotransferase 214 U/L, total bilirubin 1.3 mg/dL, and albumin 3.5 g/dL. A multiorgan procurement was performed on day 4 of her admission. Multiple liver parenchymal lacerations were observed in the right lobe. Intraoperative liver ultrasound showed a hypoechoic lesion located in segments I and VIII. An intraoperative cholangiogram was also performed following aortic cannulation. It showed a biliary injury and subsequent leak from the caudate branch (Figure 2). The right hepatic artery and the right branch of the portal vein were identified and ligated. The line of ischaemia was used to determine the plane of transection. Parenchymal division was performed with a clamp-crushing technique. An intrahepatic hematoma and a biloma were found along the plane of transection. After cross-clamp and perfusion, the right hepatic vein was divided with a vascular stapler, and the left lobe graft was procured along with the entire vena cava, the common bile duct, the main portal vein, and the celiac trunk (Figure 3). On the back table, we performed another bile leak test using methylene blue. The sites of leakage were sutured with 4.0 PROLENE sutures. The graft weight was 544 g, and the graft-recipient weight ratio (GRWR) was 1.15.
II Recipient
The recipient was a 62-year-old female patient who had alcoholic liver cirrhosis and refractory ascites. Her Model for End-Stage Liver Disease Na (MELD-sodium) score was 19 points. She was sequence number 42 on the offer, with eight centers having declined the organ ahead of our center. At our center, we declined this organ for 17 recipients ahead of the patient who was ultimately transplanted. The liver graft was transplanted using the piggy-back technique. Organ reperfusion was homogeneous and without bleeding or bile leak from the resected surface. Cold ischaemic time was 5 hours 18 minutes, and warm ischaemic time was 29 minutes.
The postoperative course was uneventful and notably without bile leak. All surgical drains were removed within 5 days after surgery. The patient was discharged on day 7 with normal liver function. The patient recovered well 3 months after surgery without any complications. At most recent follow-up, the allograft function was excellent (AST = 19 U/L, ALT = 24 U/L, Bili = 0.3 mg/dL). CT angiography with reconstructions affirms the transplanted anatomy (Figure 4).
Discussion
In the United States, brain-dead donors represent the vast majority of organ donors, and a substantial number of these patients have suffered trauma [2]. Lacerated livers are generally considered high-risk organs for developing initial poor function or primary non-function. Several risk factors, such as the parenchymal injury in and of itself, injury aggravated by warm or cold ischaemia time, bleeding or thrombosis of damaged vessels, or the potential for hepatic abscesses or bilomas, may all increase the risk of early graft loss [1,8]. Broering et al. reported that macroscopically traumatized livers result in a higher incidence of primary non-function requiring urgent re-transplantation, with a 64% 6-month graft survival rate, in their 14-case series of transplanting livers with high-grade lacerations [5]. There have been several individual case reports of whole-organ transplants with high-grade lacerations. Lacerated livers have been managed by a combination of sealants, glue, suturing, packing, or arterial embolization [1,3,4]. In our case, we employed an evaluation and management strategy similar to that used in our living donor assessments. Specifically, we reviewed the films as a group to gain consensus on feasibility, performed an early intraoperative cholangiogram, and procured the liver with experienced living donor surgeons. Because the injury was confined for the most part to the right lobe, our experience with left lobe living donor grafts greatly informed our strategies.
One of the most challenging problems in using livers with grade 4 and 5 injuries is the management of associated bile duct injuries [8,9]. The incidence of ischaemic intrahepatic bile duct injury following liver transplantation is 5-19% [8]. These injuries are characterized by non-anastomotic biliary strictures and dilatation in graft tissue, which occur mostly within six months after liver transplant [10]. Intrahepatic bile duct injury may lead to bile leakage, ascites, biloma, hemobilia, and abscess [1,5,11]. Some etiologic factors for intra-parenchymal bile duct injury include hepatic artery thrombosis, ischaemia, preservation injury, and trauma to the biliary tract [5,11]. To diagnose bile duct injury, we recommend performing intraoperative cholangiography early in the procurement and/or cholangiography during the back-table procedure [1,5]. However, as mentioned above, the potential risk of biliary complications, even if the cholangiogram is negative, is high. It is generally thought that traumatized livers fulfilling Moore injury grade IV or higher, along with extensive vascular disruption, are too severely damaged to be managed appropriately and considered for LT [1].
Application of split liver techniques could be a more reasonable approach than attempting to use the whole graft in these cases. Partial liver grafts, however, are predisposed to a higher rate of vascular or biliary complications resulting from anatomic variation and difficulty in vascular and biliary reconstruction [12]. Since the left lobe graft in our case retained the entire vena cava and the full length of the artery, portal vein, and bile duct, anastomoses could be constructed as they are in whole-organ liver transplant. Such anatomical advantages promise excellent outcomes [7]. We consider it crucial that the partial graft in these instances retain all main vascular and biliary branches. Further, a minimal dissection technique as utilized in LDLT should be used for split liver transplant to optimize the blood supply to the recipient bile duct, particularly when choledocho-choledochostomy (as opposed to choledocho-jejunostomy) is performed [13].
Careful donor and recipient selection is also essential for achieving successful outcomes. Choosing patients with an oncological indication such as HCC who are in good physical condition, which means patients with a lower MELD score, would be beneficial when transplanting a lacerated liver [1,3-5]. Moreover, hemi-liver split liver transplant for adult recipients carries the potential risk of graft failure due to size mismatch. A graft-recipient weight ratio (GRWR) of no less than 1.0% is generally considered the minimal requirement in hemi-liver split liver transplant to avoid early graft dysfunction [14]. To achieve such graft-recipient matching, split grafts should be taken from larger donors and transplanted into smaller recipients. The duration of cold and warm ischaemic time is also crucial. Although there are no defined hard limits, 8-14 hours of cold and below 45 minutes of warm ischaemic time have been reported as safe cut-offs [15,16]. In our case, we chose a small recipient with an acceptable MELD score, resulting in a GRWR of 1.2. The use of a 'local' donor was ideal for achieving a shorter cold ischaemic time. The donor team needs to work more closely than usual with the recipient team in these cases to avoid a prolonged cold ischaemic time. Coordination with the teams procuring thoracic organs is also essential, as a careful in situ split reduces the possibility of vascular and biliary injuries and minimizes the cold ischaemia time.
In summary, the successful application of well-established split-liver transplantation techniques, along with the extensive experience of a living-donor liver transplant team, allowed the use of a young allograft in a suitably selected recipient, with excellent outcomes. When organ procurement organizations are confronted with such a liver injury in an organ donor, we recommend that they consider extending allocation to teams that may safely make use of this resource.
Funding
None. | 2021-05-27T23:36:23.262Z | 2021-04-30T00:00:00.000 | {
"year": 2021,
"sha1": "ad4641d5ad2bf3ce4e209f285ab0fbe8d80ec34c",
"oa_license": "CCBY",
"oa_url": "https://www.sciencerepository.org/articles/utilization-of-a-reduced-graft-from-a-severely-traumatized-liver_SCR-2021-4-109.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ad4641d5ad2bf3ce4e209f285ab0fbe8d80ec34c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244968180 | pes2o/s2orc | v3-fos-license | I am what I am: Exploring the identity construal in Pakistani School EFL Textbooks
This paper aims at exploring how identity is construed in children's literature and how the powerful legitimize particular identities for textbook consumers by exercising their influence. Drawing on Systemic Functional Linguistics (Halliday and Matthiessen, 2014), particularly Genre theory (Martin and Rose, 2008), it examines how English language textbooks used in Pakistan are written to construe, project, and normalize a particular sociocultural identity. The sociocultural positioning projected through the textbooks can be norm-conforming, contesting, or can suggest otherwise. The majority of students in Pakistan are mandated to learn from state-governed textbooks, which serve to build up a sociopolitical identity in them. Therefore, the underlying semiotic modalities realizing a perspective need to be explored in order to unfold the discursive strategies for constructing identity. It is widely acknowledged that an educational curriculum is the most effective tool to construct and circulate a reality. Therefore, challenging any literacy pedagogy embedding particular outcomes can help transform educational practices across the school curriculum (Martin and Rose, 2012). The data comprise the Punjab English textbooks for government schools. The findings suggest that the intriguing intricacies of textbook discourses can be successfully examined through analyzing linguistic patterns and that the textbooks construe sociocultural identity. The findings also provide insightful implications for discourse analysis based on SFL by contributing explorations of identity.
Introduction
Bucholtz and Hall (2005) state that "identity is the positioning of self and other" (p. 586). As compared to the past, identity is no longer a fixed criterion. Benwell and Stokoe (2006) postulated that identity at first glimpse is a projection of one's self and is further developed into the perception of a social and collective identity (p. 17). From this notion, identity became a subjective enterprise. Identity is constructed and negotiated via discourse, which incarnates in itself the social semiotics (Bucholtz & Hall, 2005, p. 586). Martin & Rose (1991) conjugated social semiotics with genre by relating that genre is a socially distilled type of text that contains grammar, lexis, dialogism, narratives, and discourse, which are consummated as semiotic resources of interaction. This paper is an attempt to enlist this approach, drawing particularly on Systemic Functional Linguistics through genre theory. It scrutinizes the English textbooks employed in Pakistani schools, which are used as a written medium to shape and predict the identity of a student. This paper operationalizes SFL as a medium to recognize the ideational and textual metafunctions that enable a student to develop an identity and a view of the world. This paper further parses the English textbooks written and used in the Pakistani context, Punjab. The indexicality factor visibly present in textbooks relates the semiotics of linguistic forms to the identification of a person with a social construct (Bucholtz & Hall, 2005, p. 596), and through this, identity becomes a style when it comes to semiotics. Behind the veil of identity, ideology plays the role of a paramount constituent. Ideology is a construct of hegemony, which prevails through educational manifestations. To probe the relationship between identity and ideology incarnated in the curriculum through education, two frameworks have been implemented: firstly, the content of the textbooks is seen under the lens of Genre theory; secondly, through SFL, the ideational and textual metafunctions are explored. Wang (2016), in establishing a relationship between schools and ideologies, stipulates that through textbooks schools impart ideologies to students. Textbooks contain social, cultural, political, economic, religious, and all other dimensions that help in making the identity of a person. Withal, Apple (2019) envisions that education constructs our identity at two levels: first at the individual level and secondly at the broader (national or international) level. Ideology and curriculum are interlinked, as they shape up an institution, the knowledge being formed, and the instructor. Consequently, Apple (1978) points out that educational institutions have a dominating power in society. Schools are important to maintain the social order. They are considered a source of 'legitimate knowledge'. Schools serve to promote the ideologies of the powerful ones (p. 63).
The findings illustrate that the textbooks promote three foremost identities, identified by applying genre theory and SFL. Considering the findings, one can discern how educational institutions fabricate our identity and make us social animals by narrowing down our vision and individual will.
Moreover, this study highlights how identity is constructed through Punjab Government textbooks in Pakistani schools. Mahboob (2015) relates that Pakistani textbooks denote the religious and political identities constructed among the students. Through this, they develop a norm to be followed by all social members. This paper focuses on textbooks, which are the essence of educational institutions. Since the majority of the population attends schools only, the basic hidden identities find their roots implanted in the minds. Mahboob (2013) points out identity management and language variation in his article. He focuses on the government-approved textbooks being taught in Sindh. He examined how the content and the language used in the textbooks promote hidden ideology and how they make identities. He says that through these textbooks we can make a local identity that is acceptable to a particular socioeconomic forum. He highlights that the educational system no longer serves education; rather, it has now become a zone of experiments conducted by powerful groups, which is unknowingly affecting the construction of prevailing identities.
This study aims to determine how Government schools in Pakistan can utilize instructive materials to develop students' identity in keeping with socio-economic status. Status plays a part in controlling the substance and language introduced in the educational plan. It also aims to highlight the ideological paradigm of textbooks that are part of children's educational habits and try to manage their identities.
Review of Related Literature
Ideology and identity have invariably been a matter of interest for scholars, and an enormous debate has taken place over them. Bucholtz & Hall (2004) elucidate that identity and ideology are interconnected by power, representing beliefs and ideas given culturally. Ideologies can easily manipulate the social, political, economic, and religious aspects of a person. They help people to maintain a position in a social world. Verschueren (2012) indicates that ideology and identity are linked together. Ideology prevails across all genres, such as education, teaching, reports, and other media. It creates a hegemony through the dominance of powerful people. Identity is one of the major themes of ideology. Ideology forms the reality and a norm that is acceptable in a social world (pp. 10-12). Shahrebabaki (2018) relates that identity means sameness, an association with a particular social group. One can identify oneself with the linguistic variety used (pp. 217-220). Fina et al. (2006) demonstrate that identity is rooted in history, not as a conscious process. Furthermore, Fina (2018) points out that there are many types of identities, such as individual, collective, social, personal, and situational, though all of them are interconnected and shape each other (pp. 268-269). Milner (2010); Bucholtz & Hall (2005); Kopytowska (2012); Bernstein (1998); and Kress & Hodge (1993); to cite a few, stipulate that identity is not constant, is always in action, is inculcated in representation and enactment, and that there is always an actor and a viewer.
Textbooks and the curriculum are at the centre of any educational medium used to nurture the hidden motives of a powerful group. Nuan et al. (1998) state that curriculum refers to expectations related to what is to be learned, its purpose, its outcomes, evaluation, and the role of teachers. Moreover, Branzila (2018) postulates that children's literature acts as a medium to connect its readers to a socially acting world by fantasizing about it. Besides textbooks, it instils personal, national, and traditional identity, and it also promotes national identity in readers. Barnard (2004) states that textbooks serve ideological purposes as they come directly out of society. Students become immune to ideologies and consider the data presented in the textbooks as factual and authentic, which makes up a society. Curdt-Christiansen & Weninger (2015) highlight that textbooks make ideologies visible, representing the social structure of society. Textbooks contain historical, cultural, social, and economic outlooks. Content presented in textbooks is official and has a determination behind it. It contains both visible and underpinning aspects that lie in the construction, counts, and impacts of knowledge. Shah et al. (2013) further state that textbooks used in schools are of political and educational interest and are seen in a conflict between the two. Punjab matric-level textbooks promote religious ideology. These textbooks have negated the freedom to act and of speech and mostly promote religious ideology. This is done to promote the ideology of political beliefs (pp. 113-120).
Since each discourse is the quintessence of a corresponding genre, a number of genres are adopted by curriculum creators. In this sense, Swales (1990) proposed that genre is the distribution of texts into different categories. It helps us to know about the language used in a setting, and it has a role in academic discourses as well. One can differentiate among genres through the discourse incorporated in a text (pp. 01-28). A genre can be of any type, such as literary studies, written, video, rhetoric, spoken, music, folklore, and linguistics. It is all about choice, as its hidden motive is to highlight socially accepted norms and values (pp. 33-34). Bawarshi (2000) states that genre functions as a social venture, as it helps us to enact our selves in a social world. It helps in realizing the situation and the roles we have to perform, that is, to enact identity. Through different genres one can perform differently, and multiple identities emerge out of this. By reading different genres we can perform in a social semiotic world.
Methodology
This paper scrutinizes identity construction via textbooks in Pakistan by applying Halliday's model of Systemic Functional Linguistics, specifically the ideational and textual metafunctions. The research sample was extracted from the approved Pakistani English textbooks of grades 7, 8, 9, and 10 employed by Government schools of Punjab, and the texts total twenty-one (21) in number. The basic paradigm of this study was to analyze the textbooks at two general levels: the first level was to analyze the content of the books; the second was to anatomize the language used in the books through Halliday's Systemic Functional Linguistics model of ideational and textual functions.
In the light of Halliday's Systemic Functional Linguistics, the present study aims at exploring answers to the questions mentioned below: 1. How are identities imprinted on the mind through textbooks? 2. What are the genres used in textbooks to establish identities? 3. What are the ideologies behind the identity construction implied through textbooks as a tool?
Theoretical Background
Halliday introduced three metafunctions of language, termed the interpersonal, ideational, and textual strata. These are the potential functions performed by a language in terms of social interaction and ideology construction. These metafunctions lay bare the layers of how a language is used in a context and how it is realized. Language is used as a system realized in a textual mainframe.
The ideational function is the construction of the social actions performed by a person. It is involved in construing, recounting, and retaining experience. It is related to the semantic and functional manifestations of language rather than to syntactic orientation. The main concerns of the ideational function are the illustration of content and the conveying of information; it helps in constructing shared competence within a discrete community. A text can vary as general, specific, structured, or unstructured, and the ideational function explains how an action is performed and the processes involved in it. The textual function is related to the verbal world, particularly how information is presented in a text. It concerns the parts and parcels of a sentence as an encoded message. It is the illustration of competence (identity in general) by means of verbal expression incarnated in textual form. It involves the portrayal of self through the incorporation of language in a piece of text. It is concerned with the realization of a text as a dialogue or monologue, and with whether the field is accompanying or constitutive. Although there is no thorough designation of the context of culture, some comprehensive classes of context have been acknowledged for quite a while, and they are manifested in three terms: field, tenor, and mode.
Genre lies beyond the boundaries of language as given by Halliday. It accompanies all aspects of a text in a single term. It gives power to people to control and manage the ideology they tend to promote. Genre is a wider term containing the interpersonal, ideational, and textual metafunctions within it. Genre can be divided into histories, reports and explanations, and stories. The historical genre focuses on past events and shows the manipulation of time. It aims to show what causes lead to a particular event and their long-established effects. Under the historical genre typology come biography (which focuses on life accounts), history (which centres on public chronicles), and personal recounts (whose pivot is personal narrative). In reports and explanations, the data is classified and described through observations, experiments, and the effects of viable causes. The categories that come under this genre are factorial (constituting the facts and figures), sequential (manifesting the cause-and-effect relationship), and consequential (how multiple causes affect a single event). In stories, we have fables and tales with an orientation, a complication, and a resolution. Their impetus is to educate students by narrating the accounts of others and inferring a lesson from them. The last genre is poetry, which has a rhythm and a musical quality that produce an enduring effect on every soul.
Discussion and Analysis
Identity management is done in various ways in the textbooks approved by the government. One can identify it through the language and the ideologies promoted in a text, that is, through the content of the English textbooks. Another way is to analyze it through the language used in the textbooks. The analysis is therefore done at both the content and the language level.
Content analysis of the Textbooks
Textbooks provide us with a wide range of contents accompanying all aspects of life. The choice of content placement is made at a higher level, and it points to hidden ideologies. We have followed Martin & Rose (1991), who describe "genre as part of a functional model of language and attendant modalities of communication." The government approves only content that is in favor of its ideologies; the curriculum is spread across the districts as a national curriculum, and ideologies are thereby promoted implicitly. Through content analysis, we can analyze the distribution of the material across disparate genres. By analysing the contents of the Punjab government-approved textbooks of grades 7, 8, 9, and 10, we came to know that identities are promoted by placing materials of different genres in a single textbook. Through this analysis, we ascertain three identities: religious, national, and moral.
Genre Analysis
Genre distribution of the texts present in the textbooks is carried out to learn about the recounts and the placement of the narratives. Genre accompanies every aspect of life, whether verbal, nonverbal, chat, etc. In a social context, genre can be seen as a functional model. Its purpose is to promote implicit and explicit motivation and the effect produced by it. It shows what writers have written and the reasons behind it. We can analyze the content, style, structure, language, and audiences. The contents of the English textbooks of grades 7, 8, 9, and 10 are divided into four genres: historical, reports and explanations, story, and poetry.
Within the medium of the historical genre, texts can be divided into biographical, historical, and personal recounts. Texts such as "The Last Sermon of Rasool Hazrat Muhammad ﷺ", "Tolerance of the Rasool ﷺ", "Sultan Ahmad Masjid", and "Hazrat Muhammad ﷺ an Embodiment of Justice" contain histories. "Quaid-i-Azam", "Hazrat Umar (R.A)", "Hazrat Asma (R.A)", and "The Quaid's Vision and Pakistan" illustrate biographical recounts. "Eid-ul-Azha" and "Faithfulness" lie under personal narrative. "The Saviour of Mankind" comes under the category of narrative (history). "All is Not Lost" comes under the category of a personal recount. "The Rooster and the Fox", "Clever Mirchu", "A Great Virtue", and "Try Again" demonstrate the genre of the story, as moral lessons can be taught more effectively through the form of stories as compared to facts. In the reports and explanations genre, "Traffic Sense" describes a consequential effect, "Hockey" contains factual information and the importance of the sport, and "Patriotism" lies under the sequential explanation. "A Nation's Strength" and "Prayer" accompany the genre of poetry, where "A Nation's Strength" is related to national identity and "Prayer" plays its role in the construction of religious identity.
Language of the Textbook
Identities are constructed in the Punjab government-approved English textbooks through the use of language. In a written text, the incorporation of language is vital. Language plays an intrinsic role in the maintenance and sustenance of identity. It enables people to know their culture and to be known in a social world.
Religious Identity
Werbner (2010) points out that the construction of religious identity is done at a broader level. Power plays a vital role in its maintenance. It makes people inclined towards religion, ritual, and its performance in a social world (pp. 240-244). Language pertaining to religious words, rituals, and religious personalities is integrated into the English textbooks. Biographies, histories, personal recounts, and poems play a requisite role in promoting religious identities. The first text promotes religious identity by representing one of the incidents from the life history of Hazrat Muhammad ﷺ, the last prophet. The text starts with a description of the religion Islam and is followed by the last sermon of Hazrat Muhammad ﷺ on the occasion of Hajj, on "9th Zil-Hijjah" at Arafat, an Islamic liturgy. It lies under the historical genre, in which He ﷺ promotes Islam and the comprehension of the substantial spirit of the religion. The main points of the sermon were that the only one worthy of praise is Allah, the equality of people, that all are the lineage of Adam, accountability, the rights of women, inheritance, the issue of debt, that criminals have to face the punishment for what they committed, and the Book of Allah as guidance for all. These words are highlighted, and at the end everyone witnessed the message of the religion, its codes and conducts being conveyed thoroughly by the Rasool ﷺ. In the end He ﷺ repeated the sentence "O Lord: Bear Thou witness unto it" twice to ensure that He ﷺ had completed the task given by Almighty Allah.
Text 02: Eid-ul-Azha (Pg. 28, book 7, Punjab Curriculum & Textbook Board, Lahore) By placing an account of a religious festival of the Muslims, "Eid-ul-Azha", commemorated on the "10th of Zil-Hajjah", the component of sacrifice, a salient religious strand, is established. The description of the chapter states that the motif of the festival is to promote brotherhood. The words employed here, festival, celebration, brotherhood, rejoice, devotion, affirmative, bade, memory, sacrifice, religious, enthusiasm, millions, and butcher, are written in bold to show the significance of this festival, a religious practice to be accepted by all. This chapter highlights that the share we have should be divided equally into three parts, to realize the desolation and necessities of the needy ones and the neighbors, and to focus on poor relatives. The text "Tolerance of the Rasool ﷺ" shows religious identity by foregrounding proceedings from the life of the Rasool ﷺ, the last prophet. This chapter starts with "بسم الله الرحمن الرحيم", which means "In the name of God, the Most Gracious, the Most Merciful". The text then follows the accounts of tolerance from the life of the Rasool ﷺ. Words such as revenge, conquest, amnesty, destroy, recognize, troubled, custom, forbidden, and objected denote that whatever bad circumstances may occur, one should remain tolerant as our Rasool ﷺ was. The account of the Taif incident is given: despite all of the inequity, He ﷺ promoted peace and brotherhood, forbade revenge, and fostered tolerance.
Text 04: Prayer (Pg. 55, book 8, Punjab Curriculum & Textbook Board, Lahore)
The text "Prayer," lies under the genre of poetry, starts with an image of a girl fully covered from head to toe, sitting on the mat, her hands are lifted up, and making a prayer. Language use in the poem embodies the significance of prayer and why everyone should offer it. As the last three lines "Not to remind You, Of me, But myself, Of this and all of You" depict the hidden ideology that prayer is important for all to be practiced but done through an individual instance. The religious motif is not imposed directly but through a lesson-like quality.
Text 05: Hazrat Umar (R.A) (Pg. 60, book 8, Punjab Curriculum & Textbook Board, Lahore)
Through the biography of "Hazrat Umar (R.A)", one of the Caliphs of Islam and a powerful religious figure, religious identity is developed. By explaining the accounts of the life of Hazrat Umar (R.A), it is shown how he embraced Islam, his period as Caliph, his attitude towards his people, his attitude towards servants, and how he died while offering Namaz. The last account conveys that one should not be worried about death while being on the religion of Islam; he was killed while offering Namaz, which conveys a vigorous religious message.
Text 06: The Saviour of Mankind (Pg. 01, book 9, Punjab Curriculum & Textbook Board, Lahore)
In the text "The Saviour of Mankind", the language employed shows that how a non-civilized community is being transformed into a civilized one. There is an embodiment of Quranic verses such as from (surah 96 aayat 1-5) and (surah 33 aayat 45-46) that represent the divine message in the following chapter. The following text portrays the process of dive message revelation and Rasool ﷺ missions promote the message of Allah and the message of Tauheed. Islam has been seen as a religion of civilization and shows the right way of living and the hurdles faced by Rasool ﷺ in the spread of Allah's message. And the statement of Hazrat Ayesha (R.A) at the end of the text summarises the whole text that 'Hazrat Muhammad ﷺ was an embodiment of Quran'.
Text 07: Hazrat Asma (R.A) (Pg. 32, book 9, Punjab Curriculum & Textbook Board, Lahore)
The biography of "Hazrat Asma (R.A)" is present, though not a clear biographical genre but shown from a religious perspective. There is an embodiment of the Arabic language, in the third paragraph, some services of her are depicted. But, the remaining chapter is related to her closed ones and their services towards Rasool .ﷺ She (R.A) is being represented as the one that had a pearl of great wisdom and knows how to protect her loved ones and relatives. She is an embodiment of courage and generosity and no matter in what circumstances you are have firm faith in Almighty Allah.
Text 08: Sultan Ahmad Masjid (Pg. 73, book 9, Punjab Curriculum & Textbook Board, Lahore)
In the text, "Sultan Ahmad Masjid", there is the picture of the Mosque, for Muslims a respected place and is related to the house of Allah. The chapter highlights the importance of Sultan Ahmad Masjid, its construction, the addition of Ayat (verses) from the Holy Quran, and the importance of Namaz. It was constructed betwixt 1609 to 1616, knowing worldwide as the Blue Mosque, made in the reign of Ahmad I. This text pinnacles out that Islamic architecture is splendour and has a wide history and Masjid plays a vital role in Islam. The text "Hazrat Muhammad ﷺ an Embodiment of Justice" starts with ِ ِ ْم س ِ ِِب ِ يم حِ ِِٱلرَّ ِ ن مٰ حْ ِِٱلرَّ ِ ِٱّللٰ "In the Name of Allah, the Most Gracious the Most Merciful". It embodies religion and its imposition. There is the use of Arabic language as in ,ﷺ and the sayings of the Prophet .ﷺ It was more a moral lesson promoting religion. When a person wants guidance, success, goodness, morality, piety, spirituality, and in any field of life they can look up at one person that is Hazrat Muhammad .ﷺ This text points out His ﷺ ways of resolving the issue by narrating the incident of Black Stone, His ﷺ way of doing justice explained by a Qureshi woman incident and the trust He ﷺ has from the non-Muslims by accounting the experience with His enemies, Jews.
From the above analysis, we learn that there is a hidden ideology in the selection of the curriculum. Religious texts use pictures, language, and religious personalities to make people aware of them. Through this, religious identity is developed, as religion is woven intermittently into the textbooks. Readers become aware of it unconsciously, and when they enter the social world they employ those implicit identities.
5.5 National Identity
Popow (2014) points out that the nation is one of the constituents of a person's identity. Patriotism is endorsed among people at the local and global levels, through mass communication and social media platforms; it operates at the micro as well as the macro level. In all of this, the pivot is language: how it is used, how influential it is, and what impression it leaves upon people. Nations are constructed, our relations are constructed through them, we become social beings, and this ultimately leads to identity formation (pp. 03-05). National identities in the English textbooks of Grades 7, 8, 9, and 10 are constructed through respective genres such as factual recounts, explanations, biographies, and poems.
Text 10: A Nation's Strength (Pg. 89, book 7, Punjab Curriculum & Textbook Board, Lahore)
National identity is promoted through the poem "A Nation's Strength". The poem highlights what makes a nation strong and how to uphold it. Words such as people, honor, stand fast, suffer, brave, dare, and pillars are written in bold to convey the hidden ideology. The poem's aim is to make people conscious of what the nation demands of them and what role they have to perform. The word choices leave an enduring impression that only those who are brave and hard-working can do this; the nation does not approve of an idle man, but of the one who acts as one of its strong pillars. The next text contains the biography of our leader, the national hero Quaid-i-Azam: his determination, his conscientiousness, and his role in making Pakistan a separate homeland for Muslims. The first paragraph gives his early life, noting that he was born in 1876 and where he received his early and higher education. In 1905, he entered politics and worked for the unity of Hindus and Muslims, but by 1930 he was convinced that Hindus and Muslims could not be united and pursued Iqbal's vision of a separate land. Because of his efforts and determination, Pakistan came into being in 1947. Though he had to face many problems, his motto was "Work, Work, and Work"; he died in 1948.
Text 12: Hockey (Pg. 46, book 8, Punjab Curriculum & Textbook Board, Lahore)
"Hockey" is our national sport, and through it the national spirit is revived. First, the importance of sport in one's life is depicted, and words such as fundamental, contribute, and adherence are used to make an impact. The second paragraph gives a worldwide depiction of sports, hockey among them. The third paragraph highlights Pakistan's victories in the sport. The fourth and fifth paragraphs describe the number of players and how a match proceeds. Previously only men took part in the sport; now women also take part. The last paragraph presents Pakistan's achievements. The language used and the presence of factual data are meant to motivate people to take a greater part in making the nation proud.
Text 13: Patriotism (Pg. 12, book 9, Punjab Curriculum & Textbook Board, Lahore)
The text "Patriotism" builds national identity, as indicated by the title, which denotes devotion to one's country. The text deals with what patriotism is, what role this feeling plays in a person's life, the role of Quaid-i-Azam, the true spirit of patriotism, and, at the end, a quotation to develop patriotism among people. It begins by defining patriotism as devotion to the motherland and the willingness to make any sacrifice for the country. It recounts the role of Quaid-i-Azam in securing a separate homeland for the Muslims of the subcontinent and promotes the idea that 'patriotism makes a nation strong and united.' It further explores the role of the military in making a nation strong and the importance of a native land where one can breathe freely and easily.
Text 14: The Quaid's Vision and Pakistan (Pg. 62, book 9, Punjab Curriculum & Textbook Board, Lahore)
"The Quaid's Vision and Pakistan" is devoted to the biography of the founder and leader of Pakistan, Quaid-i-Azam. The chapter contains a picture of Quaid-i-Azam, which makes readers proud and eager to read more. It begins by depicting Quaid-i-Azam as a saviour who led the people out of a difficult time. His statement is quoted to motivate people to strive for a separate nation and not to fear death when it comes to one's nation. He focuses on the nation, its building, and how we stand as a distinct people, conveying the ideology behind a nation and the Quaid's role in making Pakistan. The text recounts the Quaid's countrywide tours, during which he endorsed the idea of oneness among the nation: 'we are different in terms of culture, civilization, language and literature…, a distinctive outlook of life.' It presents his ideology behind Pakistan and ends with a question to ponder: are we truly upholding the Quaid's vision, or has it been lost?
Curriculum selection in the textbooks promotes an ideology intended to foster a national spirit among people: to make them conscious of the nation and its needs, to be in the front line if it demands sacrifice, and to promote national identities among students.
5.6 Moral Identity
Hart (2005) indicates that the purpose of moral identity is self-assessment: to recognise our moral failures, the actions that underlie morality, and the psychological motives behind morality. Its purpose is self-evaluation, and through the faults and follies of others we can transform ourselves (pp. 165-175). In the Grade 7, 8, 9, and 10 textbooks, moral identity is formulated through stories (fables) and a personal account.
Text 15: Traffic Sense (Pg. 53, book 7, Punjab Curriculum & Textbook Board, Lahore)
"Traffic Sense" promotes a moral sense of duty to follow rules and regulations in order to avoid unfavourable circumstances. Words such as accidents, carelessness, a vehicle, aware, approaching, traffic sense, a moment, risk, minimize, the zebra crossing, signal, queue, level crossing, railway track, and violation are used to raise people's consciousness of the dangers. The language employed first warns people of the consequences of not following the traffic rules and then gives rules to avoid accidents. The chapter also asks whether, as a responsible citizen, the reader observes the rules and understands the road signs.
Text 16: The Rooster and the Fox (Pg. 126, book 7, Punjab Curriculum & Textbook Board, Lahore)
In "The Rooster and the Fox", the hidden ideology promoted through the fable is morality. The language is narrative, and morality is imparted through the adverse behaviour of the fox and the ethics of the rooster. Words such as cunning and tricks are used for the fox, while the rooster is innocent yet clever. Through the fox, it is shown that one who digs a pit for others may fall into it himself; one cannot obtain the desired outcomes by overstepping moral boundaries.
Text 17: Clever Mirchu (Pg. 38, book 8, Punjab Curriculum & Textbook Board, Lahore)
In "Clever Mirchu" it is depicted how a person with quick instincts can save other people's lives in a difficult situation. The story highlights togetherness and the family bond. Mirchu is the youngest son of a farmer and is called Mirchu because he was the size of a pepper. The text emphasises that accomplishments, the role one plays, and the ability to face problems do not depend on size but on one's ability to think effectively in unfavourable events.
Text 18: A Great Virtue (Pg. 65, book 8, Punjab Curriculum & Textbook Board, Lahore)
"A Great Virtue" serves the moral purpose that no matter how poor a person is, it is virtuous intentions that matter. Words used here include terrible, followed, hesitation, step in, manage, shelter, accommodate, generous, shivering, and guilty. One should try to help others even with the little one has, as the old man did, whose hut was so small that people could only stand in it to shelter from the storm. The story points to the theme that when a person does good to others, good also falls to their own destiny.
Text 19: All is Not Lost (Pg. 93, book 9, Punjab Curriculum & Textbook Board, Lahore)
"All is Not Lost" is a personal recount that relates the experience of a nurse. It shows how a little help can save a life and bring families hope. As a nurse, she does not lose hope for a patient, and a little hope in unfavourable circumstances is like a light in a dark room. The text highlights the problems a nurse faces in the profession, and that staying composed in tough situations is its utmost demand.
Text 20: Faithfulness (Pg. 149, book 10, Punjab Curriculum & Textbook Board, Lahore)
It contains a personal account, "Faithfulness", whose title signals its moral purpose. The chapter recounts an incident involving the Caliph Hazrat Umar (R.A) and a villager. The Caliph's task was to find the murderer without punishing an innocent person. After three days the true culprit is found and is to be punished, but in the end a faithful act is performed: the murderer is forgiven, giving the moral that to forgive someone is a divine act. If you have faith in Allah, then nothing is impossible.
Text 21: Try Again (Pg. 27, book 10, Punjab Curriculum & Textbook Board, Lahore)
The poem "Try Again" highlights the moral lesson of not giving up hope and of being content. Success is not achieved at the first attempt; rather, one has to face failures to be successful. One should not feel disgraced in striving for success but should try again and again.
Results
From the above analysis, we conclude that the government has the power to promote its ideology, and that this can be done through textbooks. The English curriculum approved by the Punjab textbooks is more than a book for learning English; these textbooks also signal a person's status and social class. Their focus is to promote English so that capable students can be recognized in society, even though the medium of instruction in public schools is Urdu. Some texts may violate genre structure, but the presence of certain characteristics classifies them within their respective genres for ideological purposes. The language and content used in the books tend to promote a specific set of beliefs and ideas. These textbooks implicitly promote ideologies that help build religious, national, and moral identities; knowing one's inclination towards a certain identity gives one authority and prominence in the social world. This is done only in public schools, while private schools have other purposes. Religious identity is cultivated through the presentation of Islamic events, Islamic figures, Islamic architecture, and their history. National identity is cultivated through the depiction of national heroes, encouraging students to play a role in nation building, and promoting patriotism and national spirit. Moral identity is promoted through stories that teach lessons from the lives of others.
Conclusion
The purpose of any institution, government, or organization in teaching English is to enable people to live in a social world. Knowing English is taken as a guarantee of a stable job and a door open to all opportunities, but that may not be the purpose of any government. Governments have the power and access to promote their ideologies, and this can be done through textbooks. The English curriculum approved by the Punjab textbooks is more than a book that provides an insight into English; these textbooks also signal a person's status and social grading. Their focal point is to promote English so that able students can be recognized socially, whilst the medium of teaching in government schools is Urdu. Students develop literacies in the Urdu and English languages, yet Urdu, the national language, is spoken by only 7.08% of the total population. This means that the government-approved textbooks do not enable students to use their knowledge of the language globally; rather, they bend the genres to promote the ideological practices of a dominant group. From the above analysis, some texts may violate genre structure, but the presence of certain features classifies them within their respective genres to achieve ideological purposes. The language as well as the content used in the books is inclined to promote a particular set of beliefs and ideas. These textbooks implicitly promote ideologies that help establish religious, national, and moral identities, and knowing one's inclination towards an identity gives one authority and prominence in the world. This is done only in government schools, while private schools have other purposes. Religious identity is fostered by presenting incidents from Islam, Islamic personalities, Islamic architecture, and its history. National ideologies are promoted through the depiction of national heroes, to encourage the feeling of patriotism and to promote the nation by urging students to play their part in making it strong. Moral identities are promoted through stories showing how one can take lessons in goodness from others' lives. Since most students attend school but are unable to attend university, an immense part of their ideologies is developed through school and is carried further into the social world. In doing so we are making an orthodox society that limits a person's reasoning and confines them to particular scenarios. By imposing religious ideology, the rights of minorities (Hindus, Christians, etc.) are suppressed, and they are expected to read Muslim ideology. There is not a single text in the textbooks that promotes the religions or cultures of the minorities, even though they play an active role in society. This shows the dominance of one group over the other; the minorities are discouraged and neglected at a wider level. This can cause restlessness in society and leaves people confused about who exactly they are.
Thermokarst Development Detected from High-Definition Topographic Data in Central Yakutia
Eastern Siberia is characterized by widespread permafrost thawing and subsequent thermokarst development. Estimation of the impacts of the predicted rise in precipitation and air temperatures under climate change requires quantitative knowledge about the spatial distribution of thermokarst development. In the last few years, unmanned aerial systems (UAS) and structure-from-motion multi-view stereo (SfM-MVS) photogrammetry attracted a tremendous amount of interest for acquiring high-definition topographic data. This study detected characteristics of thermokarst landforms using UAS and SfM-MVS photogrammetry at a disused airfield (3.0 ha) and for arable land that was previously used for farming (6.3 ha) in the Churapcha area, located on the right bank of the Lena River in central Yakutia. Orthorectified photographs and digital terrain models with spatial resolutions of 4.0 cm and 8.0 cm, respectively, were obtained for this study. At the disused airfield site and the abandoned arable land, 174 and 867 high-centered polygons that developed after the 1990s were detected, respectively. The data showed that the average diameter and average area of the polygons at the disused airfield site were 11.6 m and 111.2 m2, respectively, while those of the polygons in the abandoned arable land were 7.4 m and 46.8 m2, respectively. The abandoned arable land was characterized by smaller polygons and a higher polygon density. The differences in polygon size for the abandoned arable land and the disused airfield site indicate a difference in the ice wedge distributions and thermokarst developments. The subsidence rate was estimated as 2.1 cm/year for the disused airfield site and 3.9 cm/year for the abandoned arable land.
Introduction
The term "thermokarst" refers to a process that produces characteristic landforms as a result of the thawing of ice-rich permafrost or the melting of massive ice [1,2].Thermokarst research is important because it can be used to estimate permafrost degradation.Thermokarst formation is associated with landscape disturbances and climate change [3].At present, thermokarst is most active in open natural and anthropogenic landscapes.Under recent warming trends, the destruction of the transient layer, which strongly protects frozen permafrost from thawing [4], resulted in increased thermokarst landform development [5].Several field studies reported the activation of topographical subsidence along with thermokarst development in continuous permafrost zones.The vulnerability of permafrost to degradation induced by thermokarst subsidence depends on the degree of surface disturbance (e.g., from wild fires [6], clear-cutting [7], and anthropogenic land use [8,9]) and the subsequent deepening of the active layer thickness.
Thermokarst includes the thawing process of frozen ice-rich grounds and underground ice, accompanied by the formation of subsidence [10].Climatic warming resulted in deeper thawing in these regions.This could lead to massive degradation of the permafrost, particularly in regions with underlying ice-rich permafrost.Pleistocene ice-rich permafrost with syngenetic ice wedges, called yedoma deposits [11], are widespread in eastern Siberia, extending to the subarctic boreal region in the central Lena River basin.The dynamics of thermokarst landforms on an ice complex, or yedoma, were first studied by Efimov and Grave [12] using the Sakha terminology, which was then refined by Solov'ev [13].In this classification [13], the complete evolution of thermokarst landforms can be traced from primary subsidence to alas-a round subsidence depression with grass and lakes [14].
Eastern Siberia is characterized by widespread thawing of permafrost and the subsequent development of thermokarst.Estimation of the impact of the predicted increase in precipitation and air temperatures under climate change requires quantitative knowledge about the spatial distribution of thermokarst development [15,16].Recent warming in eastern Siberia resulted in the development of cryogenic processes in permafrost landscapes [17,18].The most vulnerable permafrost is located under open landscapes (mostly grassland) of the ice complex or yedoma, where the active layer thickness reaches the top of the ice wedges almost every summer, resulting in the melting and degradation of the permafrost.The thermokarst landforms are high-centered polygons (hereafter "polygons"), which now characterize almost all treeless areas of the ice complex in eastern Siberia.The most vulnerable areas are anthropogenic landscapes, such as arable land, areas experiencing deforestation and forest fires, as well as areas where the forest is affected by insect attacks.
In eastern Siberia, especially in central Yakutia, increases of 0.57 °C/decade and 0.70 °C/decade in the average annual air temperatures were observed in Churapcha and Yakutia [19], respectively. The increases in winter temperatures (0.69 °C/decade and 0.89 °C/decade, respectively) contributed significantly to this trend. The change in annual precipitation is insignificant, except for sharp increases during 2005-2008 [20]. The warming climate had a negative impact on the natural environment and economy of central Yakutia [9]. A reduction in usable lands, and damage to buildings and communities, caused significant harm to the local population. Degradation of the permafrost leads to the release of organic carbon, which can contribute to further climate change. Therefore, detailed studies of thermokarst at high spatio-temporal resolutions have great practical and theoretical significance.
Temporal variation in satellite and aerial photographs combined with detailed field measurements is the basic method used to detect thermokarst development, as conducted in central Yakutia [17,21], the coastal area of eastern Siberia [22], and multiple sites over the Pan-Arctic region [23].In contrast, interferometric synthetic aperture radar (InSAR) and light detection and ranging (LiDAR) techniques recently showed thermokarst development over the coastal tundra [24] and extensive areas affected by wild fires [6] in Alaska.A limitation of the detection of the spatial extent of thermokarst development is that smaller topographical features are not fully detected in traditional remote sensing images, such as satellite and aerial images, because of their relatively coarse spatial resolution.The primary thermokarst landforms are typically characterized by polygons with depths of less than a few meters and with widths of less than a few tens of meters.The detection and measurement of polygons and their topographical characteristics using conventional satellite images and aerial photographs are difficult in the visible bands due to these low spatial resolutions.To analyze surface subsidence accompanied by the very rapid formation of thermokarst over a few years, a spatial resolution of <1.0 m is needed.However, the acquisition of high-definition datasets with ground-based measurements remains challenging in remote environments.
In recent years, the use of unmanned aerial systems (UAS) and structure-from-motion multi-view stereo (SfM-MVS) photogrammetry became popular in bridging the spatial gap between ground-based measurements and conventional satellite analyses.The UAS and SfM-MVS photogrammetry technique was widely applied in various geomorphological studies (e.g., References [25,26]).The data processing of SfM-MVS photogrammetry is straightforward and allows for instantaneous acquisition of high-definition and accurate topographic datasets in remote environments from various platforms, including UAS.Unlike satellite images and traditional aerial photographs with conventional manual photogrammetry, the combination of UAS and SfM-MVS photogrammetry provides high-definition orthorectified images and digital terrain models (DTM) for detailed geomorphological terrain analysis with a spatial resolution of <1.0 m [25,26].The technique is also suitable for the detection of thermokarst landforms.The purpose of this study is to determine features of the initial stage of thermokarst development under recent warming by studying the morphological features of thermokarst using high-definition topographic data from UAS and SfM-MVS photogrammetry.
Study Area
The study area is located in the Churapcha area on the right bank of the Lena River in central Yakutia (Figure 1). The Churapcha area is characterized by widespread permafrost and a typical landscape feature of central Yakutia, called "charan", a unique meadow-forest-steppe landscape formed in the late Pleistocene with present-day park birch, larch forests, and intermittent dry grassland [14] (Figure 1). Karavaev [27] determined that such meadow-forest-steppe landscapes originated in the Upper Pleistocene. The soils in the meadow-steppe areas are solonetsous [28] or solonetzes [29], characterized by a deeper humus horizon of up to 0.2-0.3 m, whereas the larch taiga contains pale solodized soils with a shallower humus horizon. These soils lie on the ice complex, composed of loess loams with ice wedges.
Recently, a significant increase in thermokarst activity has been observed, especially in the dry grasslands that have been anthropogenically disturbed since the 20th century. To assess thermokarst development in the disturbed areas, the Melnikov Permafrost Institute has monitored typical thermokarst landforms at a presently disused airfield site and a plot of abandoned arable land in the dry grasslands (Figure 1) since the 1980s. Building on that research, this study analyzed the disused airfield site (3.0 ha) and the abandoned arable land (6.3 ha) (Figures 1 and 2).
The Churapcha area is also characterized by an almost flat surface at about 180-200 m above sea level (a.s.l.) with thermokarst depressions (alas, with a relative depth of about 7-8 m). The sediments of the ice complex contain syngenetic polygonal ice wedges up to a depth of 12-14 m, which lie at a depth of 2.2-2.3 m below the surface. Field measurements by the Melnikov Permafrost Institute showed that the width of the upper parts of the ice wedges varies from 1.5 m to 3.0 m. The transverse dimensions of the soil blocks between the ice wedges are usually about 8 m and 11 m at the abandoned arable land and the disused airfield site, respectively. The disused airfield site is located at the top of the interfluve of the Tatta and Kokhara rivers, while the abandoned arable land is located on a gentle slope at the confluence of their basins. The volumetric ice content (ice wedge) in the upper part of the permafrost is approximately 17% at the disused airfield site and 25% at the abandoned arable land, as estimated by the method of Gasanov [30] and field measurements by the Melnikov Permafrost Institute.
The regional climate is extra-continental. Observations from a meteorological station at the disused airfield site show that the mean annual air temperature is −11.5 °C, while monthly mean temperatures for January and July are −44.0 °C and 18.1 °C, respectively. The average annual duration of the frost-free period is 109 days. The amount of precipitation is 254.0 mm/year, with 140.0 mm occurring during the warm period [31]. However, during 2007-2016, the mean annual air temperature increased to −8.9 °C and the total precipitation reached 269.0 mm/year.
The permafrost thickness is estimated at about 540 m in Churapcha [32]. The mean annual ground temperature at the meteorological station at a depth of 3.2 m was −2.1 ± 0.7 °C for 1967-2014. At our monitoring sites in the forest areas, the soil temperature ranged from −2 to −3 °C at a depth of 3.2 m, and from −1.5 to −2 °C in the meadows, with active layer thicknesses of 1.3 m and 2.0 m in 2015 for the forests and meadows, respectively. Increasing air temperatures have been observed in central Yakutia since the early 1990s [33]. The increase in the active layer thickness in open areas has caused rapid thermokarst subsidence since the 1990s [34].
The disused airfield site was characterized by grassland vegetation and was used as an airport from the mid-1960s until the end of the 1980s (Figure 2c,e, and personal communication with local people).The Melnikov Permafrost Institute carried out field measurements on geocryological conditions, such as active layer thickness and distribution of ice wedges at the runway of the presently disused airport site in 1988.These results also show the presence of a flat and straight topography, and grassland vegetation at the end of the 1980s (Figure 2c).However, there has been a significant increase in thermokarst activity in this area since the 1990s.Melting of the tops of ice wedges caused subsidence of the ground surface, deepening of the troughs, and growth of the polygons (Figure 2).Considering the location of measurements by the Melnikov Permafrost Institute, an area of 3.0 ha at the disused airfield site (Figure 2) was selected for this study.Recently, local residents constructed houses in these areas.Areas with such topographic modifications were also excluded from the study site.
The second study area is a plot of abandoned arable land about 2.5 km east of the disused airfield site. The area had an almost flat, straight topography and was suitable for farming until the 1980s. Farming began in the Churapcha area in the 1930s and expanded rapidly in the 1960s (Figure 2e). After farming was abandoned, thermokarst activity has been enhanced since the early 1990s. The abandoned arable land is surrounded by artificial mounds, and the area within the mounds (6.3 ha) was selected for analysis. In September 2017, three thermokarst lakes existed between the polygons (Figure 2b).
Methods
Field survey and topographic measurements were conducted in September 2017.Aerial images of the study areas were obtained by using a UAS (DJI Phantom 4) with a digital camera (12.4 mega pixels) (Table 1), which enabled autonomous flight and image acquisition.Images were captured at an altitude of about 100-120 m above ground level with image overlap of more than 10 images.In total, 167 and 130 images of the disused airfield site and the abandoned arable land, respectively, were acquired.We also measured ground control points (GCPs) with a global navigation satellite system receiver (Emlid Reach RTK) using the kinematic method (Table 1).These data were processed using RTKLIB (ver.2.4.3).The standard deviations of the GCP analysis were less than 0.01 m in total across three dimensions for the disused airfield site and the abandoned arable land.
The aerial images were processed with five GCPs each for the disused airfield site and the abandoned arable land using SfM-MVS photogrammetry software (Agisoft PhotoScan, Professional Edition). Following standard SfM-MVS photogrammetry workflows (e.g., Reference [26]), we obtained orthorectified images and DTMs. The residual errors (root-mean-squared error) at the GCPs were 16.5 cm and 30.1 cm in total across three dimensions for the disused airfield site and the abandoned arable land, respectively. Anthropogenic features (such as houses and power poles), trees, and grasses in the study areas were filtered from the DTMs for the topographic analysis. Polygons were manually delineated from the orthorectified images, supported by relief and slope maps derived from the DTMs that emphasize the polygon edges. The polygons in this study were detected as the interiors of the polygons [35]. The spatial distribution of the polygons was examined together with the differences in polygon diameter (m), area (m²), and density (number/ha) between the study areas.
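The polygon metrics described above (diameter, area, and density) can be reproduced from the delineated outlines with standard GIS tooling. The short sketch below is not the authors' actual workflow; it assumes the outlines are stored in a projected, metre-based vector file named polygons.shp, that "diameter" is taken as the equivalent circular diameter, and that the site area (3.0 ha for the disused airfield) is known.

```python
# Minimal sketch of the polygon metrics (area, equivalent diameter, density).
# Assumptions: "polygons.shp" holds the manually delineated polygon interiors in a
# projected CRS with metre units; "diameter" means the equivalent circular diameter;
# the 3.0 ha site area refers to the disused airfield example.
import math
import geopandas as gpd

polygons = gpd.read_file("polygons.shp")

areas = polygons.geometry.area                  # polygon area, m^2
diameters = 2.0 * (areas / math.pi) ** 0.5      # equivalent circular diameter, m

site_area_ha = 3.0                              # assumed known site area, ha
density = len(polygons) / site_area_ha          # polygons per hectare

print(f"n = {len(polygons)}, density = {density:.1f}/ha")
print(f"diameter: mean {diameters.mean():.1f} m, median {diameters.median():.1f} m")
print(f"area: mean {areas.mean():.1f} m^2, median {areas.median():.1f} m^2")
```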
Thereafter, the volume and rate of thermokarst subsidence was estimated.Since no aerial photographs and topographic data were available for the 1980s and the 1990s, it was assumed that the original topography was straight before the thermokarst development.Measurements by the Melnikov Permafrost Institute show that the disused airfield and the arable land had straight topography during the late 1980s (Figure 2c).The early stage of thermokarst is characterized by depressed troughs and relative stability in the central areas of the polygons.In the present study, it was assumed that the top terrain of the polygons remained the same as the original topography, and the summit-level map was regarded as the original topographic map.The summit level reflects the main first-order characteristics of the topography [36,37].Summit-level maps are generally interpreted for general dynamic level of erosion or subsidence.In this study, the summit-level topography was calculated using the DTMs in 2017 through the window analysis in ArcMap (ver.10.3) using a window size of 16.0 m × 16.0 m, considering the polygon size.
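As a rough open-source analogue of the ArcMap window analysis described above, the summit-level surface can be approximated by a moving-window filter over the 2017 DTM. This is only a sketch under stated assumptions: the window statistic is taken to be a maximum (consistent with the idea of a summit level), the DTM is gap-free, and the file names are hypothetical.

```python
# Sketch of a summit-level surface: moving-window maximum of the 2017 DTM with a
# 16.0 m x 16.0 m window (200 x 200 cells at the 8 cm resolution of this study).
# Assumptions: "dtm_2017.tif" is a single-band, gap-free DTM in a metric CRS.
import rasterio
from scipy.ndimage import maximum_filter

with rasterio.open("dtm_2017.tif") as src:
    dtm = src.read(1).astype("float64")
    profile = src.profile
    cell = src.res[0]                              # cell size in metres (0.08 m here)

window_cells = max(1, int(round(16.0 / cell)))     # 16 m window expressed in cells
summit = maximum_filter(dtm, size=window_cells)    # local maximum = summit level

profile.update(dtype="float64")
with rasterio.open("summit_level.tif", "w", **profile) as dst:
    dst.write(summit, 1)
```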
The two topographic datasets with identical spatial resolutions were used to obtain the difference in height based on cell-by-cell subtraction.This analysis is particularly relevant to geomorphic studies because the difference between the DTMs provides spatially distributed surface models of the topographic and volumetric changes (e.g., Reference [38]).The total subsidence volume (m 3 ) and height (m) were estimated from the differences in the heights between the summit-level topographies and DTMs in 2017.Namely, the subsidence height represents the relative height of troughs.The subsidence rate (m/year) was also estimated from 27 years of data (1990-2017).Additionally, ground-based leveling was performed along the 100-m line transect at the disused airfield site in September 2017 at intervals of 2.0 m.The topography obtained from the UAS and SfM-MVS photogrammetry was examined based on results of the ground-based leveling.
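The cell-by-cell differencing and the derived subsidence volume and rate can be expressed as in the sketch below. It assumes the summit-level raster and the 2017 DTM share the same grid and 8 cm cell size, treats negative differences as zero subsidence, and divides by 27 years (1990-2017) under the paper's linear-subsidence assumption; the file names are hypothetical.

```python
# Sketch of subsidence height, volume, and mean annual rate from DTM differencing.
import numpy as np
import rasterio

with rasterio.open("summit_level.tif") as a, rasterio.open("dtm_2017.tif") as b:
    summit = a.read(1, masked=True).astype("float64")
    dtm = b.read(1, masked=True).astype("float64")
    cell = a.res[0]                                   # 0.08 m in this study

subsidence = np.maximum(summit - dtm, 0.0)             # subsidence height per cell (m)

cell_area = cell * cell                                # m^2 per cell
total_volume = float(subsidence.sum() * cell_area)     # total subsidence volume (m^3)
mean_height = float(subsidence.mean())                 # spatial average subsidence (m)
max_height = float(subsidence.max())                   # maximum subsidence (m)

years = 27.0                                           # 1990-2017, linear assumption
print(f"volume {total_volume:.0f} m^3; mean {mean_height * 100:.1f} cm; "
      f"mean rate {mean_height / years * 100:.2f} cm/yr; "
      f"max rate {max_height / years * 100:.1f} cm/yr")
```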
The Disused Airfield Site
Orthorectified images and DTMs with spatial resolutions of 4.0 cm and 8.0 cm, respectively, were obtained (Figure 3a,b).A total of 174 polygons, with a density of 57.6/ha, were detected.The mean and median polygon diameters were 11.6 m and 12.2 m, respectively.The area of the polygons ranged from 14.6 m 2 to 248.5 m 2 , with a mean area of 111.2 m 2 and a median of 116.0 m 2 (Figure 3c, Table 2).Comparison of the DTMs and the ground-based leveling shows that the data obtained from the UAS and SfM-MVS photogrammetry are a good representation the topography (Figure 3e), although the DTMs have an uncertainty of a few tens of centimeters in the SfM-MVS photogrammetry.Figure 3e also indicates some overestimation of the trough depth because of a relatively dense grassland vegetation, which was not fully filtered.
Figure 3d shows total subsidence estimated from the difference in height between the summit-level topography and the DTMs.The largest subsidence was found in the southern part of the study area.The maximum and spatial average of the total subsidence were 204.8 cm and 55.6 cm, respectively (Table 2).The total volume of subsidence was 1.7 × 10 4 m 3 .The maximum and spatial average of the subsidence rates were 7.6 cm/year and 2.1 cm/year, respectively, at the disused airfield site during the 27 years.
The Abandoned Arable Land
Orthorectified images and DTMs with spatial resolutions of 4.0 cm and 8.0 cm, respectively, were obtained (Figure 4a,b).A total of 867 polygons were detected with a density of 137.8/ha.The mean and median polygon diameters were 7.4 m and 7.2 m, respectively.The area of the polygons ranged from 5.9 m 2 to 174.1 m 2 with a mean of 46.8 m 2 and a median of 40.4 m 2 (Figure 4c, Table 2).
Figure 4d shows the total subsidence estimated from the difference in height between the summit-level topography and the DTMs.Some of the larger subsidence areas were found around the initiating thermokarst lakes and the western part of the study area.The maximum and the spatial average of the total subsidence were 321.9 cm and 106.0 cm, respectively (Table 2).The total volume of subsidence was 6.7 × 10 4 m 3 .The maximum and the spatial average of the subsidence rates were 11.9 cm/year and 3.9 cm/year, respectively, at the abandoned arable land site during the 27 years of the study period.
Size of Polygons and Subsidence
Polygon systems formed by thermokarst and their geomorphometry were previously examined in the Arctic region (e.g., [35]).These studies showed that the polygon size is controlled by the initiation age and various environmental factors such as geological material, ice content, and icewedge distribution.The disused airfield site and the abandoned arable land site are located only 2.5 km apart; thus, there is little difference in the climatic conditions for the two sites.Thermokarst development began in the early 1990s at both sites.However, the polygon size at the disused airfield site was approximately two times larger than that at the abandoned arable land site (Table 2, Figure 5).These smaller polygons at the abandoned arable land site were densely distributed.The average polygon diameter at the disused airfield site was 11.6 m, while that at the abandoned arable land site was 7.4 m.The spatial distribution of the ice wedges is the main factor that controls the polygon size in the early stages of thermokarst landform development.Field measurements by the Melnikov Permafrost Institute demonstrate that the transverse dimensions of the soil blocks between the ice wedges were about 11 m at the disused airfield site and 8 m at the abandoned arable land site.The spatial distributions of the ice wedges correspond well with the polygon diameters.
The spatial average of the subsidence rate at the abandoned arable land site was 3.9 cm/year, which was approximately twice the rate of 2.1 cm/year at the disused airfield site.This difference might be explained by the fact that the abandoned arable land is characterized by a high polygon density and high volumetric ice content.The volumetric ice content in the abandoned arable land was approximately 1.5 times higher than that in the disused airfield site.This depends on the structure of ice wedges, which corresponds well with the average size of the polygons.The small size of the polygons at the abandoned arable land is explained by the landscape conditions of the formation of two generations of ice wedges [39].The first generation of ice wedges in the presently abandoned arable land might be the same as those at the disused airport.However, the abandoned arable land was more likely to form the second generation of ice wedges because the area was located in wetter surface conditions with the gentle slope of the confluence of the Kokhara and Tatta rivers.Therefore, a significant difference in volumetric ice content due to ice wedges could result in an almost twofold difference in subsidence.These observations indicate that the ice-wedge distribution and ice content have strongly affected the thermokarst development since the 1990s in the Churapcha area.
To discuss the processes of subsidence, the long-term ground measurement data by the Melnikov Permafrost Institute [17] at Yukechi in central Yakutia were obtained, which has a similar anthropogenic disturbed landscape with ice wedges (i.e., arable land that was abandoned in the 1960s) [40] and permafrost conditions similar to the Churapcha area.These data showed a subsidence rate of 1.5-7.0cm/year during 1992-2016 (Figure 6).The undisturbed relatively stable areas, which were not directly impacted by the thermokarst depression, subsided at a mean rate of 0.2-0.6 cm/year.Regarding the trough between polygons, the subsidence rates reached a maximum of 15.0-16.0cm/year in the initial period of about eight years from 1992, which then decreased to 5.0-7.0 cm/year.This nonlinear subsidence was explained by thermophysical processes resulting in an increasing talik under thermokarst lakes, as well as a reduction in the volumetric ice content [17].
The present study showed that the maximum subsidence rates measured in the troughs reached 7.6 cm/year and 11.9 cm/year at the disused airfield site and the abandoned arable land, respectively (Table 2). These estimated subsidence rates correspond to the field measurements at the Yukechi site. This study calculated the spatial average and the maximum subsidence rates, assuming linear subsidence, during 1990-2017 (Table 2). However, the nonlinear subsidence processes also indicate that higher rates of subsidence occurred in the study sites.
Advantages and Limitations of This Study
By using the advantages of the UAS and SfM-MVS photogrammetry technique for obtaining high-definition datasets, this study is the first to detect the distribution of polygons and related subsidence in the Churapcha region in central Yakutia.In this study, a spatial resolution of <1.0 m was needed to measure the initial stage of thermokarst landscape.Ground surveys by total station and terrestrial laser scanning, and airborne LiDAR are generally employed to obtain high-definition topographic datasets.However, these techniques are associated with time consumption, as well as high capital and logistical cost, especially in remote areas.On the other hand, the UAS and SfM-MVS photogrammetry technique is a revolutionary, low-cost, user-friendly photogrammetry technique [25], and has logistical advantages (Table 1).The technique allowed for instantaneous acquisition of high-definition datasets in the study area.
The results of this study emphasize the importance of initial topographic characteristics of thermokarst, such as polygon size, distribution, and subsidence affected by structures of ice wedges and ice contents.This study highlighted the rapid spatio-temporal subsidence rates at the typical landscape, which affected landscape changes and increased carbon emission, in central Yakutia.This study also showed the availability of the UAS and SfM-MVS photogrammetry technique to measure thermokarst landforms.The dry grassland with sparse vegetation and complex topography of the polygons in the study area were ideally suited for applying the technique.Similar thermokarst landforms are widely distributed in Siberia and the Pan-Arctic region (e.g., [23]).The UAS and SfM-MVS photogrammetry technique is, therefore, suitable for measuring thermokarst landforms in the areas.
The accuracy of orthorectified images and DTMs from SfM-MVS photogrammetry were affected by conditions of image acquisition and GCP measurements.The residual errors (root-mean-squared error) in SfM-MVS photogrammetry were 16.5 cm and 30.1 cm for the disused airfield site and the abandoned arable land, respectively.Considering the inherent uncertainty in the orthorectified images and the DTMs, the study refrained from discussing the topographic characteristics of polygons and troughs lower than the spatial scale of 30 cm.However, the average diameters of the polygons at the disused airfield site and at the abandoned arable land were 11.6 m and 7.4 m, respectively (Figure 5).The maximum and spatial average of the total subsidence at the disused airfield were 204.8 cm and 55.6 cm, while those at the abandoned arable land were 321.9 cm and 106.0 cm, respectively.Even if the uncertainty was considered, the differences between the polygon size and the total subsidence in the two study sites were significant.
This study assumed that the heights of the polygon tops and the summit levels reflected the original topography. The trough depth might be underestimated because relatively dense grassland vegetation in the troughs was not fully filtered. These uncertainties result in an underestimation of the subsidence. Acceleration of thermokarst also caused subsidence of the polygons themselves [17]; however, addressing the effect of polygon subsidence was difficult with the single UAS dataset available for the period considered in this study. Additionally, water-filled troughs at the abandoned arable land could not be measured from the optical UAS images in this study. These factors imply that the actual subsidence rates were larger than our estimates.
This study demonstrated the difficulty of addressing the mechanism determining polygon size and distribution.The topography is affected by complex spatio-temporal interactions between factors such as variations in the air and ground temperatures and subsurface conditions.In addition, recent climate change and anthropogenic activity led to enhanced cryogenic degradation in central Yakutia [17,20].Field surveys of analog morphologies in terrestrial permafrost environments are necessary to complement our findings and provide ground truth for robust UAS monitoring of thermokarst developments.
Conclusions
The properties of thermokarst distributions and their topographic characteristics in central Yakutia were detected using the advantages of high-definition datasets provided by the UAS and SfM-MVS photogrammetry technique.The subsidence rates were examined at a disused airfield site (3.0 ha) and an abandoned arable land (6.3 ha).
Orthorectified photographs and digital terrain models with spatial resolutions of 4.0 cm and 8.0 cm, respectively, were obtained for both the disused airfield site and the abandoned arable land. We detected 174 and 867 high-centered polygons that developed from the 1990s at the disused airfield site and the abandoned arable land, respectively. The data showed that the polygons at the disused airfield site have an average diameter of 11.6 m and an average area of 111.2 m², while the average diameter and average area at the abandoned arable land were 7.4 m and 46.8 m², respectively. The abandoned arable land is characterized by smaller polygons and a higher polygon density, which indicates differences in the ice-wedge distributions at the two sites. The maximum and spatial average of the total subsidence at the disused airfield were 204.8 cm and 55.6 cm, while those at the abandoned arable land were 321.9 cm and 106.0 cm, respectively. The spatial averages of subsidence correspond to subsidence rates of 2.1 cm/year for the disused airfield site and 3.9 cm/year for the abandoned arable land. This study also demonstrated the difficulty of understanding thermokarst development in detail from a single UAS dataset. Recently, a warming climate has accelerated thermokarst development; future studies should monitor it using frequent UAS measurements.
Figure 1 .
Figure 1. Location and land use classification of the study area. A disused airfield site and an abandoned arable land in the Churapcha region were analyzed.
Figure 2 .
Figure 2. Photographs of the study areas. Aerial photographs of (a) the disused airfield site and (b) the abandoned arable land. The ground level photographs at the disused airfield site taken in (c) 1988 and (d) 2016; (e) shows the study area in August 1969 taken by the Corona satellite system provided by the United States Geological Survey.
Figure 3 .
Figure 3. Results for the disused airfield site. (a) Orthorectified images with a spatial resolution of 4.0 cm; (b) digital surface models (DTMs) with a spatial resolution of 8.0 cm; (c) the distribution of the polygons (blue polygons); and (d) the total subsidence, estimated from the differences between the summit-level topography and the DTMs. (e) Comparison of the DTMs and ground leveling along the 100-m transect line (A-B in (b)). The dashed black rectangle shows the study area. Black rectangles in (a) show the location of the ground control points (GCPs).
Figure 4 .
Figure 4. Results for the abandoned arable land. (a) Orthorectified images with a spatial resolution of 4.0 cm; (b) DTMs with a spatial resolution of 8.0 cm; (c) the distribution of the high-centered polygons (blue polygons); and (d) the total subsidence, estimated from the differences between the summit-level topography and the DTMs. The dashed black rectangle shows the study area. Black rectangles in (a) show the location of the GCPs.
Figure 5 .
Figure 5. Frequency density distribution of the polygon diameters at the disused airfield site (black line) and at the abandoned arable land site (dashed gray line).
Figure 6 .
Figure 6. Surface subsidence rates at Yukechi in central Yakutia during 1992-2016, measured by the Melnikov Permafrost Institute in landscapes with ice wedges [17]. Markers B, D, and ad68 were observed in areas with subsidence starting during 1990-2016, while markers A, C, and bc72 were observed in undisturbed relatively stable areas.
Table 1 .
General information related to the unmanned aerial system (UAS), camera, and the global navigation satellite system (GNSS).
Table 2 .
Statistics of the topographic characteristics and subsidence at the disused airfield site and the abandoned arable land. | 2018-12-02T14:02:19.464Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "6106843a4b58850f7da7b200b0c2e370890197f2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/10/10/1579/pdf?version=1538384706",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6106843a4b58850f7da7b200b0c2e370890197f2",
"s2fieldsofstudy": [
"Environmental Science",
"Geography",
"Geology"
],
"extfieldsofstudy": [
"Computer Science",
"Geology"
]
} |
4020216 | pes2o/s2orc | v3-fos-license | Adsorption of Congo Red by Ni/Al-CO3: Equilibrium, Thermodynamic and Kinetic Studies
Experimental investigations were carried out using Ni/Al-CO3 layered double hydroxide as adsorbent for removal of toxic anionic dye namely Congo red from aqueous solutions. The effect of contact time, initial dye concentration and temperature were experimentally studied in batch mode to evaluate the kinetic, equilibrium and thermodynamic parameters of the adsorption process. Experimental results revealed that the degradation of the dye is mostly dependent on temperature. The dye degradation process obeyed the zero-order kinetic model, first-order kinetic model, second-order kinetic model, pseudo second order kinetic and third order kinetic model with correlation coefficient values 1, 0.9998, 0.9999, 0.9999 and 0.9997 respectively. Langmuir, Freundlich, Temkin and Dubinin-Kaganer-Radushkevic isotherms were applied to the equilibrium data and was well described by all. Thermodynamic studies showed congo red adsorption on the layered double hydroxide was endothermic and spontaneous in nature. The results indicate that layered double hydroxide could be employed as alternative for removal of anionic dyes from industrial wastewater. Keyword: Congo Red, Layered double hydroxides, Kinetic, Dye, Adsorption, isotherms, thermodynamics.
INTRODUCTION
It has been estimated that 10-20% of dye is lost during the dyeing process and released as effluent 1 . Due to their chemical structures, dyes are resistant to fading on exposure to light, water and many chemicals and, therefore, are difficult to decolorize once released into the aquatic environment 2 . Dyes can also cause allergic dermatitis and skin irritation, and some of them have been reported to be carcinogenic and mutagenic for aquatic organisms 3 . Therefore, it is very important to develop new systems that can be used for removing dyes from waters. The treatment of dyes in industrial wastewater poses several problems since dyes are generally difficult to biodegrade and photodegrade. Many different techniques, including cloud point extraction, oxidation processes, nanofiltration, ozonation and coagulation, have been used for the removal of colored dyes from wastewater [4][5][6][7] . However, adsorption is the most popular physicochemical treatment for the removal of dissolved organics from waters.
Layered double hydroxides (LDH) or anionic clays are lamellar ionic compounds, containing a positively charged layer and exchangeable anions in the interlayer 8 . They consist of brucite-like layers in which part of the M(II) is substituted by M(III), leading to an excess of positive charge compensated by anions situated in the interlayer together with water molecules. These materials can be represented by the general formula [M(II)1-x M(III)x (OH)2]x+ (An-)x/n · mH2O, abbreviated as [M(II)-M(III)-A], where M(II) and M(III) are divalent and trivalent metal cations and An- is an interlayer anion such as CO3 2-, Cl- or NO3 -. The flexibility of the structure of these materials as well as their high anionic exchange capacity make them suitable for many applications 9,10 , such as the sorption of many inorganic [11][12][13][14] and organic [15][16][17][18][19][20] anions, which are potential contaminants of waters. Some authors have reported on LDH containing chelating agents such as ethylenediaminetetraacetate, EDTA 21 , and nitrilotriacetate, NTA 22 , as well as the uptake of metal cations by these materials 23,24,25 .
Synthesis of Ni/Al-CO3
The carbonate form of Ni/Al LDH was synthesized by the co-precipitation method. A 50 ml aqueous solution containing 0.3 M Ni(NO3)2·6H2O and 0.1 M Al(NO3)3·9H2O, with a Ni/Al ratio of 2:1, was added dropwise into a 50 ml mixed solution of NaOH (2 M) + Na2CO3 (1 M) with vigorous stirring while maintaining a pH of greater than 10 at room temperature. After complete addition, which lasted between 2 hours 30 minutes and 3 hours, the slurry formed was aged at 60 °C for 18 hours. The products were centrifuged at 5000 rpm for 5 minutes, washed with distilled water 3-4 times, and dried by freeze drying.
Characterization of Layered Double Hydroxide
The X-ray diffraction (XRD) pattern of the sample was recorded using a Shimadzu XRD-6000 diffractometer with Ni-filtered Cu-Kα radiation (λ = 1.54 Å) at 40 kV and 200 mA. Solid samples were mounted on an alumina sample holder and the basal spacing (d-spacing) was determined via the powder technique. Sample scans were carried out over the 10-60° 2θ range in 0.003° steps.
The FTIR spectrum was obtained using a Perkin Elmer 1725X spectrometer; samples were finely ground, mixed with KBr and pressed into a disc. Spectra were scanned at 2 cm-1 resolution between 400 and 4000 cm-1. FESEM/EDX was carried out using a Carl Zeiss SMT Supra 40 VP FESEM (Germany) with an Inca Penta FET x3 EDX (Oxford). It was operated at an extra high tension (HT) of 5.0 kV and a magnification of 20000X. FESEM uses electrons to produce images (morphology) of the samples and was coupled with EDX for qualitative elemental analysis.
Preparation of Congo Red Solution
Congo red (CI = 22120) was supplied by Merck (Mumbai, India). A stock solution of CR dye (100 mg/L) was prepared by dissolving the required amount of dye powder in deionized water. The stock solution was diluted with deionized water to obtain the desired concentrations, ranging from 20 to 40 mg/L.
The concentration of CR in the experimental solution was determined from a calibration curve prepared by measuring the absorbance of different known concentrations of CR solutions at λmax = 497 nm using a UV-vis spectrophotometer (Shimadzu, Kyoto, Japan). The pH was measured with a pH meter using a combined glass electrode (model HI 9025C, Hanna Instruments, Singapore).
Experimental Procedure
Batch adsorption experiments were carried out to study the effect of initial Congo red concentration, contact time and temperature on the adsorption of Congo red on the layered double hydroxide. Adsorption studies were carried out using 25 ml of each dye solution and 0.2 g of the adsorbent. At the end of each experiment, the content of each tube was filtered using a Whatman No 14 filter paper, after which the concentration of residual Congo red was determined by UV-Vis spectrophotometric analysis. All experiments were carefully conducted to acquire good results.
In order to determine the rate of adsorption, experiments were conducted with different initial concentrations of dye ranging from 20 to 40 mg/L, with all other factors kept constant.
The effect of contact time on the removal of the dye by the adsorbent in a single cycle was determined at time intervals of 10, 20 and 30 minutes.
The adsorption experiments were performed at three different temperatures, viz., 40, 60 and 80 °C, in a thermostat attached to a shaker (Remi make). The temperature was maintained constant with an accuracy of ±0.5 °C.
The equilibrium adsorption capacity and the yield of adsorption were calculated by equations (1) and (2), respectively:

q_eql = (C_init - C_eql) / m ...(1)

% removal = [(C_init - C_eql) / C_init] × 100 ...(2)

where C_init and C_eql are, respectively, the initial and equilibrium concentrations of the adsorbate in solution (mmol/l) and m is the layered double hydroxide dosage (g/l).
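As an illustration of equations (1) and (2), the sketch below computes the equilibrium capacity and removal yield for a hypothetical run; only the dosage of 0.2 g in 25 ml (8 g/L) follows the experimental procedure above, the concentrations are made up.

```python
def equilibrium_capacity(c_init: float, c_eql: float, dosage: float) -> float:
    """q_eql = (C_init - C_eql) / m, with m the adsorbent dosage in g/L (eq. 1)."""
    return (c_init - c_eql) / dosage

def removal_yield(c_init: float, c_eql: float) -> float:
    """Percentage removal of the dye (eq. 2)."""
    return (c_init - c_eql) / c_init * 100.0

# Hypothetical run: 20 mg/L Congo red, 8 g/L LDH dosage, 10.5 mg/L left at equilibrium.
q_e = equilibrium_capacity(20.0, 10.5, 8.0)
removal = removal_yield(20.0, 10.5)
print(f"q_e = {q_e:.3f} mg/g, removal = {removal:.1f} %")
```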
Isotherms analysis
To determine the equilibrium parameters of the adsorption process four isotherms namely, Freundlich, Langmuir, Temkin and Dubinin-Kaganer-Radushkevich (DKR) were applied to test the experimental data.
For the Freundlich isotherm, the linearized (ln-ln) version was used:

ln q_eql = ln K_F + (1/n) ln C_eql
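A minimal numpy sketch of this ln-ln Freundlich fit is shown below; the equilibrium concentrations and loadings are hypothetical values used only to illustrate how K_F and n are extracted from the intercept and slope.

```python
import numpy as np

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g).
c_eql = np.array([9.5, 15.0, 20.9])
q_eql = np.array([1.19, 1.25, 1.31])

# ln(qe) = ln(KF) + (1/n) ln(Ce): slope = 1/n, intercept = ln(KF).
slope, intercept = np.polyfit(np.log(c_eql), np.log(q_eql), 1)
K_F = np.exp(intercept)   # Freundlich adsorption capacity constant
n = 1.0 / slope           # adsorption intensity
print(f"K_F = {K_F:.3f}, n = {n:.2f}")
```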
The Langmuir model linearization (a plot of 1/q_eql vs. 1/C_eql) was expected to give a straight line with an intercept of 1/q_max:

1/q_eql = 1/q_max + 1/(K_L q_max C_eql)

The essential characteristics of the Langmuir isotherm were expressed in terms of a dimensionless separation factor or equilibrium parameter S_f.
S_f = 1 / (1 + K_L C_o)

With C_o as the initial concentration of Congo Red in solution, the magnitude of the parameter S_f provides a measure of the type of adsorption isotherm. If S_f > 1.0, the isotherm is unfavourable; S_f = 1.0 (linear); 0 < S_f < 1.0 (favourable); and S_f = 0 (irreversible).
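The Langmuir fit and the separation factor can be reproduced with the same kind of linear regression; the sketch below again uses hypothetical data and an assumed initial concentration C_o of 20 mg/L.

```python
import numpy as np

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g).
c_eql = np.array([9.5, 15.0, 20.9])
q_eql = np.array([1.19, 1.25, 1.31])

# Linear Langmuir form: 1/qe = 1/qmax + (1/(KL*qmax)) * (1/Ce).
slope, intercept = np.polyfit(1.0 / c_eql, 1.0 / q_eql, 1)
q_max = 1.0 / intercept            # monolayer capacity, mg/g
K_L = intercept / slope            # Langmuir constant, L/mg
S_f = 1.0 / (1.0 + K_L * 20.0)     # separation factor at C_o = 20 mg/L
print(f"q_max = {q_max:.2f} mg/g, K_L = {K_L:.3f} L/mg, S_f = {S_f:.2f}")
```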
The DKR isotherm is reported to be more general than the Langmuir and Freundlich isotherms. It helps to determine the apparent energy of adsorption and the characteristic porosity of the adsorbent toward the adsorbate, and it does not assume a homogeneous surface or a constant sorption potential 26 .
The Dubinin-Kaganer-Radushkevich (DKR) model has the linear form

ln q_eql = ln X_m - β ε² ... (6)

where X_m is the maximum sorption capacity, β is the activity coefficient related to the mean sorption energy, and ε is the Polanyi potential, which is equal to

ε = RT ln(1 + 1/C_eql) .... (7)

where R is the gas constant (kJ/kmol) and T is the absolute temperature. The slope of the plot of ln q_eql versus ε² gives β (mol²/J²) and the intercept yields the sorption capacity, X_m (mg/g). The values of β and X_m as a function of temperature are listed in Table 1 with their corresponding values of the correlation coefficient, R². It can be observed that the values of β increase as temperature increases while the values of X_m decrease with increasing temperature.
The values of the adsorption energy, E, were obtained from the relationship [27]

E = 1/√(2β) ... (8)

The Temkin isotherm model was also applied to the experimental data. Unlike the Langmuir and Freundlich isotherm models, this isotherm takes into account the interactions between the adsorbent and the species to be adsorbed, and is based on the assumption that the free energy of adsorption is simply a function of the surface coverage [28]. The linear form of the Temkin isotherm model is given in (9):

qe = B ln A + B ln Ce ... (9)

where B = RT/b_T (J/mol) corresponds to the heat of adsorption, R is the ideal gas constant, T (K) is the absolute temperature, b_T is the Temkin isotherm constant and A (L/g) is the equilibrium binding constant corresponding to the maximum binding energy.
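A short sketch of the DKR and Temkin fits follows; it assumes the standard forms in equations (6), (7) and (9), the DKR energy relation E = 1/√(2β), and hypothetical equilibrium data.

```python
import numpy as np

R, T = 8.314, 313.0                      # gas constant (J mol-1 K-1) and an example temperature (K)
c_eql = np.array([9.5, 15.0, 20.9])      # hypothetical Ce, mg/L
q_eql = np.array([1.19, 1.25, 1.31])     # hypothetical qe, mg/g

# DKR: ln(qe) = ln(Xm) - beta * eps^2, with eps = RT ln(1 + 1/Ce); E = 1/sqrt(2*beta).
eps = R * T * np.log(1.0 + 1.0 / c_eql)
slope, ln_Xm = np.polyfit(eps ** 2, np.log(q_eql), 1)
beta = -slope                            # activity coefficient, mol^2/J^2
E = 1.0 / np.sqrt(2.0 * beta)            # apparent adsorption energy, J/mol

# Temkin: qe = B ln(A) + B ln(Ce), with B = RT/bT.
B, B_lnA = np.polyfit(np.log(c_eql), q_eql, 1)
A = np.exp(B_lnA / B)
print(f"X_m = {np.exp(ln_Xm):.3f} mg/g, E = {E:.1f} J/mol, B = {B:.3f}, A = {A:.1f} L/g")
```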
The experimental kinetic data were fitted to the zero-order, first-order, second-order, pseudo-second-order and third-order kinetic models, where q_o (mg/g) and q_t (mg/g) are the adsorbed amounts of CR at equilibrium and at time t (min), and K_o, K_1, K_2 and K_3 are the adsorption rate constants for the respective kinetic models.
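The paper's own kinetic equations are not reproduced above; purely as an illustration, the sketch below fits the widely used linear pseudo-second-order form t/q_t = 1/(K q_e²) + t/q_e to hypothetical uptake values at the three contact times used in this study (10, 20 and 30 min).

```python
import numpy as np

t = np.array([10.0, 20.0, 30.0])     # contact time, min
q_t = np.array([0.95, 1.10, 1.17])   # hypothetical uptake at each time, mg/g

# Linear pseudo-second-order form: t/qt = 1/(K*qe^2) + t/qe.
slope, intercept = np.polyfit(t, t / q_t, 1)
q_e = 1.0 / slope                    # fitted equilibrium uptake, mg/g
K = slope ** 2 / intercept           # pseudo-second-order rate constant, g mg-1 min-1
r2 = np.corrcoef(t, t / q_t)[0, 1] ** 2
print(f"q_e = {q_e:.2f} mg/g, K = {K:.3f} g/(mg min), R^2 = {r2:.4f}")
```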
Thermodynamic parameters
Thermodynamic parameters such as the change in Gibbs free energy ΔG°, enthalpy ΔH° and entropy ΔS° were determined using the apparent equilibrium constant [9]

K_d = q_eql / C_eql

where q_eql (or [Congo Red]_uptake) is the amount of adsorbate adsorbed per unit sorbent mass (mmol/g) at equilibrium and C_eql (or [Congo Red]_eql) is the equilibrium concentration of the adsorbate in solution (mmol/l) when the amount adsorbed equals q_eql; the q_eql-C_eql relationship depends on the type of adsorption that occurs, i.e. multi-layer, chemical, physical adsorption, etc.
The thermodynamic equilibrium constants (K_d) of the Congo Red adsorption on the studied layered double hydroxide were calculated from the intercept of the plots of ln(q_eql/C_eql) vs. q_eql. Then, the standard free energy change ΔG°, enthalpy change ΔH° and entropy change ΔS° were calculated from the Van't Hoff equation [9]:

ΔG° = -RT ln K_d ... (16)

where K_d is the apparent equilibrium constant, T is the temperature in Kelvin and R is the gas constant (8.314 J mol-1 K-1). The slope and intercept of the Van't Hoff plot [8] of ln K_d vs. 1/T were used to determine the values of ΔH° and ΔS°. Then, the influence of the temperature on the system entropy was evaluated using the equations [11]. The plot of ΔG° vs. T also gives ΔH° and ΔS°.
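The Van't Hoff analysis described here can be sketched as below; the K_d values are hypothetical (chosen only so that the fit returns an endothermic, positive-entropy result of the same order as the values reported later), and the three temperatures are those used in the experiments (313, 333 and 353 K).

```python
import numpy as np

R = 8.314                                   # J mol-1 K-1
T = np.array([313.0, 333.0, 353.0])         # experimental temperatures, K
K_d = np.array([1.04, 1.13, 1.25])          # hypothetical apparent equilibrium constants

# Van't Hoff: ln(Kd) = -dH/(R*T) + dS/R  ->  slope = -dH/R, intercept = dS/R.
slope, intercept = np.polyfit(1.0 / T, np.log(K_d), 1)
dH = -slope * R                             # enthalpy change, J/mol
dS = intercept * R                          # entropy change, J/(mol K)
dG = -R * T * np.log(K_d)                   # Gibbs free energy change at each T (eq. 16), J/mol
print(f"dH = {dH/1000:.2f} kJ/mol, dS = {dS:.1f} J/(mol K), dG = {np.round(dG, 1)} J/mol")
```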
The thermodynamic parameters of the adsorption were also calculated using the Langmuir constant (K_L) and the Freundlich constant (K_F) in equations [12-14] instead of K_d. The thermodynamic parameters obtained in these ways were compared where possible.
The differential isosteric heat of adsorption (ΔH_x) at constant surface coverage was calculated using the Clausius-Clapeyron equation [12]; integration gives a linear relation between ln(C_eql) and 1/T [10], where K is the integration constant. The differential isosteric heat of adsorption was therefore calculated from the slope of the plot of ln(C_eql) vs. 1/T and was used as an indication of the heterogeneity of the adsorbent surface. For this purpose, the equilibrium concentration (C_eql) at a constant amount of adsorbate adsorbed was obtained from the adsorption isotherm data at different temperatures.
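A sketch of that isosteric-heat calculation follows. It assumes the convention in which the slope of ln(C_e) versus 1/T equals ΔH_x/R (the sign convention and the fixed-loading concentrations are assumptions for illustration only).

```python
import numpy as np

R = 8.314
T = np.array([313.0, 333.0, 353.0])           # K
c_eql = np.array([11.8, 9.1, 7.0])            # hypothetical Ce (mg/L) at one fixed amount adsorbed

# Slope of ln(Ce) vs 1/T, interpreted here as dHx / R (assumed sign convention).
slope, _ = np.polyfit(1.0 / T, np.log(c_eql), 1)
dH_x = slope * R                              # differential isosteric heat of adsorption, J/mol
print(f"dH_x = {dH_x/1000:.1f} kJ/mol")
```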
Characterization of LDH FT-IR
Figure 2 shows the pre- and post-adsorption spectra of Congo red on Ni/Al-LDH. The strong band around 3400 cm-1 is shown in 4(a); the broadening of this band was attributed to hydrogen-bond formation. A less intense absorption band around 1650-1500 cm-1 was assigned to the bending vibration of the interlayer water molecules. The carbonate ion peak is around 1400 cm-1, which is consistent with layered double hydroxides. Figure 4(b) shows a sharp peak around 733 cm-1, which is characteristic of the bending vibration of a primary amine, and phosphate bond stretching between 1100 cm-1 and 1200 cm-1. This shows that adsorption actually occurs via the formation of a complex between the Congo red and the layered double hydroxide [29].
XRD
Figure 3 shows the XRD patterns of the Ni/Al LDH. The basal reflections are observed at low 2θ values and weaker non-basal reflections at higher 2θ values.
Effect of Concentration
The removal efficiency of Congo red by the adsorbent is illustrated in figure 5. It shows that the removal efficiency decreased with increasing initial concentration (52.5%, 50% and 47.8%, respectively); this is probably due to rapid adsorption at all available sites and the relatively small amount of adsorbent that was used, an increase in the amount
Isotherm Analysis
To investigate the interaction between the adsorbate molecules and the adsorbent surface, four well-known models, the Langmuir, Freundlich, Dubinin-Kaganer-Radushkevich and Temkin isotherms, were selected to describe the LDH-dye interaction in this study.
The Langmuir plot in figure 6 fitted the experimental data with R² = 0.9924 and therefore confirms monolayer coverage.
The influence of the isotherm shape on whether adsorption is favourable or unfavourable has been considered. For a Langmuir-type adsorption process, the isotherm shape can be classified by a dimensionless constant separation factor (R_L), given by Eq. (4). The calculated value of R_L from figure 5 is 0.7, which is within the range of 0-1, thus confirming the favourable uptake of the layered double hydroxide adsorption process.
From the plot of ln q_e against ln C_e, the Freundlich constants K_F and n, which respectively indicate the adsorption capacity and the adsorption intensity, were calculated from the intercept and slope, as shown in figure 6 and table 1.
Fig. 17: Plot of 1/qt vs. t for the adsorption of Congo Red onto layered double hydroxide
The fraction of the layered double hydroxide surface covered by the Congo red is given as 0.47 (table 1). This value indicates that 47% of the pore spaces of the layered double hydroxide surface were covered by the Congo red, which corresponds to a high degree of adsorption.
The plot of ln q_e against ε² is shown in Figure 8, and the constants q_D and B_D were calculated from the intercept and slope, respectively. The DKR constants q_D and B_D and the apparent energy E were calculated to be 0.9589, 1.0635 and 0.6859 kJ/mol, respectively. If the value of E lies between 8 and 16 kJ/mol the sorption process is a chemisorption one, while a value below 8 kJ/mol indicates a physical adsorption process. The value of the apparent energy of adsorption obtained (0.6859 kJ/mol) indicated physisorption between the layered double hydroxide and the Congo red dye.
The Temkin adsorption isotherm model is usually chosen to evaluate the adsorption potential of an adsorbent for the adsorbate from experimental data. This model gives the mechanism and adsorption capacity of an adsorbate in a sorption process. By plotting q_e against ln C_e, the Temkin constants A and B were calculated from the slope and intercept. The constants A and B are 1.377 and 2.0498, respectively, while the correlation coefficient value was 0.9689, indicating that the adsorption process is physical.
Effect of Temperature
As shown in figure 10, adsorption was lowest at 313 K (51%) and increased slightly at 333 K (53%) and 353 K (55.5%). This means that the adsorption capacity increases with higher temperature.
The values of the enthalpy change (ΔH°) and entropy change (ΔS°) were calculated from equation 10 to be 4.152 kJ/mol and 13.5 J/(mol K), respectively, as shown in figure 11. A positive ΔH° suggests that sorption proceeded favourably at higher temperature and that the sorption mechanism was endothermic. A positive value of ΔS° (13.5 J/(mol K)) reflects the affinity of the adsorbent towards the adsorbate species. In addition, a positive value of ΔS° suggests increased randomness at the solid/solution interface with some structural changes in the adsorbate and the adsorbent. The adsorbed solvent molecules, which are displaced by the adsorbate species, gain more translational entropy than is lost by the adsorbate ions/molecules, thus allowing for the prevalence of randomness in the system 23,24,25 . The positive ΔS° value also corresponds to an increase in the degree of freedom of the adsorbed species.
The isosteric heat of adsorption ΔH_x is one of the basic requirements for the characterization and optimization of an adsorption process and is a critical design variable in estimating the performance of an adsorptive separation process. It also gives some indication of the surface energetic heterogeneity. Knowledge of the heats of sorption is very important for equipment and process design. A plot of ln C_e against 1/T in figure 12 gives a slope from which ΔH_x is obtained. The value of ΔH_x derived from equation 11 was 39.9 kJ/mol, which indicates that the adsorption mechanism was physical adsorption on a heterogeneous surface 24,25 .
The activation energy E_a and the sticking probability S* were calculated from equation 12; the values shown in table 1 for E_a and S* are -9.34 kJ/mol and 0.49, respectively, as shown in the plot in figure 13. The value of the activation energy (less than 4.2 kJ/mol) shows that the sorption process was a physical one. The sticking probability S* is a measure of the potential of an adsorbate to remain on the adsorbent. It is often interpreted as follows: S* > 1 (no sorption), S* = 1 (mixture of physisorption and chemisorption), S* = 0 (indefinite sticking; chemisorption), 0 < S* < 1 (favourable sticking; physisorption) [25].
Effect of Time
The adsorption kinetic study is important in predicting the mechanisms (chemical reaction or mass-transport processes) that control the rate of pollutant removal and the retention time of adsorbed species at the solid-liquid interface. That information is important in the design of appropriate sorption treatment plants.
The effect of the contact time of the phases on the removal of Congo red by the layered double hydroxide from solutions of initial concentration equal to 400 mg CR/L at three different times (10, 20 and 30 minutes) is presented in Figure 14.
The experimental data were fitted to different kinetic models (Figures 15-19), including the zero-order kinetic model, first-order kinetic model, second-order kinetic model, pseudo-second-order kinetic model and third-order kinetic model, to ascertain the suitability of the models. The correlation coefficient values of 1, 0.9998, 0.999, 0.9999 and 0.9997, respectively, confirm the applicability of the chosen kinetic models.
CONCLUSION
The present investigation shows that Ni/Al-CO3 synthesized by the coprecipitation method can be employed as a potentially viable sorbent for the removal of Congo red dye from industrial wastewaters. The Congo red adsorption was found to be greatly dependent on temperature. The experimental data were well described by the Langmuir, Freundlich, Temkin and Dubinin-Kaganer-Radushkevich isotherms. The experimental data also fitted all the kinetic models applied in this paper. The values of ΔH° and ΔS° indicated that the adsorption process was endothermic and favoured by an increase in temperature, thereby increasing the randomness of the solid/liquid phase of the reaction system.
Fig. 18 :Fig. 19 :
Fig. 18: Plot of 1/qt vs. t for the adsorption of Congo Red onto layered double hydroxide | 2018-03-22T20:12:49.937Z | 2015-09-10T00:00:00.000 | {
"year": 2015,
"sha1": "a9d6b092a26753fff41df585960f05964da10ef8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.13005/ojc/310307",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a9d6b092a26753fff41df585960f05964da10ef8",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
265626979 | pes2o/s2orc | v3-fos-license | Code Switching and Code Mixing in Film Imperfect
Abstract: In ordinary life, we frequently speak in a language that is not understood by others. When communicating, we frequently employ language switching or merging, also known as code switching and code mixing, which seeks to make the interlocutor more understandable. Code switching and code mixing are used in today's works of art, particularly films, to communicate amongst actors in their own films. This is done not just as a trick, but also because there is a lot of terminology from foreign languages that has been assimilated into Indonesian and is used more frequently. So, the purpose of this research is to look at and learn about the many sorts of code switching and code mixing in the IMPERFECT film, as well as the rationale for using code switching and code mixing in the IMPERFECT film.
Introduction
Language is a tool for both oral and written communication. When communicating in society, we have a tendency to use the appropriate language so that the message can be properly conveyed. Each region in Indonesia has its own regional language. As a result, we can be confident that almost everyone in Indonesia is fluent in at least two languages, that is, bilingual or multilingual. This excludes the process of acquiring and teaching foreign languages in public schools or other professional institutions. This is also heavily reinforced by the rapid expansion of the media, which forces people to accept the introduction of other languages, particularly English. Indonesia can be said to be mostly bilingual or even multilingual because there are more than 150 racial groups and 483 languages. For example, in West Java, some people use Sundanese as the language that was first taught in their family environment, while Indonesian is certainly taught in general. When a community group from different regions gathers at one time, for example when West Javanese people are speaking Sundanese and people from outside West Java, such as Jakarta, appear, the community will change the language used to Indonesian or insert Indonesian so that it can be understood by speakers from Jakarta. This is included in speech events that usually occur in bilingual/multilingual communities.
Keywords:
Code-switching, Code-mixing, Imperfect film
There are four speech events that exist in a bilingual/multilingual society, namely interference, integration, code-switching, and code-mixing. The four speech events have similar symptoms, namely the presence of elements from other languages used in writing or expression, but with different underlying concepts. Code-switching and code-mixing are speech events that usually occur in bilingual/multilingual societies. This situation arises because the speech community has mastered more than one language and can therefore choose between languages in communication activities. This often happens, especially for students majoring in languages, who switch and mix codes with fellow students. According to Hurlock (1993), bilingualism is the capacity to communicate in two or more languages. This is becoming increasingly frequent, particularly in the modern period, since foreign languages are used in practically all modern gadgets. The purpose of speaking in these two languages is to develop effective communication skills based on social variables. In Indonesia, code-switching and code-mixing are an increasingly common phenomenon of communication. This is due to the fact that code switching and code mixing may be found in everyday life as well as in developing media programs such as television shows, music, and movies.
Code switching in the movie IMPERFECT is an event of changing the language used in a conversation. According to Appel, as cited in Chaer (2017: 107), code switching is the phenomenon of switching the language used because the situation changes. Meanwhile, Hymes, as cited in Chaer (2012: 107), writes that code switching does not only occur between languages but can also occur between the various styles contained in one language. Soewito, as cited in Chaer and Leonie (2010: 114), distinguishes two types of code-switching, namely: (1) internal code-switching, where the switching occurs between one's own languages, such as from Indonesian to Sundanese; and (2) external code-switching, where the switching occurs between one's own language and other/foreign languages.
1) Purpose of The Study
The purpose of the study is to give information on the types and forms of code-switching and code-mixing used in the film Imperfect.
2) Problem Statement
a. The types of code-switching in film IMPERFECT
b. The types of code mixing in film IMPERFECT
c. The reason the film IMPERFECT uses code switching and code mixing in the dialogue
3) Scope of the Study
a. This study focuses on dialogue in the film IMPERFECT
b. Classify the types of code-switching and code-mixing in film IMPERFECT
Sociolinguistic
Sociolinguistics is the study of the interaction between language and society.According to Hudson, sociolinguistics is the study of language in connection to society, whereas sociology of language is the study of society in relation to language.Sociolinguistics studies the interaction between language and the context in which it is used.Language is used by everyone to deliver and receive information.
It is also a tool for expressing emotions, such as happiness, sadness, anger, or aggravation. Language and social context are related in the sense that a multilingual community influences language choice and variation in language use, while language in turn influences the social environment in which it is used and the purposes it serves.
Sociolinguistics is the study of the connection between language and society and the way people use language in different social situations.It asks the question, "How does language affect the social nature of human beings, and how does social interaction shape language?"It ranges greatly in depth and detail, from the study of dialects across a given region to the analysis of the way men and women speak to each other in certain situations.
The basic premise of sociolinguistics is that language is variable and ever-changing.As a result, language is not uniform or constant.Rather, it is varied and inconsistent for both the individual user and within and among groups of speakers who use the same language.
Bilingualism
The capacity to speak or converse in two or more languages is referred to as bilingualism.According to Trudgil (2003:24), bilingualism is the capacity of a person to speak two or more languages.Nowadays, many people can speak two or more languages, particularly those who live in a bilingual culture.
Capacity to communicate using inscribed, printed, or electronic signs or symbols for representing language.Literacy is customarily contrasted with orality (oral tradition), which encompasses a broad set of strategies for communicating through oral and aural media.In real world situations, however, literate and oral modes of communication coexist and interact, not only within the same culture but also within the very same individual.(For additional information on the history, forms, and uses of writing and literacy, see writing.) In order for literacy to function, cultures must agree on institutionalized sign-sound or sign-idea relationships that support the writing and reading of knowledge, art, and ideas.Numeracy (the ability to express quantities through numeric symbols) appeared about 8000 BCE, and literacy followed about 3200 BCE.Both technologies, however, are extremely recent developments when viewed in the context of human history.Today the extent of official literacy varies enormously, even within a single region, depending not only on the area's level of development but also on factors such as social status, gender, vocation, and the various criteria by which a given society understands and measures literacy.
Evidence from around the world has established that literacy is not defined by any single skill or practice.Rather, it takes myriad forms, depending largely on the nature of the written symbols (e.g., pictographs to depict concepts, or letters to denote specific sounds of a syllable) and the physical material that is used to display the writing (e.g., stone, paper, or a computer screen).Also important, however, is the particular cultural function that the written text performs for readers.Ancient and medieval literacy, for instance, was restricted to very few and was at first employed primarily for record keeping.It did not immediately displace oral tradition as the chief mode of communication.By contrast, the production of written texts in contemporary society is widespread and indeed depends on broad general literacy, widely distributed printed materials, and mass readership.
Code
Variations of language are referred to as codes. Codes can be organized into categories such as idiolect, dialect, sociolect, register, and language. The notion of code is useful even in monolingual contexts, where its use is determined by the variety of the language. In English, for example, there is a great deal of variation: the Indonesian word RUMAH ('house') may be rendered as home, house, apartment, and so on. In a multilingual context, the use of different codes is determined by linguistic variability and by the specifications for its use agreed upon by individuals.
1) Code-switching
Code-switching is a term in linguistics referring to using more than one language or variety in conversation. This is one of the numerous techniques for conversing bilingually in two or more languages. According to Hymes (1974), code-switching refers to the alternate use of two or more languages, dialects, or even speech styles. Code-switching, according to Bokamba (1989), is the mixing of words, phrases, and sentences across sentence boundaries within the same speech event. There are several types of code-switching:
- Intra-sentential switching
- Inter-sentential switching
- Emblematic switching
- Establishing continuity with the previous speaker
2) Code mixing
The mixing of one language into another language by the speaker in communication is known as code mixing. According to Gumperz (1977: 82), code-mixing is the use of one language by a speaker while effectively speaking another. A linguistic piece is a term or phrase from one language that has been
incorporated into another. According to Hudson (1996: 53), code-mixing occurs when "a proficient bilingual communicates with another fluent bilingual, modifying the language without any change at all in a context." Hoffman (1991: 112) distinguishes three forms of code mixing: intra-sentential code-mixing, intra-lexical code-mixing, and code-mixing involving a change in pronunciation.
Reasoning of code-switching and code-mixing
When a bilingual switches or mixes two languages, there are several factors on the speaker's side that must be considered. According to Hoffman (1991: 116), bilingual or multilingual people switch or mix their languages for seven reasons: talking about a particular topic, quoting somebody else, being emphatic about something, interjection, repetition used for clarification, the intention of clarifying the speech content for the interlocutor, and expressing group identity. Some linguists use the terms code-mixing and code-switching more or less interchangeably. Especially in formal studies of syntax, morphology, etc., both terms are used to refer to utterances that draw from elements of two or more grammatical systems [1]. These studies are often interested in the alignment of elements from distinct systems, or in constraints that limit switching. Some work defines code-mixing as the placing or mixing of various linguistic units (affixes, words, phrases, clauses) from two different grammatical systems within the same sentence and speech context, while code-switching is the placing or mixing of units (words, phrases, sentences) from two codes within the same speech context. The structural difference between code-switching and code-mixing is the position of the altered elements: for code-switching, the modification of the codes occurs inter-sententially, while for code-mixing, it occurs intra-sententially [3]. In other work the term code-switching emphasizes a multilingual speaker's movement from one grammatical system to another, while the term code-mixing suggests a hybrid form, drawing from distinct grammars. In other words, code-mixing emphasizes the formal aspects of language structures or linguistic competence, while code-switching emphasizes linguistic performance.
While many linguists have worked to describe the difference between code-switching and borrowing of words or phrases, the term code-mixing may be used to encompass both types of language behaviour.
Research design
The method used is a descriptive qualitative method, recording all the data that appear in the film IMPERFECT.
Data source
This study's data are derived from the film Imperfect, which highlights the story of a girl's fight with body shaming. The film has a running time of 112 minutes and contains a lot of conversation in both English and Indonesian, indicating bilingualism. It was released in 2019, yet it is still worth seeing again because of the various moral implications it conveys.
Data collection
To collect the data, there are several steps:
- Open the Netflix application to watch the film
- Transcribe the conversations from the film
- Validate the transcription with a credible triangulator
Data analysis
In this stage, all of the data are processed as follows:
- Select the data: the researchers use this step to choose statements or utterances that have code-switching and code-mixing properties.
-Sort data into categories based on the forms of code-switching and code-mixing.
-Discuss the reasons of code-switching and code-mixing
Findings and Discussion
The authors offer their results and discussions based on the data obtained at this stage.The data were examined following the procedures outlined in the previous stage of data analysis, as shown below.
Types of code-switching
Intra-sentential code switching: an utterance is referred to as intra-sentential when the switch occurs within a phrase or sentence boundary, so that a single phrase or sentence appears in more than one language. Example: "Okay, so what's up guys? Banyak banget diantara kalian yang suka nanya di DM gue" ("So many of you like asking in my DMs").
Inter-sentential code-switching: when a bilingual or multilingual individual shifts from one language to another between different sentences, this is known as inter-sentential switching. Example: "Ternyata kamu ngangkat juga, babe. Don't touch me here!!!" ("So you picked up after all, babe. Don't touch me here!!!").
1).
Emblematic switching arises when someone inserts tags from one language into utterances that are otherwise in another language. Example: "Thank you tant, ini semua berkat kerja keras" ("Thank you auntie, this is all thanks to hard work").
2).
Establishing continuity: according to Hoffman, this type occurs when a speaker continues in the language of the preceding speaker's discourse; for example, when one Indonesian speaker speaks English and another speaker attempts to respond in English as well.
Types of Code Mixing
1).
Intra sentential
This type of code-mixing happens within phrase, clause, or sentence boundaries. Example: "dulu kan insecure banged…" ("I used to be really insecure…").
Involving a change in pronunciation
This kind happens at the phonological level, for example when someone utters a word from another language but modifies it toward their own phonological pattern. Example: "Isi kepalanya aku, casingnya Marsha" ("The inside of the head is me, the casing is Marsha's").
Reasons for code-switching and code-mixing
The authors discovered the justification for adopting code-switching or code-mixing in poor films based on the findings of this investigation.
Talking Particular Topic
The first reason is discussing a specific subject in one language rather than another. Usually, a speaker feels freer and more comfortable expressing their feelings, joy, or even anger in that particular language.
Being emphatic about something
This reason applies when someone speaking in a language other than their native language wants to emphasize something.
3) Repetition used for clarification
When the speaker wishes to clarify his speech, he will use repetition. He can occasionally utilize both languages he is proficient in to convey the same message so that listeners understand it better (the speech is repeated).
Intention of clarifying the speech content interlocutor
This occurs when one bilingual speaks to another bilingual and code-switching takes place so that the conversation's substance flows smoothly and remains understandable to the listener.
Interjection
This reasoning is used when the conversation turns heated or the speaker is taken aback.
Expressing group identity
This type is intended to represent the identity and profile of the speaker.
Conclusion
After doing this study, the writer has reached certain conclusions. The writer discovered that code-switching and code-mixing are consistently used in the film Imperfect. The most common sort of code-switching in the film is that used where the actor playing George (Boy William) talks to his girlfriend or his girlfriend's mother. In addition, the type of code-mixing most frequently used in the film is intra-sentential; almost all of the characters in the film use it. Expressing group identity is the reason the actors and actresses in the film Imperfect switch and mix codes, because it highlights their personal characteristics such as biodata, job, career, and so on.
"year": 2023,
"sha1": "6f272e0f7ce4c526801fed5bf194aa8affe9adc6",
"oa_license": "CCBY",
"oa_url": "https://journals.eduped.org/index.php/ijcse/article/download/622/494",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "28a38c7f9e41abd255a734092a893c821a998f7a",
"s2fieldsofstudy": [
"Art",
"Linguistics"
],
"extfieldsofstudy": []
} |
251599955 | pes2o/s2orc | v3-fos-license | Guidezilla™ guide extension catheter I for transradial coronary intervention
Background Percutaneous coronary intervention (PCI) is the preferred treatment method for coronary artery diseases (CAD). This study aimed to evaluate the effectiveness and complications of the Guidezilla™ guide extension catheter I (GGEC I) in transradial coronary intervention (TRI). Methods This case series study included patients with CAD who underwent TRI using the GGEC I between August 2016 and January 2019 at the First Affiliated Hospital of Xi’an Jiaotong University. Results A total of 221 patients aged 65.1 ± 9.26 years were included. Coronary angiography results indicated that most patients (77.8%) had triple-vessel lesions, including 47.5% with chronic total occlusion (CTO). A total of 237 target lesions were treated, most being type C lesions (95.8%). The most common indication for GGEC I use was heavy calcification (67%), followed by extreme tortuosity (12.2%), extreme tortuosity and heavy calcification (10.9%), distally located lesion (4.5%), picking up the retrograde wire (3.2%), anomalous vessel origin (1.8%), and releasing the burr incarceration (0.4%). The mean operation time was 58 min, and the overall success rate was 94.1%. Four patients received a drug-coated balloon. No significant differences were found in operation time and success rate among the low (<23), intermediate (23–32), and severe (>32) CAD groups based on SYNTAX score stratification (P > 0.05). Two subacute thrombosis cases each were reported perioperatively, during hospitalization, and at the 1-month follow-up. Conclusion The GGEC I might have advantages for TRI and is unaffected by SYNTAX score stratification.
Introduction
Percutaneous coronary intervention (PCI) is the preferred treatment method for coronary artery diseases (CAD). Most complex coronary artery lesions can be treated owing to the continuous refinements of the hardware armamentarium and improvements of interventional technologies. Notably, some complex target lesions are difficult to reach using the current tools because of the inadequate backup support of the guiding catheter (1), especially in highly calcified and tortuous vessels (2). This is one of the main reasons for the failure of PCI in complex CAD.
Recently, transradial coronary intervention (TRI) has gained popularity in the treatment of CAD because of advantages such as low incidence rates of access-site bleeding and vascular complications, early ambulation, improved patient comfort, and short hospital stay (3,4). Furthermore, several clinical trials (4,5) and meta-analyses (6) have associated TRI with higher procedural success rates, lower mortality rates, and comparable major adverse cardiovascular and cerebrovascular event (MACCE) rates compared with transfemoral intervention (TFI). Therefore, substantial research efforts have been directed toward strengthening the backup support of the original guiding catheter and guidewire for the successful management of complex coronary lesions. An extra-support guidewire, buddy wire, anchoring balloon, and deep intubation of the guiding catheter are standard techniques for strengthening backup support (2, 7,8). Nevertheless, these solutions carry risks such as wire entanglement, anchoring vascular injury, coronary artery dissection, iatrogenic aortocoronary dissection (IACD), and even coronary artery perforation (9).
Few studies have examined the Guidezilla TM guide extension catheter (GGEC) as an alternative for treating complex coronary artery lesions with TRI. The GGEC is a unique, rapid-exchange, mother-in-child catheter that increases the backup support for the guiding catheter by facilitating deep coronary intubation and coaxial alignment, and also enables a smooth delivery of the interventional device to the target lesion for successful completion of PCI (10,11) and TRI (12)(13)(14).
The European and American guidelines state that the anatomical SYNTAX score is an essential tool that can help clinicians choose the most appropriate revascularization strategy for patients with complex CAD: PCI or coronary artery bypass graft (CABG) surgery (15,16). Based on the anatomical severity of CAD, patients can be categorized into the low (<23), intermediate (23-32), or severe (>32) category according to the SYNTAX score. It was found that low- and intermediate-category patients have similar long-term outcomes regardless of the revascularization strategy implemented (15,16). In contrast, CABG yields better outcomes for severe cases than PCI (17). Nevertheless, the SYNTAX scoring system lacks an individualized approach and clinical variables to guide the choice of the revascularization strategy accurately.
On the other hand, the SYNTAX II score contains eight predictors: the anatomical SYNTAX score, age, creatinine clearance, left ventricular ejection fraction (LVEF), unprotected left main coronary artery (ULMCA) disease, peripheral vascular disease, female sex, and chronic obstructive pulmonary disease (COPD) (18,19). It can significantly predict the difference in 4-year mortality between patients who underwent CABG and those who underwent PCI. As such, this version is better than the original SYNTAX score at assisting the choice between CABG and PCI for patients with complex CAD. Although several observational studies on PCI for CAD provided evidence for the benefits and low complication rates of the GGEC (10-14), no research has investigated the impact of the SYNTAX score on treatment outcomes. Therefore, this study aimed to evaluate the effectiveness and complications of the GGEC I in TRI for patients stratified according to the SYNTAX score.
Study design and participants
This case series study included patients with CAD who underwent TRI using GGEC I between August 2016 and January 2019 at the First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China. This study was approved by the Ethics Committee for Human Study of the First Affiliated Hospital of Xi'an Jiaotong University. The study was conducted in compliance with the ethical principles of the Declaration of Helsinki. The requirement for informed consent was waived by the committee.
Data collection and definition
Data were collected from medical records, including age, sex, clinical presentation, coronary angiography indication, target vessel and character of the lesion, type of guiding catheter, guidewire, balloon, stent, operative time, surgical outcome, dissection, stent dislodgement, shaft breakage, in-hospital complications, and complications at the 1-month follow-up after TRI.
The target lesions were classified as type A, B1, B2, or C based on the American Heart Association/American College of Cardiology (AHA/ACC) criteria using variables such as length, angulation, tortuosity, calcification, and chronicity (20). Angulation was estimated by recording the angle formed between the proximal and distal vessel axes (≥45° = moderate; ≥90° = severe). A tortuous lesion was defined as having at least three ≥45° bends in the vessel direction along the main trunk during the diastolic period. Calcification was determined based on the density of the vessel wall before injection of the contrast agent. The SYNTAX score was used to assess the complexity of CAD and assist clinicians in choosing the most appropriate revascularization strategy for patients (20). First, the SYNTAX I score was evaluated for each patient based on coronary angiography results by two experienced interventional cardiologists. Then, the SYNTAX II score was calculated by adding the clinical variables (age, creatinine clearance, LVEF, ULMCA disease, peripheral vascular disease, female sex, and COPD); these scores can be calculated automatically on the SYNTAX score website. Based on the SYNTAX I score, the patients were categorized as low (<23), intermediate (23-32), or severe (>32) CAD cases.
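For readers who want to reproduce this stratification on their own data, the short Python sketch below encodes the cut-offs stated above (SYNTAX I < 23, 23-32, > 32) and the angulation grades. It is an illustrative helper only, not the tooling used in the study (scores there were read by cardiologists and the SYNTAX website), and the example values are hypothetical.

```python
# Illustrative helper (not the study's tooling): stratify patients by SYNTAX I
# score and grade lesion angulation, using the cut-offs stated in the text.

def syntax_category(syntax_i: float) -> str:
    """Return the CAD severity stratum: low (<23), intermediate (23-32), severe (>32)."""
    if syntax_i < 23:
        return "low"
    if syntax_i <= 32:
        return "intermediate"
    return "severe"

def angulation_grade(angle_deg: float) -> str:
    """Grade the angle between the proximal and distal vessel axes."""
    if angle_deg >= 90:
        return "severe"
    if angle_deg >= 45:
        return "moderate"
    return "mild"

if __name__ == "__main__":
    for score in (18.5, 27.0, 36.5):           # hypothetical SYNTAX I scores
        print(score, syntax_category(score))   # low, intermediate, severe
    print(angulation_grade(72))                # moderate
```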
The GGEC I (Boston Scientific, Natick, MA, United States) is a 145-cm single-lumen rapid-exchange catheter compatible with a 6F guiding catheter, with an inner diameter of 0.057 inches (1.45 mm) and an outer diameter of 0.066 inches (1.68 mm). The catheter consists of a 120-cm stainless-steel hypotube push rod joined to a 25-cm wire-braided mesh/polymer distal segment. In the patients included in this study, the GGEC I was mainly used as a salvage treatment to create a smooth pathway for delivering a balloon or stent to the target lesion after high-pressure balloon pre-dilatation, without replacing the original guiding catheter. The specific application indications were: (1) anomalous origin or angulated take-off of native coronary arteries; (2) extremely tortuous vessel; (3) heavy calcification; (4) distally located lesion; (5) picking up the retrograde wire during a CTO intervention; and (6) releasing the burr incarceration. In the presence of multiple indications, the main reason requiring GGEC I use was listed as the primary indication. Deep GGEC I intubation was defined as an intubation depth of more than 20 mm. A schematic diagram of GGEC I use in complex PCI is shown in Figure 1. In this study, experienced interventional cardiologists performed TRI with the GGEC I according to standard clinical protocols. The other courses of treatment were decided by the attending physician and the consulting operator. Effectiveness was defined as successful implantation of a stent or drug-coated balloon (DCB) angioplasty in the targeted lesion area with residual stenosis of less than 20% and Thrombolysis in Myocardial Infarction (TIMI) grade 3 flow. Complications were defined as procedure-related complications (e.g., dissection, perforation, stent stripping or dislodgement, shaft breakage, and acute thrombosis) and major clinical events (e.g., intractable angina, recurrence of myocardial infarction, repeated revascularization, and all-cause death) during the hospital stay and a 1-month follow-up.
Statistical analysis
Statistical analyses were performed using SPSS 22.0 (IBM, Armonk, NY, United States). Continuous variables were expressed as median [interquartile range (IQR)] and compared by the Wilcoxon signed-rank test. Categorical data were presented as n (%) and compared by the chi-square (χ2) test. All the contributing authors guaranteed the reliability, interpretation, and lack of bias for all the investigated aspects of the study. P < 0.05 was considered statistically significant.
TABLE (fragment): Types of coronary artery disease; single-vessel lesion, n (%): 9 (4.…).
FIGURE: Indications for GGEC I use in TRI.
Results
In most cases, the GGEC I was inserted more than 20 mm deep into the vessels using the balloon-assisted sliding and tracking (BLAST) technique. In 18 cases (7.6%), TRI was successfully completed with the help of the GGEC I after rotational atherectomy. The mean operative time was 58 min, and the overall success rate was 94.1%, with four patients receiving DCB treatment without stents. For each vessel, CTO was associated with a longer operative time (RCA: 93 vs. 63.5 min; LAD: 76.5 min for CTO, with the non-CTO comparison given in Table 3). Furthermore, the analysis of interventional devices showed that one to two workhorse guide wires (44.3 and 40.3%, respectively) were needed in most cases, while one or more than three CTO guide wires (17.6 and 15.4%, respectively) were needed for opening the CTO lesions (Figures 4, 5). On the other hand, one pre-dilation balloon (46.6%) and two or more than three post-dilation balloons (31.2 and 34.8%, respectively) were usually used for lesion modification and optimization, with a high proportion of domestic stent implantation (63.8%) (Figure 4). No GGEC-associated procedural complications (dissection, stent dislodgement, shaft breakage, etc.) were reported. Two patients (0.9%) with LAD PCI experienced subacute stent thrombosis on the fourth and sixth day after stenting and were successfully treated by emergency high-pressure dilation with a non-compliant (NC) balloon. No other procedure-related complications or major clinical events occurred during the hospitalization or follow-up period of at least 1 month. Stratification based on the SYNTAX I score showed that there were no significant differences in procedure time and success rate among low (<23), intermediate (23-32), and severe (>32) CAD patients treated with TRI using the GGEC I (P > 0.05). The procedure times were 58.5, 54, and 57 min, respectively, while the success rates were 95.4, 93, and 93.7%, respectively (Table 4).
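The stratified comparison above (success rates and procedure times across the three SYNTAX categories) can be reproduced along the lines of the sketch below. This is an illustration only: the study's analyses were run in SPSS, the counts and times here are hypothetical placeholders rather than study data, and the Kruskal-Wallis test for the three-group time comparison is my assumption (the methods name the Wilcoxon test for two-sample comparisons).

```python
# Sketch only (the study used SPSS). Success rates across the three SYNTAX
# strata are compared with a chi-square test, operation times with a
# Kruskal-Wallis test. All numbers below are hypothetical placeholders.
import numpy as np
from scipy import stats

# success / failure counts per stratum (low, intermediate, severe) -- hypothetical
counts = np.array([[62, 3],
                   [80, 6],
                   [66, 5]])
chi2, p_rate, dof, _ = stats.chi2_contingency(counts)
print(f"success rate: chi2={chi2:.2f}, df={dof}, p={p_rate:.3f}")

# operation times in minutes per stratum -- hypothetical samples
rng = np.random.default_rng(0)
low, mid, high = (rng.normal(m, 15, n) for m, n in ((58.5, 65), (54, 86), (57, 70)))
h_stat, p_time = stats.kruskal(low, mid, high)
print(f"operation time: H={h_stat:.2f}, p={p_time:.3f}")
```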
In this study, 14 cases were reported as failures. In the low category, the only failure was a 54-year-old male patient with LCX CTO as the target lesion. During the procedure, although the guidewire successfully passed through the occlusion and entered the distal true lumen, a pre-dilation balloon of 1.5 mm or 1.25 mm in diameter could not pass through the occluded lesion even after repeated attempts at providing support with the GGEC I, owing to extreme vascular tortuosity and heavy calcification; consequently, the operation was terminated. In the intermediate group, four operations failed, including that of a 74-year-old male patient with RCA CTO as the target lesion, in whom the guidewire failed to pass through the occluded lesion. In the remaining three patients, with LAD CTO, RCA CTO, and LCX as the target lesion, respectively, the reason for failure was extreme vascular tortuosity and heavy calcification preventing the small-sized pre-dilation balloon from passing through or effectively dilating the target lesion even with the support of the GGEC I. In the severe group, a total of nine failed cases (all males) were reported, including seven cases with CTO lesions (one LCX, two LAD, and four RCA cases). Except for a 74-year-old male patient with RCA CTO who suddenly changed his mind and refused the operation, the procedure in the remaining six CTO and two RCA cases was stopped because, although the guidewire successfully passed through the occluded lesions, the small-sized pre-dilation balloon could not pass through or effectively dilate the target lesion even with the GGEC I's support, owing to extreme vascular tortuosity and heavy calcification (Table 4).
Discussion
The present study suggested that the GGEC I used in TRI could be beneficial, with few complications. The target vessel's anatomical characteristics might be the determinants of GGEC I use, regardless of the SYNTAX I or II score, contributing to increasing the success rate of complex PCI in patients with CAD.
In the present study, the overall mean operation time was 58 min, and the overall success rate was 94.1%. A roughly similar success rate was reported in a previous study (12). As expected, a significantly longer operative time and a significantly lower success rate were observed for CTO lesions compared with non-CTO lesions of the corresponding vessels (21). The current results indicate that CTO remains one of the greatest challenges faced by interventional cardiologists. Thus, a hybrid strategy should be actively implemented to open complex coronary CTO efficiently, safely, and precisely. In addition, one to two workhorse guidewires were required for most cases, while one or two to four CTO guidewires were used for opening the CTO. This usage was mainly dependent on the interventional cardiologist's knowledge of the anatomical characteristics of the CTO, understanding of the CTO guidewire, and ability to manipulate the CTO guidewire (22).
Furthermore, in most cases, more than two NC-balloons (66%, n = 221) and domestic stents (63.8%, n = 221) were used. The implementation of plaque modification for tortuous and calcified lesions (23-26) and the optimization of stent implantation (27,28) might have contributed significantly to the relatively good short-term prognosis in this study. Of course, this also depends on the interventional cardiologist's experience and operation skills (22). Notably, this study detected two patients (0.9%) with a culprit lesion in the anterior descending branch who developed subacute thrombosis within 1 week of the PCI due to stent malapposition caused by heavy vascular calcification. They were treated by an emergency procedure of NC-balloon high-pressure dilation. No other major cardiovascular events were reported during the observation period in the hospital or during the 1-month follow-up after discharge. Therefore, the GGEC I is probably safe in TRI even in CAD patients with a high-risk SYNTAX score; in addition, a favorable long-term outcome is expected for patients who underwent successful PCI.
FIGURE 3 (legend, continued): …Under the support of the Corsair microcatheter, Sion was carefully manipulated through the S1 collateral branch (d) to advance the Corsair microcatheter to the RCA 3-segment along the guidewire and was exchanged for the UB 3 via the microcatheter; then, UB 3 was directed to the RCA 2-segment through the occluded vessel's distal segment. Next, Miracle 6.0 was manipulated forward to penetrate the proximal fibrous cap of the occlusive lesion to the RCA 3-segment, and the GGEC I was sent along Miracle 6.0 to the RCA 1-segment. Then, the Reverse Controlled Antegrade and Retrograde Subintimal Tracking (R-CART) technique was initiated (e), using the Active Greeting Technique (AGT) to reversely manipulate Fielder XT into the GGEC I (f). Fielder XT was pushed further forward into the 6F SAL 1.0 and then anchored with a balloon so that the Corsair could be forcefully pushed into the 6F SAL 1.0 (g). The Rendezvous technique failed, and Sion was reversely manipulated into the antegrade Finecross microcatheter through the Corsair microcatheter, and the Finecross microcatheter was pushed forward to the posterior branches of the left ventricle (PL) along Sion while the Corsair microcatheter was withdrawn (h). Stents were implanted after dilating the occluded vessel with pre-dilation balloons of different sizes along the guidewire once the Finecross microcatheter had been removed (i). Case 3: An example of PCI using the GGEC I in both LM and LAD bifurcation lesions. 6F JL3.5 angiography showed approximately 80% stenosis of the LM end, 50-80% stenosis of the LAD 6-7 segment, and subtotal occlusion in the proximal segment of D1 (a,b). The two Runthroughs were carefully manipulated to enter the ends of the LAD and D1, respectively, and the narrow lesions were expanded with a balloon, resulting in residual stenosis of approximately 50% in the LM end and severe dissection of LAD7 and proximal D1, with thrombolysis in myocardial infarction (TIMI) grade 3 flow seen on angiography (c). The Crossover strategy was used to treat the LM bifurcation lesion, and the inverse mini-crush technique was used to treat the LAD and D1 bifurcation lesion (d). However, after repeated attempts, it was difficult for the stent to enter D1 and completely cover the lesion (e), so a stent was immediately implanted in LM-LAD (f). Angiography showed that the stent was fully expanded, with TIMI grade 3 flow (g). Next, a GGEC I was sent along the guidewire to the opening of D1, and a stent was successfully delivered to D1 via the GGEC I, completely covering the lesion (h). After the GGEC I was withdrawn to the LAD opening, the stent was successfully released in D1 (i).
Angiography showed that the stent was fully expanded and the blood flow was TIMI level 3 (j). Finally, by rewiring the Runthrough into the LAD and completing the Final Kissing step (k), the operation was a success (l). Case 4: An example of PCI using the GGEC I in LAD with extreme tortuosity and severely calcified lesions. 6F EBU 3.75 angiography showed extreme tortuosity and heavy calcification in LAD 6-8 segments with approximately 80% stenosis (a). Two Runthroughs were manipulated to reach the end of the LAD through the stenosis, and Sprinter 2.0 × 15 mm and NC Sprinter 2.5 × 15 mm were pushed into place in turn with difficulty, under the support of a double guidewire, which was used to expand the stenosis under high pressure (20-24 atm) (b). Then, three stents were implanted via the GGEC I from distal to proximal LAD 6-8 (c).
FIGURE 3 (continued): Finally, NC-balloons of different sizes were selected to expand the stents under high pressure (20-24 atm) (d), and angiography showed that the stents were fully expanded, with TIMI grade 3 flow (e,f). Case 5: An example of PCI using the GGEC I in CTO of the RCA with an abnormal opening. Angiography showed that the ascending aorta was significantly widened, the end of the RCA 2 segment showed localized occlusion, and bridging collaterals supplied blood to make the distal vessels partially visible (a). The LAD provided the collaterals, and the RCA was retrogradely perfused to the end of segment 3 (b). It was difficult to keep the 6F AL 1.0 guiding catheter in place, and Sion was patiently manipulated to "float" into the RCA (c). Then, the GGEC I was slowly pushed along Sion into the RCA (d). Sion was exchanged for Conquest Pro 8-20 via the Finecross and manipulated carefully through the occluded segment (e) and into PL (f) under multi-position fluoroscopy. Under the support of the GGEC I, pre-dilatation was performed using balloons of different specifications (g,h). Finally, the stent was successfully implanted in the occluded segment (i). Case 6: An example of PCI using the GGEC I to release burr incarceration. 6F EBU 3.75 angiography showed heavy calcification in LAD 6-8 segments with about 90% stenosis (a). A Runthrough was carefully manipulated to reach the end of the LAD through the narrow lesions, and an NC Trek 2.0 × 12 mm was selected for high-pressure dilatation (20-24 atm). However, the balloon was still not fully expanded, and its body showed an obvious indentation (b). Rotational atherectomy was started and a 1.5-mm burr passed through the stenosis successfully, but it became incarcerated during the third polishing pass (c). A first attempt to insert a second guidewire and dilate the stenosis near the burr with a balloon was unsuccessful (d). The rotational catheter was immediately cut, and the GGEC I was advanced into the guiding catheter and reached the LAD 6 segment along the rotational catheter's inner core. After the burr was wrapped tightly by the soft distal tip of the GGEC I (dotted white line), it was successfully removed from the body together with the GGEC I (e). Finally, stents were successfully implanted after NC-balloon dilation (f).
The GGEC I is a very useful tool in challenging cases of complex PCI, as its increased intubation depth provides a stronger backup force (11,12,29). Nevertheless, it should be noted that, per the manufacturer's instructions, the extension of the catheter beyond the guiding catheter should be less than 15 cm; otherwise, the mother-in-child catheter may lose coaxiality and hinder the withdrawal of the interventional device. In addition, the blunt tip with its edged tubular structure may be caught in fibrous plaques or stent metal mesh beams while the GGEC I is pushed along the diseased coronary artery. On the other hand, the edge of the stainless-steel collar may hinder the entry of interventional devices into the GGEC I. Therefore, non-standard procedures can lead to a series of perioperative complications, including shaft breakage, stent stripping or dislodgement, coronary artery dissection, and even perforation or incarceration; these risks should especially be kept in mind by inexperienced interventional cardiologists. Compared with first-generation guide extension systems, including the GuideLiner™ (Teleflex, Morrisville, NC, United States), the Guidezilla™ (Boston Scientific, Natick, MA, United States), and the Telescope™ (Medtronic, Santa Rosa, CA, United States), the CrossLiner™, with a leading, flexible, low-profile, monorail inner microcatheter (0.017" leading "microcatheter" tip), may overcome the deficiencies of "blunt-end" tubular structures and allow safe, deep coronary intubation (30), although further clinical validation is needed.
Based on the reported cases and previous studies (10,11,13), the following techniques can be summarized. The GGEC I should be pushed into the guiding catheter through the Y-connector along the guidewire, in the same direction and at a constant slow speed. When the GGEC I is difficult to push through the lesion, the lesion must first be carefully modified with multiple pre-dilatations (even with NC-balloons) along the way before pushing is attempted again. It can also be carefully and slowly pushed forward close to the target lesion under fluoroscopy with the help of dual guidewire support if necessary. Of course, the most desirable method in this situation is to use the BLAST technique. During the procedure, complications such as coronary artery injury, dissection, hematoma expansion, and longitudinal compression of an implanted stent should be monitored. When it is difficult to advance the guidewire, balloon, stent, or other interventional devices into the GGEC I because of resistance, which usually occurs when the aortic arch is extremely tortuous, the device should be slightly retracted and rotated to adjust the angle, and then reinserted. Alternatively, these interventional devices can be successfully inserted after the edge of the stainless-steel collar of the GGEC I is dilated at 8-10 atm with a 2.0-mm pre-dilation balloon. The GGEC I should be promptly withdrawn into the guiding catheter or a normal vascular segment after the balloon or stent is placed at the target lesion, to avoid interference with coronary blood flow. Although the 6F GGEC I has a large inner diameter (0.057 inches) that enables the delivery of most interventional devices, a covered stent (crossing profile 0.064-0.068 in) cannot be delivered through it.
At present, the main indications for GGEC I use are: (1) anomalous origin or angulated take-off of native coronary arteries; (2) extremely tortuous vessel; (3) heavy calcification; (4) distally located lesion; (5) picking up the retrograde wire during CTO intervention; and (6) releasing the burr incarceration (11,30,31). In the present study, the main indications were heavy calcification (67%), followed by extreme tortuosity (12.2%), extreme tortuosity and heavy calcification (10.9%), distally located lesion (4.5%), picking up the retrograde wire when using the active greeting technique (AGT, 3.2%), anomalous origin of the vessel (1.8%), and releasing the burr incarceration (0.4%). In addition, the GGEC I is usually used for remedial purposes, which inevitably leads to longer operation time, higher radiation exposure dose, and higher contrast agent dosage (11,31,32). Still, the results of the SYNTAX I score stratification in this study indicate that the SYNTAX score was beneficial in guiding the revascularization strategy, whereas anatomical characteristics were more significant in the successful treatment of a single target vessel.
TABLE 3 Procedural data of patients who underwent TRI using the GGEC I (n = 221).
Based on the observations above, we suggest that the anatomical characteristics of target coronary artery lesions can be quantified as follows: diffuse, tortuous, calcified, angulated, CTO, abnormal coronary opening, and distal lesions can be scored as 1, 3, 3, 3, 2, 1, and 1, respectively. With an accumulated score ≥ 3, especially for patients in the intermediate (23-32) and severe (>32) CAD groups based on the SYNTAX I score, the GGEC I should be used actively to achieve a successful result. Its use might shorten the operative time, reduce the radiation exposure dose, decrease the amount of contrast agent, and avoid possible complications such as contrast-induced nephropathy (CIN) (12). Still, these possible advantages need to be confirmed in large randomized controlled trials. In addition, during rotational atherectomy, the GGEC I can be used to release burr incarceration, which is the best emergency treatment method (as shown in Figure 3, case 6); it could help avoid the fatal complications caused by prolonged incarceration or a burr lodged in the vessel. The GGEC I can therefore be prepared in advance for complex rotational atherectomies, including those involving extreme tortuosity, calcification, and angulated lesions.
FIGURE 4 Percentages of interventional devices in patients who underwent TRI using the GGEC I (n = 221); chronic total occlusion (CTO) guide wire usage analysis (n = 84).
This study had several limitations. It had a retrospective, single-center design, and the performance of the intervention might be influenced by the cardiologist's skills and experience. Therefore, to address these limitations, a randomized controlled trial should be conducted based on the anatomical characteristics of the target coronary artery lesions.
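As a rough illustration of the anatomical lesion score proposed earlier in this section, the sketch below encodes the stated weights (diffuse 1, tortuous 3, calcified 3, angulated 3, CTO 2, abnormal opening 1, distal 1) and the ≥ 3 threshold. The feature names and the example lesion are mine; this is not a validated instrument.

```python
# Sketch of the anatomical lesion score proposed in the text; weights and the
# threshold come from the paragraph above, feature names are illustrative.
LESION_WEIGHTS = {
    "diffuse": 1,
    "tortuous": 3,
    "calcification": 3,
    "angulation": 3,
    "cto": 2,
    "abnormal_opening": 1,
    "distal_lesion": 1,
}

def anatomical_score(features: set[str]) -> int:
    """Sum the weights of the anatomical features present in a target lesion."""
    return sum(LESION_WEIGHTS[f] for f in features)

def consider_guide_extension(features: set[str], threshold: int = 3) -> bool:
    """Flag lesions for which active use of a guide extension catheter is suggested."""
    return anatomical_score(features) >= threshold

# Example: a heavily calcified, distally located lesion scores 3 + 1 = 4
print(consider_guide_extension({"calcification", "distal_lesion"}))  # True
```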
In conclusion, the GGEC I might be beneficial, with few complications, for complex PCI performed via the transradial route, and its performance is not affected by SYNTAX score stratification. The anatomical properties of the target vessel might be the determinant of GGEC I use. Therefore, interventional cardiologists should anticipate procedural difficulties and actively use this tool to achieve successful PCI, especially in intermediate and severe cases based on the SYNTAX score.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee for Human Study of the First Affiliated Hospital of Xi'an Jiaotong University (XJLS-2016-354). The study was conducted in compliance with the ethical principles of the Declaration of Helsinki. The requirement for informed consent was waived by the committee. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions
XL conceived the clinical study and made substantial contributions to the study design and data acquisition, analysis, and interpretation, as well as drafting the manuscript. QL, YF, YX, and DW participated in designing and conducting the study and interpreting the results. MD, JL, and TY oversaw the data collection and input processes. All authors contributed to revising the manuscript and gave final approval for its publication.
Funding
This project was supported by the Key Science and Technology Program of Shaanxi Province, China (to XL; 2018SF-079). | 2022-08-17T13:16:00.127Z | 2022-08-17T00:00:00.000 | {
"year": 2022,
"sha1": "007303bbe68f9d8af7ef3e1347e1397015ee0aa6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "007303bbe68f9d8af7ef3e1347e1397015ee0aa6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
10630211 | pes2o/s2orc | v3-fos-license | Genetic variants and traits related to insulin-like growth factor-I and insulin resistance and their interaction with lifestyles on postmenopausal colorectal cancer risk
Genetic variants and traits in metabolic signaling pathways may interact with lifestyle factors such as obesity, physical activity, and exogenous estrogen (E), influencing postmenopausal colorectal cancer (CRC) risk, but these interrelated pathways are not fully understood. In this case-cohort study, we examined 33 single-nucleotide polymorphisms (SNPs) in genes related to insulin-like growth factor-I (IGF-I)/insulin resistance (IR) traits and signaling pathways, using data from 704 postmenopausal women in Women's Health Initiative Observational Study ancillary studies. Stratifying by the lifestyle modifiers, we assessed the effects of IGF-I/IR traits (fasting total and free IGF-I, IGF binding protein-3, insulin, glucose, and homeostatic model assessment-insulin resistance) on CRC risk as a mediator or influencing factor. Six SNPs in the INS, IGF-I, and IGFBP3 genes were associated with CRC risk, and those associations differed between non-obese/active and obese/inactive women and between E nonusers and users. Roughly 30% of the cancer risk due to each SNP was mediated by IGF-I/IR traits. Likewise, carriers of 11 SNPs in the IRS1 and AKT1/2 genes (signaling pathway-related genetic variants) had different associations with CRC risk between strata, and the proportion of the SNP-cancer association explained by traits varied from 30% to 50%. Our findings suggest that IGF-I/IR genetic variants interact with obesity, physical activity, and exogenous E, altering postmenopausal CRC risk, partly through IGF-I/IR traits but also through different pathways. Unraveling gene-phenotype-lifestyle interactions will provide data on potential genetic targets in clinical trials for cancer prevention and intervention strategies to reduce CRC risk.
Introduction
Colorectal cancer (CRC) is the third most commonly diagnosed cancer and the third leading cause of cancer death in American women. [1] Incidence and death rates for CRC increase with age; approximately 90% of new cases and deaths occur in people aged 50 and older. [1] About 35% of the susceptibility to CRC is ascribed to genetic factors, while the remaining 65% is attributed to environmental factors such as obesity, physical activity, diet, smoking, and Type II diabetes (DM). [2] The effect of these lifestyle factors on CRC risk may be partially mediated by the insulin-like growth factor-I (IGF-I)/insulin pathway. The IGF-I/insulin resistance (IR) axis has been associated with CRC in multiple studies. [3-5] Higher levels of total/free bioactive IGF-I and lower levels of IGF-binding protein 3 (IGFBP3) have been associated with higher CRC risk in both pre- and post-menopausal women. [3,4,6] In postmenopausal women, high levels of both insulin and glucose are positively associated with CRC. [4,7] High IGF-I levels and IR (characterized by hyperinsulinemia and hyperglycemia) contribute to overexpression of IGF/insulin receptors. The overexpression leads to the enhanced anabolic state necessary for cell proliferation, differentiation, and anti-apoptosis via multiple abnormal cellular signaling cascades, including the insulin receptor substrate-1 (IRS-1) and protein kinase B (Akt) pathway. [8,9] Thus, high IGF-I and IR, through deregulating or overactivating multiple downstream pathways, may exert their effects on carcinogenesis.
IGF-I/IR traits that have been associated with CRC risk, and that we considered in this study, include IGF-I, IGFBP3, insulin, glucose, and homeostatic model assessment-insulin resistance (HOMA-IR) levels. Considering the associations of the IGF-I/IR traits and their signaling pathways with CRC risk, genetic variants that may influence the levels of these traits and aberrant signaling cascades are possibly associated with the risk of CRC. However, population-based epidemiologic studies of these genetic variants (e.g., single-nucleotide polymorphisms [SNPs]) and CRC risk have yielded inconsistent findings. [8,10-17] These conflicts are possibly due to different sets of covariates (e.g., in women, whether to account for menopausal status or whether to consider different combinations of hormone therapy), lack of consideration for adjustment of relevant phenotypes and of interactions with lifestyle factors, and different races/ethnicities. Further, few studies have examined these IGF-I/IR-related genetic variants and risk of CRC in postmenopausal women, a population highly susceptible to CRC.
Abdominal obesity and physical inactivity are associated with increased risk of postmenopausal CRC. [18-20] In particular, physical inactivity accounts for about 15% of CRC cases, [21] and the highest physical activity (PA) guideline scores, compared with the lowest scores, are associated with a 50% lower risk of CRC. [20] The relationships of obesity and obesity-related factors such as PA with CRC could be mediated via IGF-I/IR traits. [19,21,22] Few studies have examined whether the association between obesity/obesity-related factors and CRC risk is affected by IGF-I/IR genetic variants. [17,21,23] Although the genetic variants had minimal or modest effects on the obesity-CRC relationship in those studies, the findings suggest that genetic variants related to IGF-I/IR traits and their signaling pathways interact with obesity and jointly influence CRC susceptibility.
In addition, endogenous estrogen (E) interacts with IGF-I/IR traits, their signaling pathways, and their target genes through a synergistic cross-talk mechanism, leading to an enhanced anabolic state necessary for tumor growth. [24-28] In postmenopausal women, endogenous E level is associated with higher risk of CRC. [4] Exogenous E has a different effect than endogenous E on CRC risk. Oral exogenous E has been postulated to interact with circulating levels of IGF-I/IR and their downstream pathways to affect cancer risk. Because of the hepatic first-pass effect of oral E, which suppresses IGF-I production in the liver, oral E users have lower IGF-I levels, accompanied by increased IR, [29] and a lower risk of CRC than E nonusers. However, exogenous E has conflicting associations with CRC risk, depending on the combination of hormone therapy (E only vs. E + progestin [P]) administered. While E + P use has consistently been related to a decreased CRC risk, [30,31] E-only use has not been uniformly associated with CRC risk: lower [32,33] and higher [34] risks of CRC and nonsignificant associations [35,36] have been observed.
By conducting this case-cohort study among non-Hispanic white postmenopausal women, we examined the pathway of IGF-I/IR traits/signaling-genetic variants, IGF-I/IR traits, and CRC risk. In this pathway, IGF-I/IR traits (circulating levels of IGF-I, IGFBP3, insulin, glucose, and HOMA-IR) have two different roles in the relationship between the genetic variants and CRC: mediator (in relation to IGF-I/IR traits-related genetic variants) and influencing factor (in relation to IGF-I/IR signaling pathways-related genetic variants).
Further, obesity status, PA, and exogenous E use status could influence the association between IGF-I/IR genetic factors and their traits, and through these interactions, are associated with CRC. We thus evaluated how the pathway of IGF-I/IR's genetic variants, IGF-I/IR traits, and CRC is influenced by obesity, PA, and exogenous E (E only and E + P). Unraveling these complicated gene-phenotype-cancer pathways and interactions with lifestyle factors will provide insights into the role of the IGF-I/IR axis in the development of CRC in postmenopausal women.
Study population
The study included 704 postmenopausal women who were enrolled in ancillary studies of the Women's Health Initiative Observational Study (WHI-OS) from October 1, 1993 through December 31, 1998. Details of the WHI's design and rationale have been described elsewhere. [37] Eligible women were 50-79 years old, postmenopausal, planned to live near the clinical centers for at least 3 years after study enrollment, and able to provide written consent. The ancillary studies were designed for a nested case-cohort study within the WHI-OS, including only women who reported their race or ethnicity as non-Hispanic white (n = 2,148). For our study purpose, we initially included 1,136 of those women who were eligible for the colorectal case-cohort study (S1 Fig). Of those, we excluded 193 women who had been followed up for less than 1 year or had been diagnosed with any cancer at enrollment. Among these (n = 943), we included women (n = 887) who did not have DM at enrollment or later and had at least one of five measurements (i.e., total and free IGF-I, IGFBP3, glucose, and insulin obtained after at least 8 hours' fast) available at baseline. We excluded another 2 women whose information on SNPs was not available or whose missing-call rates were more than 50%. Finally, we excluded 181 women for whom the information on covariates was unavailable, resulting in a total of 704 women (CRC cases = 237, controls = 467; 80% of the eligible 885). As of February 29, 2004, the ancillary studies completed the selection of women with a mean follow-up of 77 months. [38] This study was approved by the institutional review boards of each participating clinical center of the WHI and the University of California, Los Angeles.
Data collection and cancer outcome variables
Data had been uniformly collected using standardized written protocols. At baseline, self-administered questionnaires were completed by participants regarding demographic factors (age, education, family income, and family histories of DM or CRC), lifestyle factors (PA, smoking status, and alcohol intake), and medical (cardiovascular disease and hypercholesterolemia) and reproductive histories (oral contraceptive and exogenous E use [never vs. ever use of unopposed estrogen (E only) and opposed estrogen (E + P) from pills or patches], history of hysterectomy or oophorectomy, ages at menopause and menarche, and pregnancy history). Anthropometric measurements such as height, weight, and waist and hip circumferences were taken at baseline by trained staff. The above variables were initially selected for this study on the basis of a literature review of their associations with IGF-I/IR and CRC. After multicollinearity testing and univariate and stepwise regression analyses, the final set of variables to be analyzed was selected.
Cancer outcomes were determined through a centralized review of medical charts, and cancer cases were coded according to the National Cancer Institute's Surveillance, Epidemiology, and End-Results guidelines. [39] The outcome variables for our study were CRC and the time to development of CRC. The time from enrollment to CRC development, censoring, or study end-point was recorded as the number of days and then converted into years.
Genotyping and laboratory methods
Six genes (S1-S6 Tables) were chosen on the basis of the biologic significance of their gene products or whether epidemiologic and/or experimental data support an association between the gene and the levels of IGFs and insulin or between the gene and risk of cancer. [8,[40][41][42][43][44][45][46][47][48][49][50] For each gene, HTSNP2 software (http://www-gene.cimr.cam.ac.uk/clayton/software/stata) was used to search all possible subsets of SNPs that best captured the full haplotype information. Specifically, the selected SNPs had a minimum allelic association of 0.8 with the unselected SNPs within a linkage disequilibrium block. A total of 33 SNPs from the 6 genes were identified.
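The tag-SNP idea described above can be illustrated with a toy sketch. This is not the HTSNP2 algorithm (which searches subsets capturing the full haplotype information); it is only a rough greedy stand-in that repeatedly adds the SNP covering the most still-untagged SNPs at a pairwise LD of r² ≥ 0.8, and the LD matrix below is hypothetical.

```python
# Rough greedy sketch of tag-SNP selection (NOT the HTSNP2 algorithm): keep
# adding the SNP that "covers" the most still-untagged SNPs at r^2 >= 0.8,
# until every SNP is covered. `r2` is a precomputed pairwise LD matrix.
import numpy as np

def greedy_tag_snps(r2: np.ndarray, threshold: float = 0.8) -> list[int]:
    n = r2.shape[0]
    uncovered = set(range(n))
    tags = []
    while uncovered:
        # coverage of each candidate SNP over the currently uncovered set
        cover = [sum(1 for j in uncovered if r2[i, j] >= threshold) for i in range(n)]
        best = int(np.argmax(cover))
        tags.append(best)
        uncovered -= {j for j in uncovered if r2[best, j] >= threshold}
    return tags

# toy 4-SNP LD matrix (hypothetical values)
r2 = np.array([[1.0, 0.9, 0.2, 0.1],
               [0.9, 1.0, 0.3, 0.1],
               [0.2, 0.3, 1.0, 0.85],
               [0.1, 0.1, 0.85, 1.0]])
print(greedy_tag_snps(r2))   # [0, 2] tags all four SNPs
```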
The MassARRAY system (Sequenom, Inc., San Diego, CA), based on mass spectrometry, was used for genotyping. Using a standardized protocol, quality assurance was conducted with a missing-call rate of < 1%, a discordant-call rate of < 3%, and a Hardy-Weinberg equilibrium p ≥ 0.0001.
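A minimal sketch of how such SNP-level quality filters could be applied is given below. It is illustrative only: genotypes are assumed to be coded as 0/1/2 copies of the minor allele with NaN for missing calls, the Hardy-Weinberg check uses a simple 1-df chi-square test, the thresholds are the ones stated above, and the duplicate-call discordance check is omitted.

```python
# Sketch of SNP quality filters (missing-call rate and a chi-square
# Hardy-Weinberg test); genotype coding and data below are assumptions.
import numpy as np
from scipy.stats import chi2

def hwe_pvalue(genotypes: np.ndarray) -> float:
    g = genotypes[~np.isnan(genotypes)]
    n = len(g)
    obs = np.array([(g == 0).sum(), (g == 1).sum(), (g == 2).sum()], dtype=float)
    p = (2 * obs[2] + obs[1]) / (2 * n)            # minor-allele frequency
    exp = n * np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    stat = ((obs - exp) ** 2 / exp).sum()
    return float(chi2.sf(stat, df=1))              # 1 df for the HWE test

def passes_qc(genotypes: np.ndarray) -> bool:
    miss_rate = np.isnan(genotypes).mean()
    return miss_rate < 0.01 and hwe_pvalue(genotypes) >= 1e-4

rng = np.random.default_rng(0)
snp = rng.choice([0, 1, 2], size=500, p=[0.49, 0.42, 0.09]).astype(float)
print(passes_qc(snp))   # True for this simulated SNP in HWE
```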
At baseline, fasting blood samples had been collected from each participant by trained phlebotomists. Serum concentrations of glucose and insulin were measured by Medical Research Laboratories (Highland Heights, KY) using assays with sensitivities of 0.5 mg/dL and 0.26 μIU/mL; average coefficients of variation (CV) of 4.2% and 3.4%; and correlation coefficients of 0.95 and 0.98, respectively. The HOMA-IR was calculated as glucose (mg/dL) × insulin (μIU/mL) / 405. [51] Serum total and free IGF-I and IGFBP3 were measured by using enzyme-linked immunosorbent assays (Diagnostic Systems Laboratories, Webster, TX) with sensitivities of 0.01 ng/mL, 0.015 ng/mL, and 0.04 ng/mL; average CVs of 8.2%, 11.2%, and 3.6%; and correlation coefficients of 0.96, 0.9, and 0.9, respectively.
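The HOMA-IR definition above translates directly into a one-line helper; the example values are hypothetical.

```python
def homa_ir(glucose_mg_dl: float, insulin_uiu_ml: float) -> float:
    """HOMA-IR = fasting glucose (mg/dL) * fasting insulin (uIU/mL) / 405."""
    return glucose_mg_dl * insulin_uiu_ml / 405.0

print(round(homa_ir(95.0, 8.0), 2))  # -> 1.88 for a hypothetical participant
```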
Statistical analysis
Differences in baseline characteristics and allele frequencies, across strata of obesity status (body mass index [BMI], waist circumference, and waist-to-hip ratio [w/h]), level of PA, and exogenous E use, were evaluated by using unpaired two-sample t tests for continuous variables and chi-squared tests for categorical variables. If continuous variables were skewed or had outliers, Wilcoxon's rank-sum test was used.
With the regression assumptions met, multiple linear regression was performed to estimate effect sizes and 95% confidence intervals (CIs) for the exposures (IGF-I/IR-related SNPs under additive, minor-allele dominant, and minor-allele recessive models) in predicting the outcomes (IGF-I/IR traits: fasting total and free IGF-I, IGFBP3, glucose, insulin, and HOMA-IR levels). The Cox proportional hazards regression model designed for case-cohort data was fitted using the "cch" function in the "survival" package from the Comprehensive R Archive Network. After assumption testing was done via a Schoenfeld residual plot and rho, the Cox model was used to obtain hazard ratios (HRs) and 95% CIs for IGF-I/IR traits and IGF-I/IR-related SNPs in predicting CRC.
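For orientation, the sketch below shows one way such a case-cohort Cox model could be approximated in Python. It is only a rough stand-in for the R survival::cch analysis named above: it fits a weighted Cox model with lifelines, up-weighting sampled non-case subcohort members by the inverse sampling fraction, and the column names and sampling fraction are hypothetical.

```python
# Rough stand-in for the case-cohort Cox analysis (the study used survival::cch
# in R). Simple inverse-sampling weights for non-case subcohort members are a
# crude approximation of a case-cohort estimator; column names are assumptions.
import pandas as pd
from lifelines import CoxPHFitter

def case_cohort_cox(df: pd.DataFrame, sampling_fraction: float) -> CoxPHFitter:
    df = df.copy()
    # cases keep weight 1; sampled non-case subcohort members are up-weighted
    df["w"] = 1.0
    df.loc[df["crc"] == 0, "w"] = 1.0 / sampling_fraction
    cph = CoxPHFitter()
    cph.fit(df, duration_col="years", event_col="crc",
            weights_col="w", robust=True)
    return cph

# df columns assumed: years (follow-up), crc (0/1), snp (0/1/2), plus covariates
# model = case_cohort_cox(df, sampling_fraction=0.05)
# print(model.summary[["exp(coef)", "p"]])
```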
We first focused on the mediation effects relating IGF-I/IR trait-related SNPs (exposure) to CRC (outcome), and on the role that IGF-I/IR traits (mediator) play in this association (Fig 1). According to the models presented in Fig 1, we first obtained the magnitude of the total effect of IGF-I/IR trait-related SNPs on CRC (the overall genetic effect, without considering the effect of IGF-I/IR traits). We then evaluated how this total effect is partitioned into indirect (cancer risk associated with IGF-I/IR trait-related SNPs mediated by IGF-I/IR traits) and direct effects (cancer risk associated with IGF-I/IR trait-related SNPs via pathways other than IGF-I/IR traits). This approach allowed us to test the hypothesis that IGF-I/IR trait-related SNPs are associated with risk of CRC and that the relationships depend on IGF-I/IR traits. The total and direct effect sizes of IGF-I/IR trait-related SNPs (exposure) on CRC (outcome) were obtained from the HRs for the SNPs predicting CRC in Cox models that included all covariates, without (total) and with (direct) the IGF-I/IR trait (mediator). The indirect effect size was estimated via a traditional statistical approach [52]: the percentage change in the HRs when comparing a model that includes all covariates with a model that includes all covariates and the mediator. Next, because IGF-I/IR traits are not mediators of the relationship between IGF-I/IR signaling pathway-relevant SNPs and CRC, we examined the effect of IGF-I/IR traits as an influencing factor on these SNP-cancer associations (Fig 2). The effect of IGF-I/IR traits on the CRC risk associated with IGF-I/IR signaling pathway-relevant SNPs was estimated using the same algorithm as that for the mediator, but it was interpreted as an influencing factor.
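The sketch below illustrates this two-model comparison: the Cox model for a SNP is fitted with covariates only (total effect) and again with the candidate mediator added (direct effect), and the percentage change in the SNP hazard ratio is reported. The exact formula is not spelled out in the text, so the literal percentage-change form used here is an assumption, the column names (snp, mediator, covariates) are hypothetical, and the case-cohort weighting shown in the previous sketch is omitted for brevity.

```python
# Sketch of the "percentage change in HRs" mediation quantity described above;
# the formula choice is an assumption, and case-cohort weights are omitted.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def percent_change_in_hr(df: pd.DataFrame, snp: str, mediator: str,
                         covariates: list[str]) -> float:
    base_cols = ["years", "crc", snp] + covariates
    total = CoxPHFitter().fit(df[base_cols], duration_col="years", event_col="crc")
    direct = CoxPHFitter().fit(df[base_cols + [mediator]],
                               duration_col="years", event_col="crc")
    hr_total = float(np.exp(total.params_[snp]))
    hr_direct = float(np.exp(direct.params_[snp]))
    # literal percentage change in the SNP hazard ratio after adding the
    # mediator; other "proportion explained" definitions exist
    return 100.0 * (hr_total - hr_direct) / hr_total

# e.g. percent_change_in_hr(df, snp="rs689", mediator="homa_ir",
#                           covariates=["age", "bmi", "smoking"])
```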
To evaluate the role of obesity, PA, and exogenous E as effect modifiers on the pathway of IGF-I/IR SNPs, IGF-I/IR traits, and CRC, we stratified participants by those potential effect modifiers and within the strata, compared the proportions of the cancer risk contributed by IGF-I/IR SNPs through IGF-I/IR traits (indirect effect) and non-IGF-I/IR traits pathways (direct effect). A 2-tailed p value < 0.05 was considered statistically significant. The R statistical package (v 2.15.1) was used.
Results
Participants' baseline characteristics and allele frequencies of 33 SNPs by obese status, level of PA, exogenous E use, and CRC status are presented in S1-S13 Tables. The participants had been followed up through February 29, 2004, resulting in 237 cases of CRC (32% of non-obese vs. 40% of obese women; 31% of active vs. 36% of inactive women; and 40% of nonusers vs. 31% of E-only users vs. 29% of E+P users are CRC cases).
CRC risk associated with IGF-I/IR trait-related SNPs that is mediated via IGF-I/IR traits, stratified by obesity status (BMI, waist, and w/h), level of PA, and exogenous E use
We partitioned the total effect of IGF-I/IR trait-related SNPs on CRC risk into direct (not via IGF-I/IR traits) and indirect (via IGF-I/IR traits) effects (Fig 1). Each SNP associated with CRC risk is presented stratified by obesity status and level of PA (Table 1) and by exogenous E use (nonuse vs. E-only or E+P use) (Table 2).
Of 19 IGF-I/IR trait-related SNPs, a few SNPs in the INS, IGF-I, and IGFBP3 genes were significantly associated with CRC risk (Tables 1 and 2). Overall, the SNP-cancer association differed between the strata (non-obese/active vs. obese/inactive; E nonuse vs. use). In all strata, the direct effect (not via IGF-I/IR traits) of the SNP-cancer risk was dominant in each SNP over the indirect effect (via IGF-I/IR traits) regardless of the mediator. Carriers of the INS rs689 T allele had an increased CRC risk among the non-obese (BMI < 30) group (Table 1). Roughly 35% of the CRC risk owing to this SNP was mediated via insulin or HOMA-IR levels in this group. However, a different mediation effect of this trait was observed when stratified by exogenous E use status. In the nonuser group, where increased CRC risk was found, the mediation effect of insulin level on this SNP-cancer association was strong (> 80%) ( Table 2). When participants were classified by exogenous E use status, 4 SNPs in nonusers were associated with CRC risk (Table 2): the IGF-I rs10778176 T allele, with increased risk, and the IGFBP3 rs2471551 C and rs3110697 A alleles and the INS rs3842763 A allele, with decreased risk. The mediation effect of the relevant trait in this nonuse group was minimal. oophorectomy, ages at menarche and menopause, and history of pregnancy]); effect-modifier variables (physical activity, BMI, and exogenous estrogen use), when not evaluated as effect modifier variables, were adjusted as a covariate; when stratified via waist circumference or waist-to-hip ratio, body mass index was not adjusted. * Indirect effect estimated via the proportional difference between the HRs without (total effect) and with (direct effect) accounting for hormone. ¶ Not applicable due to either ! 50% difference between small effect sizes or ! 100% difference between two effect sizes. https://doi.org/10.1371/journal.pone.0186296.t002
CRC risk associated with IGF-I/IR signaling pathway-related SNPs and IGF-I/IR traits, stratified by obesity status (BMI, waist, and w/h), level of PA, and exogenous E use
Because IGF-I/IR traits are not mediators of the association between SNPs in IGF-I/IR signaling-pathway genes (IRS1 and AKT1/2, in this study) and CRC, instead of estimating the mediation effect of the traits, we estimated a proportion explained by the traits as an influencing factor of the SNP-cancer relationship (Fig 2). For each SNP, the proportion was estimated using the traits (fasting levels of total and free IGF-I, IGFBP3, insulin, glucose, and HOMA-IR), after stratification by obesity status, level of PA, and exogenous E use (Tables 3-5). Of 14 SNPs in the IGF-I/IR signaling pathway-related genes, more than two thirds were associated significantly with CRC risk. Overall, the SNP-cancer association differed by obesity, PA, and exogenous E use. In addition, the proportion explained by the traits of the SNP-cancer association differed between the strata. In relation to the SNPs in the IRS1 gene ( Table 3), carriers of the IRS1 rs1801123 G and rs1801278 T alleles had increased CRC risk in inactive (MET < 10) women, with roughly 50% of CRC risk due to each SNP that was explained by traits. Further, carriers of the IRS1 rs1801278 T allele, when stratified by exogenous E use, had increased risk of CRC in E-only users; approximately 30% of the CRC risk associated with this SNP was dependent on traits.
Several SNPs in the AKT1 and AKT2 genes were significantly associated with CRC risk (Tables 4 and 5). When stratified by obesity status and PA level, carriers of the following SNPs had decreased risk of CRC in non-obese women: the AKT2 rs11673367 A allele (in the waist ≤ 88 cm, w/h ≤ 0.85, and MET ≥ 10 groups); the AKT2 rs3730256 T allele (in the BMI < 30 and w/h ≤ 0.85 groups); the AKT2 rs7247515 A allele (in the w/h ≤ 0.85 group); the AKT2 rs4332845 A allele (in the MET ≥ 10 group); and the AKT1 rs1130214 T allele (in the w/h ≤ 0.85 group). The effects of the traits on the SNP-cancer association in those groups were negligible. In contrast, carriers of one SNP (the rs2304186 A allele) in the AKT2 gene in the obese (BMI ≥ 30) group had increased risk of CRC, and about 50% of the CRC risk due to this SNP was explained by IGFBP3 levels (Tables 4 and 5).
Further, when stratified by exogenous E use, carriers of the AKT1 rs2494738 T and rs2498789 C alleles had an increased risk of CRC among nonusers; however, in E+P users, carriers of the rs2494738 T allele had a reduced risk of CRC (with 30% of the SNP-cancer association explained by insulin level). In contrast, carriers of the AKT1 rs2494740 T allele had an increased risk of CRC among E+P users, with about 50% of this SNP-cancer association explained by insulin level (Table 5).
Discussion
This study, to our knowledge, is the first to evaluate in postmenopausal women the association between IGF-I/IR-related genetic variants and CRC risk using mediation analysis to determine the extent to which CRC-SNP relationship is explained by metabolic biomarkers (i.e., IGF/IR traits). Additionally, we examined whether lifestyle factors, such as obesity, PA, and exogenous E use, modified the pathway connecting the genetic variant, trait, and CRC risk. Our major finding was that there are a number of significant associations between the IGF/IR-axis SNPs studied and CRC risk, many of which are mediated by circulating levels of metabolic biomarkers. However, these associations would be missed unless the analyses are stratified by obesity, PA level, and use of exogenous E.
Among the 33 IGF-I/IR-related SNPs we evaluated, 6 of the 19 IGF-I/IR trait-related SNPs (in the INS, IGF-I, and IGFBP3 genes) and 11 of the 14 IGF-I/IR signaling pathway-related SNPs (in the IRS1 and AKT1/2 genes) were associated with CRC risk. The association of these SNPs with CRC risk differed between strata (non-obese/active vs. obese/inactive women; E nonusers vs. users), indicating that lifestyle factors (obesity, PA, and exogenous E) modified the SNP-cancer association. For most of those SNPs that were associated with CRC in this study, the direct effect on cancer risk accounted for a majority of the total effect; roughly 30% of CRC risk associated with the SNPs was explained via IGF-I/IR traits. This suggests that the traits are not the main mediators through which IGF-I/IR SNPs are associated with CRC risk, warranting further study of the pathway (e.g., dietary or inflammatory pathways).
(Table 3 footnotes, truncated: ...factor-I; IR, insulin resistance; MET, metabolic equivalent; SNP, single-nucleotide polymorphism. Note: among SNPs having a statistically significant association with cancer in either subgroup, the SNPs with ≥ 30% indirect effect in this subgroup or its counterpart are included; numbers in bold face are statistically significant. ‡ Only lifestyle modifiers whose strata include a statistically significant association between SNP and cancer are presented. ¥ The tag attached to the SNP name indicates the SNP-analysis approach (AD: additive; D: minor-allele dominant; R: minor-allele recessive). £ Multivariate regression was adjusted for covariates (age, education, family income, family histories of diabetes mellitus or colorectal cancer, heart failure ever, high cholesterol requiring pills ever, smoking status, alcohol intake, and reproductive history [oral contraceptive use, history of hysterectomy or oophorectomy, ages at menarche and menopause, and history of pregnancy]); effect-modifier variables (physical activity, BMI, and exogenous estrogen use), when not evaluated as effect modifiers, were adjusted as covariates; when stratified via waist circumference or waist-to-hip ratio, body mass index was not adjusted. * Proportional difference was estimated via the difference in the HRs without (total effect) and with (direct effect) accounting for hormone. https://doi.org/10.1371/journal.pone.0186296.t003)
In our study, carriers of the INS rs689 T allele had increased CRC risk in non-obese women and E nonusers, with 35% and 80%, respectively, of the SNP-cancer association mediated by insulin/HOMA-IR levels. Two previous studies [13,53] examined the association between the INS rs689 and CRC risk, finding no significant association with CRC. Our study is the first to show a significant association of this SNP with CRC risk, but only among non-obese women and E nonusers, with modest and strong mediation effects of the traits, respectively. These findings suggest that the carcinogenesis pathway involving this SNP interacts with the glucose-intolerance system, and further study is needed to evaluate the implication of obesity and estrogen in this tumorigenesis mechanism.
The insulin receptor substrate-1 (IRS-1) is a main substrate initiating and directing IGF-I/IR signaling, and genetic alteration in the corresponding gene is associated with impaired downstream signaling, leading to insulin resistance and probably cancer. [22] Several previous studies evaluated the association between IRS1 rs1801278 (Gly972Arg) and CRC risk and showed inconsistent findings: no significant relationship, [13,18] increased risk, [11,17] and decreased risk [10] of CRC with the T allele (vs. the C allele) have all been reported. In our study, carriers of this allele had an increased risk of CRC among inactive women and E-only users, suggesting that an obesity-related lifestyle and exogenous estrogen play a role in modulating the effect of this SNP on carcinogenesis. Of the 3 members (Akt1, Akt2, and Akt3) of the Akt family, Akt1 and Akt2 are important signaling molecules related to a diabetic phenotype such as IR; at the genomic level, each is amplified in various cancers, including breast cancer. [54,55] The AKT1/2 genes are thus key components of this pathway, but studies of the association of their genetic variants with CRC have been limited. Consistent with one previous study [56] showing that genetic variants in the insulin pathway were associated with CRC through interactions with lifestyle factors (e.g., diet), we found that several SNPs in the AKT1/2 genes were associated with CRC by interacting with obesity, PA, and exogenous E use. Carriers of several SNPs in this Akt pathway had increased risk of CRC in obese/inactive women and decreased risk in non-obese/active women; this indicates that the signaling pathway-related carcinogenesis associated with these SNPs communicates with adiposity.
An interesting note was that when stratified by exogenous E use status, some SNPs in the same molecular pathway had different associations with CRC among E users. For example, in E+P users, carriers of 2 SNPs in the AKT1 gene had discrepant associations with CRC: the rs2494749 T allele was associated with increased risk, and the rs2494738 T allele, with decreased risk. This suggests that estrogen's cross talk with the target gene downstream of the signaling pathway affects cancer risk [24][25][26][27][28] and that the extent of this interaction may be SNP-specific and dependent on the combination of hormone therapy, but the clear mechanism is unknown.
We did not include any multiple-testing adjustments in our analyses. We tested the hypothesis that the interactions between genetic variants and lifestyle factors influence IGFs/ IR traits, resulting in altered cancer risk. We acknowledge that, as with many analyses, we might have a few false-positive results and that the results should be interpreted with care, especially when p values are close to the level of significance. Also, our findings from the mediation and proportion approach should only be interpreted statistically and do not necessarily imply any functional connections. Some analyses after stratification had large CIs with null associations due to small sample sizes. Finally, our study analyzed the data from non-Hispanic white postmenopausal women only, so, the generalizability of our findings to other populations is limited. Despite these limitations, the potential impact of our findings clearly warrants further study.
In conclusion, our findings suggest that in postmenopausal women, the IGF-I/IR axis has a potential role in the risk for CRC. Lifestyle factors including obesity, PA, and exogenous E use modulate the association between IGF-I/IR genetic variants and cancer, partially through IGF-I/IR traits. Further studies are needed to explore these complex mechanisms. Our results provide insight into gene-lifestyle interactions and suggest data on potential genetic targets for use in clinical trials for cancer prevention and intervention strategies to reduce the risk for CRC in postmenopausal women. | 2018-04-03T00:15:29.816Z | 2017-10-12T00:00:00.000 | {
"year": 2017,
"sha1": "7a3578c6152e40a9c340e1cecfe7408609222038",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0186296&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a3578c6152e40a9c340e1cecfe7408609222038",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
237452525 | pes2o/s2orc | v3-fos-license | Magnetofluidic based controlled droplet breakup: effect of non-uniform force field
We report the breakup dynamics of a magnetically active droplet (ferrofluid droplet) in a T-shaped LOC device under the modulation of a non-uniform magnetic field. We adhere to high-speed imaging modalities for the experimental quantification of the droplet splitting phenomena in the presence of a non-uniform force field gradient, while the underlying phenomenon is also supported qualitatively by numerical results. On reaching the T-junction divergence, the droplet engulfs the intersection fully and eventually deforms into a dumbbell-shaped form, making its bulges move towards the branches of the junction. We observe that the asymmetric distribution of the magnetic force lines acting over the T-junction divergence induces an accelerating motion of the left-moving bulge (since the magnet is placed adjacent to the left branch). We show that the non-uniform force field gradient allows the formation of a hump-like structure inside the left-moving bulge, which triggers the onset of augmented convection in its flow field. We reveal that this augmented internal convection developed in the left-moving volume/bulge, on getting coupled with the various involved time scales of the flow field, leads to the asymmetric splitting of the droplet into two sister droplets. Our analysis establishes that at the critical strength of the applied forcing, as realized by the critical magnetic Bond number, the flow time scale becomes minimum at the left branch of the channel, leading to the formation of a larger sister droplet therein. Inferences of the present analysis, which focuses on a simple, wireless, robust, and low-cost droplet-splitting mechanism, provide a potential solution for rapid droplet breakup, which is of significant importance in point-of-care diagnostics.
INTRODUCTION
Droplet-based microfluidics has gained widespread attention among researchers over the past two decades, owing to its capability for precise control of minute volumes (Adamson et al. 2006; Baroud et al. 2010; Manga 1996; Vladisavljević et al. 2013). It is important to mention here that the paradigm of droplet-based microfluidics has shown significant technical and research potential, specifically due to its suitability for point-of-care diagnostics (Shamloo & Hassani-Gangaraj 2020), drug delivery (Yue et al. 2020), analyzing and screening of bio/chemical reaction products (Zheng & Ismagilov 2005) and many more related areas (Madadelahi et al. 2019; Moon et al. 2010; Santos et al. 2016). Researchers have explored several aspects of droplet behavior such as droplet generation, breakup (Hoang et al. 2013; Jullien et al. 2009; Leshansky et al. 2012; Leshansky & Pismen 2009; Link et al. 2004), and merging/coalescence (Christopher et al. 2009), in various microfluidic devices such as T-junctions, flow-focusing junctions, and co-flowing junctions, to name a few. Note that the breakup of a mother droplet into two sister droplets in a simple passive microfluidic device such as a T-junction has significant engineering implications.
Researchers have shown that the size of the sister droplets split in a T-junction can be passively controlled by adjusting the length of the downstream channel (Link et al. 2004).
However, in situations where geometric limitations exist, the suitability of passive droplet breakup methodologies is highly restrained. This limitation, in particular, has led to the paradigm of active droplet breakup mechanisms, whereby external field modulated forcing such as electric (Xi et al. 2016), magnetic (Tan & Nguyen 2011), acoustic (Schmid & Franke 2013), thermal (Yesiloz et al. 2017), and optical (Marchand et al. 2012) fields are used for maneuvering the fluid flow field.
It may be mentioned here that, in comparison to other fields, utilization of a magnetic field for controlling the droplet breakup phenomena has some distinct advantages. The magnetic field does not induce any changes in the flow field such as pH, ionic concentration, or surface charge. Magnetofluidic droplet manipulation is usually realized with the help of a smart fluid known as ferrofluid. Ferrofluid is a colloidal suspension of ferro/ferrimagnetic particles in a non-magnetic carrier medium (Odenbach 2002; Rosensweig 1984). It is worth adding here that ferrofluid exhibits a superparamagnetic nature, i.e., on application of a magnetic field its magnetization is comparable to that of ferromagnetic materials, whereas on removal of the applied field the ferrofluid does not display any net hysteresis (Rosensweig 1984).
Ferrofluid has been successfully used in many engineering applications such as separation (Hejazian et al. 2015), heat transfer augmentation (Shyam et al. 2019), mixing (Kitenbergs et al. 2015; Zhu & Nguyen 2012), droplet generation (Tan et al. 2010), breakup (Bijarchi et al. 2021) and many more. Although the breakup of droplets in a T-junction has been widely explored, studies pertaining to ferrofluid droplet breakup under the modulation of a magnetic field are sparse. On closer scrutiny of the available literature, it is found that researchers have investigated the implication of a uniform magnetic field for the ferrofluid droplet splitting phenomena (Li et al. 2016; Ma et al. 2017; Wu et al. 2013, 2014). It has been shown both numerically and experimentally that under the modulation of a uniform magnetic field, the splitting phenomena are mostly symmetric. Also, from the reported analyses in this paradigm, it is apparent that the application of a uniform magnetic field brings sufficient control over the size of the sister droplets generated from the mother droplet. Although several underlying issues of the implication of a uniform magnetic field for the splitting of a ferrofluid droplet are well explored (see Refs (Li et al. 2016; Ma et al. 2017; Wu et al. 2013a, 2013b, 2014)), research endeavors with an emphasis on exploring the implication of the non-uniformity of the magnetic field distribution for the overall droplet breakup phenomena have been sparse. It may be mentioned here that a uniform magnetic field is a mathematical assumption, while in a realistic physical scenario a certain amount of non-uniformity is bound to exist in magnetic field driven fluidic applications. A few researchers, however, have explored the role of non-uniformity of the applied magnetic field on the overall size of the sister droplets generated in the process and discussed the asymmetric splitting phenomena, attributing it to the onset of the involved forcing gradient (Aboutalebi et al. 2018; Amiri Roodan et al. 2020; Bijarchi et al. 2021).
It is worth mentioning here that the literature exploring the effect of an inequality of the force field gradient on the underlying breakup dynamics in the purview of droplet train flow is limited. Due to the non-uniform distribution of magnetic flux density, the sister droplets moving downstream of the left/right branch of the T-junction after the breakup will possess different flow time scales. The imbalance of these flow time scales in the two branches is expected to affect the overall droplet splitting phenomena and may result in the generation of sister droplets of unequal sizes. The alteration of droplet splitting/break-up events by a non-uniform force field is expected to be even more interesting in the presence of droplet train flow, attributed primarily to the imbalance between the involved flow time scales. This aspect of the droplet splitting phenomena, albeit interesting from a fluid dynamics point of view and of considerable practical relevance in different applications, has not been studied in the literature to date.
In the present investigation, we present a novel way of controlling the droplet break-up phenomena in a T-junction divergence of an LOC device, under the modulation of a non-uniform magnetic field. Droplets are generated in a T-junction of the LOC device, following which they are split at another T-junction divergence located further downstream. A magnet is placed adjacent to the left branch of the T-junction divergence, thereby inducing an asymmetric magnetic flux distribution. We demonstrate that the asymmetric force field distribution creates uneven flow time scales in the left and right branches of the divergence (precisely, the T-junction divergence).
We show that by specifically tuning the balance between the various time scales acting on the droplet flow field, we can control the size of the sister droplets. Also, we numerically simulate the flow dynamics under the influence of a magnetic field, essentially for a qualitative understanding of the droplet break-up characteristics in the asymmetric force field ambience. In what follows, we divide this study into four sections. In the first section, we explore the droplet break-up phenomena in the presence of a non-uniform magnetic field; here we discuss the droplet splitting phenomena under the modulation of the external force field as obtained from both experimental investigations and numerical simulations. In the second section, we numerically explore the internal hydrodynamics of the droplet during its splitting under the modulation of a non-uniform magnetic forcing. In the subsequent section, we experimentally investigate the morphological evolution of the droplet splitting characteristics. In the final section of this article, we attempt to develop a physical understanding of the typical droplet break-up behavior by exploiting the various involved time scales.
EXPERIMENTAL METHODS
In the present study, we use a ferrofluid solution as the dispersed phase, while silicone oil is used as the continuous phase. We employ the co-precipitation method for the preparation of the ferrofluid solution. The prepared dispersed phase, i.e., the ferrofluid solution, is composed of DI water (de-ionized water) as the carrier phase, while iron-oxide nanoparticles form the suspended phase.
Interested readers are referred to our recent articles where the preparation of the ferrofluid solution has been discussed in detail (Shyam et al. 2020a, 2020b, 2020c, 2021). Figure 1 depicts the characterization of the prepared ferrofluid sample. The ferrofluid solution exhibits superparamagnetic characteristics, as can be observed from the M–H curve of Figure 1(a). We show in Figure 1(b)-(c) the variation of the zeta potential and the size of the suspended nanoparticles in the ferrofluid solution. Note that the ferrofluid solution has a zeta potential of around −53 mV, signifying an electrostatically stable solution (Xu 2002).
FIGURE 1. (Color online) (a) Plot depicts the magnetization curve of the prepared ferrofluid sample as measured by VSM (vibrating sample magnetometer). The prepared ferrofluid sample exhibits no hysteresis and is superparamagnetic in nature, (b) Plot shows the electrostatic (zeta) potential characteristics of the ferrofluid solution, (c) Plot illustrates the size distribution of the suspended magnetic nanoparticles in the ferrofluid sample. The average size of the magnetic nanoparticles was found to be 50 nm.
The density and viscosity of the ferrofluid solution are calculated to be around 1050 kg m⁻³ and 0.00106 Pa s, respectively, while the volume fraction of the iron-oxide nanoparticles in the ferrofluid solution is around 2%. As already mentioned, we use silicone oil (make: Sigma-Aldrich) as the continuous phase in the present study. Accordingly, the density and viscosity of the silicone oil are found to be around 930 kg m⁻³ and 0.3 Pa s. The interfacial tension between the silicone oil and the ferrofluid, as measured by a tensiometer (Make: Kyowa), is found to be around 0.012 N m⁻¹. Note that the iron-oxide nanoparticles are coated with a surfactant (lauric acid in the present case), essentially to avoid any agglomeration, which could otherwise occur due to interparticle interactions (Odenbach 2002; Rosensweig 1984; Shyam et al. 2019). The presence of surfactant in the ferrofluid solution lowers its static contact angle on a rigid substrate, as can be clearly observed from Figure 2(a).
The LOC-based device, consisting of a T-shaped fluidic channel, is fabricated using the soft lithography technique (Whitesides & Stroock 2001). The schematic representation of the microfluidic device is shown in Figure 2(b). The microfluidic passage has a square cross-section with a width of around 100 μm. The fabricated device has three sections: a droplet generation junction (T-junction), a straight microchannel, and a T-junction divergence (in which droplet splitting takes place). The continuous phase (silicone oil) and the dispersed phase (ferrofluid) are injected from the two inlets, leading to droplet formation at the T-junction (i.e., the droplet generation junction). The generated droplet, henceforth referred to as the mother droplet, then flows through the straight microchannel and further breaks down into smaller droplets (henceforth referred to as sister droplets) at the T-junction divergence, as can be seen from Figure 2(b).
FIGURE 2. (Color online) (a) Ferrofluid droplet has a contact angle (θ) of 44° and 60° on a glass and PDMS substrate, respectively, (b) Schematic representation of the working mechanism of the proposed microfluidic platform for controlled droplet splitting. Droplets generated at a T-junction break down at a T-junction divergence under the influence of a non-uniform magnetic field, leading to asymmetric splitting. The sequence of operation in the presence and absence of a magnetic field is schematically represented, (c) Graphical representation of the inverted microscope in which the experiments are carried out, (d) Schematic of the numerically simulated two-dimensional computational domain.
We use a Gaussmeter (Make: SES Instruments) for the measurement of the magnetic flux density. The experiments are conducted on an inverted microscope (Make: Leica), a schematic of which is shown in Figure 2(c). We employ high-speed imaging for recording the droplet splitting phenomena, capturing images with a resolution of 1920 × 1280 pixels at a frame rate of 1000 fps.
The recorded images are further post-processed in the Matlab® platform using an in-house developed code. Interested readers are referred to one of our previous works wherein the steps involved in the image processing have been discussed elaborately (Shyam et al. 2020b).
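The in-house Matlab code itself is not reproduced in this article. As a rough illustration of the kind of post-processing involved, the Python sketch below thresholds a single grayscale high-speed frame and reports the droplet neck width; the threshold level and the pixel-to-micrometre calibration are illustrative assumptions, not values taken from our processing routine.

```python
import numpy as np

PIXEL_SIZE_UM = 1.0  # assumed pixel-to-micrometre calibration; not stated in the text

def neck_width(frame, threshold=0.5):
    """Minimum column-wise extent of the dark (ferrofluid) phase, in micrometres.

    `frame` is a 2D grayscale array; the ferrofluid droplet appears dark against the
    brighter silicone-oil background, so pixels below `threshold` (after normalisation
    to [0, 1]) are counted as dispersed phase.
    """
    img = frame.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    mask = img < threshold                      # True where the droplet is
    widths = mask.sum(axis=0)                   # dark-pixel count in each image column
    widths = widths[widths > 0]                 # keep only columns that cut the droplet
    return widths.min() * PIXEL_SIZE_UM

# Synthetic usage example: a bright frame containing a dark dumbbell whose neck is 8 px thick.
frame = np.ones((100, 200))
frame[30:70, 40:90] = 0.0                       # left lobe
frame[30:70, 110:160] = 0.0                     # right lobe
frame[46:54, 90:110] = 0.0                      # 8-pixel-thick neck joining the lobes
print(neck_width(frame))                        # -> 8.0
```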
System description
We also investigate the droplet splitting phenomena numerically, essentially to understand several intricate physical aspects involved in the underlying phenomena.
Although the experimental flow dynamics pertain to a droplet train flow scenario, in the numerical simulations the dynamics of an isolated droplet in an immiscible liquid medium are explored. This particular exercise saves computational time significantly. Moreover, the prime intention of carrying out the numerical simulations is to explore the isolated droplet breakup dynamics in the presence of a non-uniform magnetic field; as such, the inferences can be extended, in a qualitative sense, to the droplet train scenario studied experimentally.
Phase-field formalism
We employ the diffuse interface-based phase-field method for simulating the flow dynamics of the immiscible two-phase flow system considered in this study. It may be mentioned here that the modeling framework of the phase-field method is obtained by the minimization of the total energy of the system and is thermodynamically consistent (Cahn & Hilliard 1958). The phase-field method has been successfully used by several researchers to study the interfacial dynamics of multi-fluid systems in the presence of an external force field (Gorthi et al. 2017; Mondal et al. 2013, 2015). The thermodynamics of a two-fluid flow system can be described by the Ginzburg-Landau free energy functional F(φ), which is expressed as (Jacqmin 2000)

F(φ) = ∫_∀ (3σ/(2√2 ε)) [ f(φ) + (ε²/2) |∇φ|² ] d∀,   (1)

where ∀ spans the whole fluid domain, and σ and ε are the interfacial tension and the interface thickness, respectively. The first term in Eq. (1) denotes the bulk free energy of the binary fluid system, while the second term signifies the interfacial free energy due to the presence of an interface separating the two fluids. The bulk free energy density f(φ) can be expressed as f(φ) = (φ² − 1)²/4. It may be mentioned here that the minima of f(φ) correspond to the two stable phases involved, i.e., the dispersed phase (φ = −1) and the continuous phase (φ = 1). The minimization of the free energy of the system F(φ), along with the mass conservation of the respective phases, leads to the well-known Cahn-Hilliard equation. Moreover, the minimization of the free energy of the system also leads to the addition of a volumetric force term in the Navier-Stokes equation, as will be discussed in section 2.2.4.
The spatio-temporal evolution of φ is governed by the Cahn-Hilliard equation (Badalassi et al. 2003; Cahn & Hilliard 1958, 1959),

∂φ/∂t + (u·∇)φ = ∇·(M_φ ∇G),   (2)

where M_φ and G denote the interfacial mobility factor and the chemical potential, respectively. M_φ determines the relaxation time of the interface and the time scale of the Cahn-Hilliard diffusion, while the chemical potential G is the variational derivative of the free energy functional with respect to the order parameter φ (Badalassi et al. 2003),

G = δF/δφ = (3σ/(2√2 ε)) [ f′(φ) − ε² ∇²φ ].   (3)

It may also be mentioned that in the phase-field framework, any generic property ζ may be expressed in terms of the order parameter φ as follows (Badalassi et al. 2003):

ζ = ζ₁ (1 − φ)/2 + ζ₂ (1 + φ)/2,   (4)

where ζ₁ and ζ₂ are the values of the property in the dispersed and continuous phases, respectively.
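As a concrete illustration of how the phase-field formalism of Eqs. (1)-(3) behaves, the minimal sketch below integrates a one-dimensional Cahn-Hilliard equation (no flow, periodic domain) with the double-well bulk energy f(φ) = (φ² − 1)²/4 used above. The grid size, mobility, interface thickness, and explicit Euler time step are illustrative choices made only for this sketch; they do not correspond to the solver settings used for the simulations reported here.

```python
import numpy as np

N, L = 256, 1.0                  # grid points and domain length (illustrative)
dx = L / N
eps, M_phi = 0.01, 1.0           # interface thickness and mobility (assumed values)
dt, nsteps = 1e-7, 200000        # small explicit step: the equation is 4th-order stiff

x = np.linspace(0.0, L, N, endpoint=False)
phi = 0.1 * np.cos(8 * np.pi * x)          # small perturbation of the mixed state

def lap(f):
    """Second-order periodic Laplacian."""
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

for _ in range(nsteps):
    # Chemical potential (up to the constant prefactor of Eq. (3)): f'(phi) - eps^2 Lap(phi)
    G = phi * (phi**2 - 1.0) - eps**2 * lap(phi)
    # Cahn-Hilliard update: d(phi)/dt = M_phi * Laplacian(G), cf. Eq. (2) with u = 0
    phi += dt * M_phi * lap(G)

# phi relaxes towards the two stable phases (+1 and -1), separated by
# diffuse interfaces of thickness ~ eps
print(phi.min(), phi.max())
```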
Modeling of Magnetic Field Distribution
We calculate the magnetic field acting on the flow domain by solving the magnetostatic Maxwell equations (Griffiths 2017):

∇ · B̄ = 0,   (5)
∇ × H̄ = 0,   (6)

where B̄ is the magnetic flux density and H̄ is the intensity of the magnetic field. The magnetic flux density B̄ is given by (Griffiths 2017)

B̄ = μ₀ (H̄ + M̄),   (7)

where μ₀ = 4π × 10⁻⁷ H/m is the permeability of vacuum and M̄ is the magnetization vector. Since the magnetic field is irrotational (i.e., ∇ × H̄ = 0), it can be expressed in terms of a scalar potential, H̄ = −∇ψ. Therefore, using the scalar potential, Eqs. (5) and (7) reduce to ∇ · [μ₀(−∇ψ + M̄)] = 0. In the presence of a magnetic field, the total (Kelvin) magnetic body force that acts on the fluid volume is given by (Strek 2008)

F̄_m = μ₀ (M̄ · ∇) H̄.   (8)
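For illustration, the short sketch below evaluates the Kelvin body force of Eq. (8) on a two-dimensional grid for a linearly magnetizable fluid (M̄ = χH̄), using the field of a two-dimensional dipole as a crude stand-in for the permanent magnet placed adjacent to the left branch. The dipole strength, its position, the susceptibility, and the grid extent are all illustrative assumptions.

```python
import numpy as np

mu0 = 4e-7 * np.pi           # permeability of vacuum [H/m]
chi = 1.0                    # assumed magnetic susceptibility of the ferrofluid

# 2D grid over a small region of the flow domain (metres; extent is illustrative)
x = np.linspace(-2e-3, 2e-3, 201)
y = np.linspace(-2e-3, 2e-3, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
dx, dy = x[1] - x[0], y[1] - y[0]

# Field of a 2D line dipole (oriented along x) placed to the left of the domain,
# standing in for the permanent magnet adjacent to the left branch.
m = 1e-3                                    # dipole strength (arbitrary units)
rx, ry = X + 3e-3, Y                        # vector from the dipole to the field point
r2 = rx**2 + ry**2
Hx = m * (2.0 * rx**2 - r2) / r2**2
Hy = m * (2.0 * rx * ry) / r2**2

# Kelvin force density F_m = mu0 * (M . grad) H with M = chi * H, cf. Eq. (8)
Mx, My = chi * Hx, chi * Hy
dHx_dx, dHx_dy = np.gradient(Hx, dx, dy)
dHy_dx, dHy_dy = np.gradient(Hy, dx, dy)
Fx = mu0 * (Mx * dHx_dx + My * dHx_dy)
Fy = mu0 * (Mx * dHy_dx + My * dHy_dy)

# The force magnitude decays steeply away from the magnet: this spatial
# non-uniformity is precisely what drives the asymmetric splitting.
Fmag = np.hypot(Fx, Fy)
print(Fmag.max() / Fmag.min())
```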
Coupling of phase-field and magnetohydrodynamics
We solve the continuity equation, the Cahn-Hilliard equation, and the Navier-Stokes equations to obtain the pressure and velocity fields of the two-liquid flow system, which are expressed as (DasGupta et al. 2014; Jacqmin 1999, 2000; Mondal et al. 2015)

∇ · u = 0,   (9)
ρ(φ) [ ∂u/∂t + (u·∇)u ] = −∇p + ∇·[ μ(φ) (∇u + ∇uᵀ) ] + F̄_st + F̄_m.   (10)

The momentum transport equation (Eq. (10)) couples the phase-field formalism with magnetohydrodynamics. Here, F̄_st = G∇φ is the phase-field dependent interfacial tension force, and F̄_m is the magnetic body force as given by Eq. (8). The boundary conditions considered in the flow domain for the simulations are as follows: a fully developed velocity profile is applied at the inlet; a non-viscous, pressure-constrained outflow condition is maintained at the outlet, in other words, the pressure at the outlet is fixed to zero, i.e., p = 0. For the magnetic field simulations, the magnetic insulation boundary condition, i.e., n̄ · B̄ = 0, is applied at the surrounding air domain.
Normalization of the governing equations
In the present section, the aforementioned governing equations are non-dimensionalized, where Q, μ_r, and p denote the flow rate, relative permeability, and pressure, respectively. Lengths are scaled by the channel width, velocities by the characteristic velocity of the continuous phase, time by their ratio, and pressure viscously; starred quantities are dimensionless. Using this set of non-dimensionalized variables, the governing equations, i.e., the Cahn-Hilliard, continuity, and Navier-Stokes equations, reduce to

∂φ*/∂t* + (u*·∇*)φ* = (1/Pe) ∇*²G*,
∇* · u* = 0,
Re [ ∂u*/∂t* + (u*·∇*)u* ] = −∇*p* + ∇*·[ μ*(∇*u* + ∇*u*ᵀ) ] + (1/Ca) F̄*_st + (Bo_m/Ca) F̄*_m.
Thus the present problem of two-liquid flow systems is characterized by the following set of dimensionless parameters. The Reynolds number, Re, is the ratio of the inertial force to the viscous force. The Cahn number, Cn, is the ratio of the interface thickness to the characteristic length. The capillary number, Ca, is the ratio of the viscous force to the interfacial force. The phase-field Peclet number, Pe, is the ratio of the advection of the order parameter φ to its diffusion. The magnetic Bond number, Bo_m, is the ratio of the magnetic force to the surface tension force (χ is the magnetic susceptibility).
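A quick numerical evaluation of these groups, using the measured fluid properties of Section 2.1 together with an assumed characteristic velocity, channel width, and field intensity (the latter three are illustrative placeholders, not values quoted in this work), might read as follows; the expression used for Bo_m is one common definition, and the Peclet number is omitted because it additionally requires the mobility M_φ.

```python
import numpy as np

# Measured properties (Section 2.1)
rho_c, mu_c = 930.0, 0.3          # silicone oil density [kg/m^3] and viscosity [Pa s]
sigma = 0.012                     # interfacial tension [N/m]
chi = 1.0                         # assumed susceptibility of the ferrofluid

# Assumed characteristic scales (illustrative only)
w = 100e-6                        # channel width [m]
U = 5e-3                          # characteristic velocity [m/s]
H0 = 2e4                          # characteristic magnetic field intensity [A/m]
eps = 0.02 * w                    # interface thickness used in the phase-field model
mu0 = 4e-7 * np.pi

Re = rho_c * U * w / mu_c                  # inertial / viscous
Ca = mu_c * U / sigma                      # viscous / interfacial
Cn = eps / w                               # interface thickness / characteristic length
Bo_m = mu0 * chi * H0**2 * w / sigma       # magnetic / interfacial (one common definition)

print(f"Re   = {Re:.2e}")                  # << 1: inertia is negligible
print(f"Ca   = {Ca:.2e}")
print(f"Cn   = {Cn:.2e}")
print(f"Bo_m = {Bo_m:.2f}")
```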
It is worth mentioning here that for the present analysis, the orders of magnitude of these non-dimensional numbers are kept fixed, with Re ≪ 1 (so that the viscous and interfacial forces dominate over inertia, consistent with the flow regime discussed in Section 3) and Cn ~ O(10⁻²).
Grid independence study and Validation:
Here, we show the grid independence test carried out to ensure the correctness of the numerical results presented in this analysis. In addition, to ensure the sharp interface limit, we also carry out a Cahn number independence test. It may be mentioned here that in our study we have chosen the grid size near the interface to be equal to the Cahn number (Cn). Thus, in the context of the present analysis, grid independence simultaneously indicates Cahn number independence and vice versa. Figure 3 (inset) shows the temporal evolution of the non-dimensionalized width (W*) of the droplet as it breaks up in the T-junction divergence of the LOC device, obtained for different values of the grid resolution. It can be seen clearly from Figure 3(a) that the numerical results become independent of the mesh size below Cn < 0.02. Although a better resolution is obtained for Cn ~ 0.01, considering the involved computational cost vis-a-vis the gained accuracy, we choose Cn ~ 0.02 for the present study. Note that this particular value of the Cahn number, Cn ~ O(10⁻²), ensures the sharp interface limit as well (Yue et al. 2010). We also compare in Figure 3(b) our numerical results quantitatively with those obtained from the analytical relation of Bretherton (Bretherton 1961). For low Reynolds number flow, i.e., Re ≪ 1, the velocity of a droplet moving in a slender tube, with a thin film separating the droplet and the wall, satisfies (Bretherton 1961)

(U_d − U_m)/U_d = 1.29 (3Ca)^(2/3),   (17)

where U_d is the droplet velocity and U_m is the mean velocity of the carrier flow. We show in the inset of Figure 3 an excellent agreement between our numerical results and those obtained from Eq. (17). The close match between the results seen in Figure 3 vouches for the correctness of the numerical modelling framework developed in this study.
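The comparison against Bretherton's prediction can be reproduced in a few lines. The sketch below evaluates Eq. (17) for a range of capillary numbers; the prefactor 1.29 is the standard value from the thin-film analysis, and the capillary numbers chosen here are illustrative rather than the specific values used for Figure 3(b).

```python
def bretherton_droplet_speed(U_mean, Ca):
    """Droplet speed in a slender channel from Bretherton's thin-film relation.

    Uses (U_d - U_mean) / U_d = 1.29 * (3 * Ca)**(2/3), valid for Ca << 1 and a
    thin wetting film between the droplet and the wall, cf. Eq. (17).
    """
    excess = 1.29 * (3.0 * Ca) ** (2.0 / 3.0)
    return U_mean / (1.0 - excess)

U_mean = 1.0                                   # mean carrier-phase velocity (arbitrary units)
for Ca in (1e-4, 1e-3, 5e-3):
    Ud = bretherton_droplet_speed(U_mean, Ca)
    print(f"Ca = {Ca:.0e}:  U_d / U_mean = {Ud / U_mean:.3f}")
```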
RESULTS AND DISCUSSIONS
In this section, we explore the droplet breakup phenomena and their consequences for the alterations in the various hydrodynamic parameters of the flow field. As already mentioned, the present investigation is carried out in flow regimes where the viscous and interfacial forces are dominant in comparison to the inertial forces. The study is divided into four parts. In the first part, we develop a physical understanding of the ferrofluid droplet splitting mechanism in the presence of a non-uniform force field. To this end, we make use of high-speed imaging modalities and perform numerical simulations to explore the dynamical behavior of the droplet under the modulation of the spatially varying force field. In the second section, we numerically explore the intricate flow physics involved in the droplet splitting dynamics.
Subsequently, in the third section, we experimentally explore the morphological evolution of the mother droplet in the T-junction divergence of the LOC-device under the modulation of the nonuniform magnetic forcing. Finally, in the last part of the study, we make an attempt to develop a physical reasoning behind the characteristic behavior of the sister droplets from the perspective of the involved time scales.
Droplet Break up: Qualitative Dynamics
It may be mentioned here that depending on the capillary number (Ca) and the initial slug length (l₀), a droplet may split into two sister droplets in the T-junction divergence following either permanent obstruction or partial obstruction (Jullien et al. 2009; Leshansky & Pismen 2009). In addition, based on the initial slug length and capillary number, the droplet may not split at all (Chen & Deng 2017; Jullien et al. 2009). In the permanent obstruction case, the dispersed phase totally engulfs the T-junction divergence and blocks the flow of the continuous phase liquid. On the contrary, in the partial obstruction case a tunnel develops which allows the motion of the continuous fluid over the dispersed phase; the droplet therefore blocks the T-junction divergence of the LOC device only incompletely. Henceforth, the word 'droplet' will refer to the mother droplet.
It is worth mentioning here that for the present study, the mother droplet splits into two sister droplets following the permanent obstruction of the T-junction divergence, as can be observed from the images depicted in Figure 4. We show in Figure 4 the spatio-temporal evolution of the droplet splitting phenomena in the absence of an external field. Note that the droplet breaks up through three typical stages: squeezing, transition, and pinch-off (Ma et al. 2017).
Initially, as the droplet (i.e., dispersed phase) enters the T-junction divergence, it tries to block the whole junction, as can be seen from Figure 4(a). As the droplet occupies the whole junction (t* = 0⁺), the upstream continuous phase flow gives rise to the formation of a depression in the droplet, which further leads to a change in the curvature of the neck (the circled region at t* = 0.5 in Figure 4(a)). The depression is a resultant effect of the squeezing pressure acting on the neck of the droplet (of the dispersed phase). The squeezing pressure is the pressure that develops in the upstream continuous phase due to the permanent blockage of the T-junction divergence by the dispersed phase. Since no tunnel develops in the present case (i.e., the permanent obstruction case), the squeezing force becomes very large in comparison to the viscous force, thereby dictating the overall dynamics in the squeezing stage. The squeezing stage is followed by the transition stage. In the transition stage, too, no tunnel formation is seen, and thus the splitting phenomena are dictated by the balance between the interfacial tension force and the squeezing force. Although the upstream pressure force (i.e., the squeezing force) is dominant in both the squeezing and transition stages, the temporal evolution of the rear interface of the droplet is found to be different in the two stages, as will be discussed in detail in the forthcoming sections. The transition zone is followed by the pinch-off zone, in which the droplet gets fully detached, leading to the formation of two sister droplets, as can be observed from Figure 4(a). In the succeeding discussions, we will demarcate all of these splitting zones appropriately. To attain a detailed visualization, alongside arriving at an in-depth understanding of the experimental observations, we perform numerical simulations of the droplet splitting phenomena.
The details of the numerical methodology adopted in this analysis are already given in section 2.2. As mentioned before, although the present experimental study deals with a droplet train, the splitting dynamics of an isolated droplet are investigated numerically. This particular exercise gives us qualitative insights into the droplet breakup dynamics in the absence/presence of a non-uniform magnetic field. We show in Figure 4(b) the numerically simulated droplet splitting phenomena in the absence of a magnetic field. It is worth adding here that the similarity of the underlying droplet splitting dynamics between the experimental observations and the numerical results, observed from a qualitative perspective in Figure 4(a) vis-à-vis Figure 4(b), justifies the credibility of our experimental methods.
We show in Figure 5 the spatio-temporal evolution of the droplet splitting phenomena in the presence of a non-uniform magnetic field. The typical splitting stages, observed in Figure 4 as well, can be clearly seen for the present case (see Figure 5). The behavior of the droplet splitting in the presence of a non-uniform magnetic field is similar to the case where no external force acts on the droplet flow field; the only difference is the asymmetry produced during the droplet breakup process. Note that the asymmetry developed in the droplet splitting process is precisely due to the non-uniformity in the force field gradient induced by the applied field. Readers are referred to the supplementary materials for a detailed distribution of the magnetic flux density in the fluid flow domain. We can clearly observe from Figure 5(a) that as the ferrofluid droplet enters the T-junction divergence, the upstream pressure forces the advancement of the dumbbell-shaped bulges into the left and right branches, respectively. Note that in the succeeding discussion, we refer to these left/right dumbbell-shaped bulges of the dispersed phase as the left/right moving bulge.
However, due to the non-uniform distribution of the applied magnetic flux density, the advancement (of the dumbbell-shaped bulge) becomes more aligned towards the left branch (asymmetry is produced in the break-up phenomena) in comparison to the right branch of the T-junction divergence (refer to t* = 0.5 and t* = 0.75 of Figure 5(a)). Particularly because of this reason, we observe in Figure 5(a) the presence of unevenly sized sister droplets being produced in the process. As such, the larger sister droplet moves into the left branch, while the smaller sister droplet moves into the right branch, as can be clearly observed from Figure 5(a). We show the numerically simulated droplet splitting phenomena in Figure 5(b). The asymmetric behavior of the droplet breakup can easily be observed from Figure 5(b) as well.
Droplet Breakup Phenomena: Numerical Perspectives
As seen from the preceding discussion, the numerical results agree well with the experimental observations. This provides us the flexibility to delve deeper into the droplet splitting dynamics through numerical computations. We show in Figure 6 the velocity field inside the droplet during the splitting event, in the absence (Figure 6(a)) and presence (Figure 6(b)) of the non-uniform magnetic field. The locations of the velocity maxima and minima inside the droplet are altered in the presence of the field (Figure 6(b)); this alteration is due to the presence of the non-uniform magnetic flux density. The presence of a high force field gradient in the left branch ensures that the maximum (of velocity) is always aligned towards the left branch. Consequently, due to the low magnetic flux density, the minimum (of velocity) is aligned towards the right branch, as can be clearly observed from Figure 6(b). In particular, a high force field gradient acting in the left branch induces more ferrofluid mass to flow towards it, leading to the generation of the larger sister droplet moving on the left side (refer to Figure 6(b)). Also, the location of the stagnation point is altered due to the involved asymmetric stretching of the droplet (refer to point E of Figure 6(b)). Quite notably, the influence of the magnetic field leads to the migration of the localized velocity maxima and minima into the left bulge (of the droplet), as can be seen at t* = 0.5 and 0.75, respectively, from Figure 6(b). We will show in the succeeding sub-sections that this augmented velocity in the bulge (moving in the left branch) can be significantly maneuvered to control the size of the generated (sister) droplet, particularly for the cases involving a droplet train.
FIGURE 7. (Color online) Evolution of the Laplace pressure ΔP_L (= P_d − P_c) across the rear interface of the droplet in the absence of a magnetic field. Here P_d is the pressure inside the droplet (dispersed phase), P_c is the pressure inside the upstream continuous phase, and φ is the order parameter. P̄ = P*/P*_in, where P* is the instantaneous non-dimensionalized pressure and P*_in is the non-dimensionalized pressure at the inlet of the channel. A negative value indicates that the surface tension force is oriented upstream. The blue colour shaded area indicates the branched channel area. The insets indicate the respective spatio-temporal location of the droplet.
As already discussed in section 3.1, for a droplet splitting into sister droplets following permanent obstruction, the balance between the upstream pressure force, the magnetic force, and the interfacial tension force dictates the overall splitting/break-up phenomena. Accordingly, in order to have a comprehensive understanding of the competition between these forces, we show in Figures 7-8 the detailed evolution of the pressure distribution and the Laplace pressure drop across the rear droplet interface during the break-up phenomenon with permanent obstruction. The discussion pertaining to this aspect follows the results obtained from the numerical simulations performed in this analysis. It may be mentioned here that due to the upstream pressure, the curvature of the rear interface of the droplet undergoes a temporal change from a convex to a concave shape; the corresponding evolution is represented in Figures 7-8.
FIGURE 8. (Color online) Evolution of the Laplace pressure ΔP_L (= P_d − P_c) across the rear interface of the droplet in the presence of the non-uniform magnetic field. Here P_d is the pressure inside the droplet (dispersed phase), P_c is the pressure inside the upstream continuous phase, and φ is the order parameter. P̄ = P*/P*_in, where P* is the instantaneous non-dimensionalized pressure and P*_in is the non-dimensionalized pressure at the inlet of the channel. A negative value indicates that the surface tension force is oriented upstream. The blue colour shaded area indicates the branched channel area.
The insets indicate the respective spatio-temporal location of the droplet. It may be mentioned here that in the absence of an external force field, when the droplet enters the T-junction, its motion becomes restricted due to the presence of the wall of the junction. As a consequence of this restricted motion, the droplet deforms symmetrically. At this juncture, the rear interface of the droplet exhibits a convex profile, as can be observed from the inset of Figure 7(a). Due to this convex profile, the capillary pressure ΔP_L (= P_d − P_c) ≫ 0 takes a positive value (see Figure 7(a)). With the progression of time, the droplet leaves the main channel fully and occupies the whole junction, and at this stage the upstream pressure forces the rear interface of the droplet to attain a flat profile (t* = 0 of Figure 6(a)). It is due to this flat profile that the capillary pressure assumes an almost negligible value, i.e., ΔP_L (= P_d − P_c) ≈ 0, as can be observed from Figure 7(b). The upstream pressure further forces the neck of the droplet to attain a concave profile, thereby ensuring that the capillary pressure attains a negative value, i.e., ΔP_L (= P_d − P_c) < 0, as can be observed from Figure 7(c). Note that the dynamical evolution of the pressure remains qualitatively the same even in the presence of a non-uniform magnetic field, i.e., the rear interface of the droplet evolves from the convex to the concave shape, as observed from Figure 8(a)-(c). Up to this section, we have correlated the insight gained from the numerical simulations directly to the experimental scenarios. Following these inferences, which will be used to support the experimental observations, we explore the dynamics of the droplet break-up events, focusing on the experimental results, in the succeeding sections.
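A back-of-the-envelope illustration of this sign change: with the interfacial tension σ = 0.012 N/m of Section 2.1 and the 100 μm channel, the rear interface contributes a capillary pressure of order σκ, which flips sign as its curvature κ goes from convex through flat to concave. The curvature magnitudes used below are illustrative, not measured values.

```python
sigma = 0.012      # interfacial tension [N/m], from Section 2.1
w = 100e-6         # channel width [m]

# Illustrative rear-interface curvatures (1/m): convex bulge, flat profile, concave neck
for label, kappa in [("convex", +2.0 / (w / 2.0)),
                     ("flat", 0.0),
                     ("concave", -2.0 / (w / 2.0))]:
    dP_L = sigma * kappa          # Laplace pressure jump P_d - P_c across the rear interface
    print(f"{label:8s}: dP_L = {dP_L:+.0f} Pa")
```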
Evolution of droplet width
We have seen from the previous discussion that on application of a non-uniform magnetic field, the ferrofluid droplet tends to get stretched in the direction of the applied magnetic field. We have also observed, from both the experimental observations and the numerical simulations, that the uneven stretching of the interface results in an asymmetric splitting of the droplet. In Figure 9, we show the variation of the non-dimensional neck thickness (W* = W/w, with W the instantaneous neck width and w the channel width) of the droplet (dispersed phase) as it breaks up, both in the presence and in the absence of a magnetic field. It may be mentioned here that the underlying event of droplet splitting is mainly governed by the intricate competition among the interfacial force, the viscous force, and the magnetic force. Here, we introduce the magnetic Bond number (Bo_m), which represents the relative strength of the magnetic force with respect to the surface tension force. Time zero indicates the moment at which the dispersed phase entirely penetrates into the T-junction divergence; as such, the rear interface of the dispersed phase is almost flat at t = 0. We demarcate in Figure 9 the various splitting regimes encountered by the droplet, i.e., the squeezing regime, the transition regime, and the pinch-off regime. The demarcation of the various regimes is identified from the characteristic slopes exhibited by the deforming ferrofluid droplet in the T-junction divergence, as can be observed from Figure 9. It may be mentioned here that the evolution of the neck thickness of the droplet is such that, in the initial squeezing regime, a linear variation is observed. This linear variation of W* is attributed to the typical roles played by the two responsible forces, i.e., the interfacial force and the upstream pressure force, in this regime. Due to the limited deformation of the rear interface of the droplet in the squeezing regime, the role played by the interfacial force in this regime is minuscule. As a result, the rear interface of the droplet moves primarily due to the squeezing force of the upstream flow, with little to no role played by the surface tension force. It is because of this force balance that a linear relationship between W* and t can be observed during the squeezing regime, as seen in Figure 9. This is followed by the transition regime, in which the interfacial tension force resists the deformation of the rear interface. This resistance leads to a delay in the overall deformation of the rear interface of the droplet. Consequently, we observe an exponential relationship between W* and t, as witnessed in Figure 9 (see regime B). The transition regime is followed by the pinch-off stage, in which the thinning rate gets significantly amplified with time.
The top inset of Figure 9 shows the variation of the thinning rate, 1 − W* vs t*, for the case when the magnetic field is applied. As already mentioned, in the squeezing regime we observe 1 − W* ~ t*, essentially due to the minimal involvement of the interfacial tension force. For the transition regime, we observe 1 − W* ~ t*^(3/7), where the 3/7 scaling exponent agrees well with the theoretical solution proposed by Leshansky and Pismen (Leshansky & Pismen 2009). Therefore, it can be argued that the characteristic behavior of the droplet splitting event, as observed in the regime of permanent obstruction, is independent of whether a magnetic field is applied or not. However, it can be clearly seen from Figure 9 that the magnetic field does influence the overall lifetime of the droplet break-up phenomena. This further implies that by specifically tuning the force field, we can effectively control the size of the droplets. In the forthcoming sections, we discuss comprehensively the effective ways in which the size of the sister droplets can be controlled.
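The 3/7 exponent in the transition regime can be verified with a simple straight-line fit in logarithmic coordinates. The sketch below does this on synthetic data standing in for the measured 1 − W* time series; it is meant only to show the fitting procedure, not to reproduce the experimental curve.

```python
import numpy as np

# Synthetic stand-in for the transition-regime thinning data: 1 - W* ~ t*^(3/7) plus noise
rng = np.random.default_rng(0)
t_star = np.linspace(0.2, 0.8, 40)
one_minus_W = 0.6 * t_star ** (3.0 / 7.0) * (1.0 + 0.02 * rng.standard_normal(t_star.size))

# The power-law exponent is the slope of log(1 - W*) versus log(t*)
slope, _ = np.polyfit(np.log(t_star), np.log(one_minus_W), 1)
print(f"fitted exponent = {slope:.3f}  (Leshansky & Pismen predict 3/7 = {3/7:.3f})")
```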
Effect of Magnetic Flux Density
We show in Figure 10 the variation of the non-dimensionalized width (W*) for various magnetic flux densities (precisely, for various Bo_m). It can be clearly observed from Figure 10 that the characteristic variation of the droplet width is the same irrespective of the applied magnetic field strength. Also, Figure 10 demonstrates that the time required for splitting of the ferrofluid droplet into the sister droplets depends on Bo_m. As such, it can be observed from Figure 10 that the droplet splitting time (t_s) varies in the following sequence: t_s(Bo_m = 0) < t_s(Bo_m = 7.5) < t_s(Bo_m = 6.1) < t_s(Bo_m = 15.1). In the next sections, we explore this particular aspect of the droplet splitting time, as modulated by the magnetic flux density, in greater detail.
The top right inset of Figure 10 shows the variation of the non-dimensionalized length (l*_L) of the sister droplet flowing in the left branch. It is important to mention here that the magnet is placed adjacent to the left branch of the T-junction divergence. Therefore, to explore the role of the magnetic field in the droplet splitting and, subsequently, in the change in the length of the sister droplets, we focus our attention on l*_L. Note that l*_L > 0.5 ensures that the length of the sister droplet moving into the left branch (l*_L) is larger than that of the sister droplet moving into the right branch (l*_R). In other words, it can also be said that the application of the non-uniform magnetic field ensures that an increased fluid volume moves into the left branch compared to the volume of fluid moving into the right branch, precisely due to the existing high field gradient (in the left branch). We can clearly observe from Figure 10 that this is indeed the case in the presence of the applied field.
Effect of Flow Ratio (Q_r)
In this subsection, we explore the effect of the dispersed phase flow rate on the droplet splitting phenomena. We have mentioned that in the present study the flow rate of the dispersed phase is changed while that of the continuous phase is kept constant. This particular exercise is carried out essentially to vary the length (equivalently, the volume) of the generated droplet while keeping the continuous phase flow velocity constant. We normalize the flow rates as the flow ratio, Q_r = Q_c/Q_d; note that the lower the value of Q_r, the higher is the slug length of the dispersed phase (i.e., the droplet). We show in Figure 11 the corresponding droplet splitting characteristics for the different flow ratios considered, and in Figure 12 the length of the sister droplet (l*_L) moving in the left branch after the break-up phenomena. Needless to mention, the influence of the non-uniform magnetic field is significant in the left branch. As can be clearly observed from Figure 12, there exists a threshold magnetic field strength (precisely Bo_m = 7.5) and flow ratio (Q_r) at which l*_L is maximum. As can also be observed from Figure 12, any increase in the magnetic field strength beyond the threshold value decreases the length of the sister droplet (l*_L) migrating in the left branch. The corresponding snapshots of the events pertaining to sister droplet generation for the various cases under investigation can be observed in the inset of Figure 12. The consequence of this particular insight can be of significant importance, since by tuning the strength of the magnetic field and the flow ratio, we can ensure that the desired volume of the slug moves into the respective branches of the T-junction divergence (precisely, the left/right branch). In the succeeding section, we explore the physical reason behind this typical behaviour of the sister droplet train in a confined microfluidic passage in the presence of a non-uniform magnetic field.
Mechanism of Splitting: A Time Scale Perspective
The results up to this point have shown that there exists a threshold magnetic field strength (Bo_m) and flow ratio (Q_r) for which the size of the generated sister droplet in the left branch becomes maximum. In this section, we attempt to unearth the reason behind this typical splitting behavior of the droplets in the presence of a non-uniform magnetic field. As already mentioned, in the present work splitting takes place by virtue of permanent obstruction. Therefore, the simultaneous motion of the portions of the droplet (being split) moving in the left branch and the right branch dictates the overall splitting process. To develop an overall understanding of the splitting phenomena, we identify two parameters, namely the velocities of the bulges advancing towards the left (u_L) and the right (u_R) branches of the divergence. It is found that, due to the non-uniformity in the distribution of the magnetic flux density, there exists a significant difference between u_L and u_R. As a consequence, the time scales of the flow in the left branch (τ_L = l_c/u_L) and in the right branch (τ_R = l_c/u_R) of the microfluidic channel become different. The flow time scale refers to the time required by the sister droplet to travel the characteristic length (l_c) in the respective branch (left/right) of the divergence, i.e., the T-junction divergence. It may be reiterated that the dispersed phase (precisely, the ferrofluid droplet), on impacting the T-junction divergence (i.e., the splitting junction), blocks the whole junction and stretches itself into the left and right branches. During the stretching phase, the dumbbell-shaped bulges of the droplet (being split) move in the left and right branches independently. This migration event eventually culminates in the breaking of the droplet into two sister droplets. However, in the presence of a non-uniform magnetic field, the time scales of motion of the respective sister droplets are different. The asymmetry between the two time scales, τ_L and τ_R, leads to the formation of sister droplets of unequal sizes. It is understandable that, due to the presence of the high force field gradient in the left branch, τ_L < τ_R, indicating that the time required by the dispersed phase (i.e., the sister droplet) to move a characteristic length (l_c) is larger in the right branch than in the left branch. Moreover, we found that the flow time scale (τ_L) in the left branch depends on the simultaneous effect of the magnetic flux density and the flow ratio (Q_r). In Figure 13, we show the variation of τ_L for the different cases under consideration. It can be clearly observed from Figure 13 that the migration time scale of the sister droplet moving in the left branch is minimum for Bo_m = 7.5 for all the cases under consideration. As a consequence, we have observed that l*_L is maximum for Bo_m = 7.5 (refer to Figures 11-12 for details). Although, as already discussed, it is comprehensible that the magnetic field gradient promotes the asymmetric splitting of the ferrofluid droplets, it cannot be ignored that the flow of the ferrofluid droplet train in the left branch also faces resistance due to the non-uniform distribution of the magnetic flux density (since the magnet is placed adjacent to the left branch). It is worth mentioning here that it is because of this resistance that the droplets moving in the left branch and the right branch have different time scales, as aptly illustrated in Figure 14(a).
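Under this permanent-obstruction picture, both bulges are fed for the same splitting time, so the fraction of the mother droplet ending up in the left branch can be estimated from the two bulge velocities alone. The numbers below are illustrative, not measured values; the estimate merely shows how a velocity (and hence time-scale) asymmetry translates into unequal sister-droplet sizes.

```python
# Illustrative bulge velocities (arbitrary units); u_L > u_R because the magnet sits
# adjacent to the left branch and pulls the ferrofluid preferentially that way.
u_L, u_R = 1.4, 1.0
l_c = 1.0                       # characteristic length travelled in each branch

tau_L = l_c / u_L               # flow time scale in the left branch
tau_R = l_c / u_R               # flow time scale in the right branch

# During the common splitting time both bulges are fed at rates proportional to
# their velocities, so the left sister droplet takes roughly the fraction:
frac_left = u_L / (u_L + u_R)

print(f"tau_L = {tau_L:.2f}, tau_R = {tau_R:.2f}   (tau_L < tau_R)")
print(f"estimated left-branch volume fraction ~ {frac_left:.2f}")
```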
With an initial increase in the magnetic Bond number (Bo_m), the size of the ferrofluid droplet moving in the left branch (l*_L) increases. With a further increase in Bo_m beyond the critical value, l*_L decreases. This decrease in l*_L is due to the substantial increase in the restriction to the sister droplet's motion in the left branch, as can be observed from Figure 14(a). This argument gets further justified since we observe an increase in τ_L beyond the critical Bond number (i.e., Bo_m,cr = 7.5), as depicted in Figure 13. As a consequence of this resistance, the flow rate of the continuous phase in this particular branch decreases. Primarily due to this reason, we observe a decrease in the size of the sister droplet (moving in the left branch) beyond the critical Bond number (Bo_m,cr).
A similar observation can be made with a change in the flow ratio (Q_r). Note that a low flow ratio (Q_r) corresponds to a high initial slug length (l₀) of the mother droplet and vice versa. As discussed previously, an increase in the mother droplet length ensures a simultaneously high resistance to the flow (in the left branch), thereby further decreasing l*_L, as depicted in Figure 10(b). On the other hand, with a reduction in the initial droplet length (l₀), l*_L increases. However, a further reduction in l₀ (i.e., increasing the flow ratio, Q_r) beyond the critical value (l₀,cr) limits the amount of magnetization realized by the droplet being split. This limited realization of the magnetic force by the ferrofluid droplet restricts the asymmetry involved in the droplet splitting phenomena and leads to a further decrease in l*_L. This phenomenon can be clearly observed from the graphical representation in Figure 14(b).
CONCLUSION
In summary, we have systematically investigated the ferrofluid droplet breakup dynamics in a T-junction divergence of an LOC device in the presence of a non-uniform magnetic field. The study is limited to the "breakup with permanent obstruction" regime. First, we methodically explored the droplet breakup behavior under the modulation of a non-uniform magnetic field. With the help of numerical simulations, we investigated the internal hydrodynamics of the droplet under the influence of a non-uniform forcing. We found the presence of a "hump-like structure" developed inside the left-moving bulge, which triggers the onset of augmented convection in its flow field and, coupled with the various involved time scales, leads to the asymmetric splitting of the droplet.
ACKNOWLEDGEMENT
SS and PKM gratefully acknowledge the financial grant obtained from NEWGEN IEDC. PKM also gratefully acknowledges the financial support provided by the DSIR, Govt. of India, through project no. DSIR/PRISM/170/2020-21. The authors also acknowledge the CIF, IIT Guwahati for the support in characterization of the ferrofluid. | 2021-09-10T01:16:26.826Z | 2021-09-09T00:00:00.000 | {
"year": 2021,
"sha1": "c64f213182d21bd9142a8535d700e99c40e34a64",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c64f213182d21bd9142a8535d700e99c40e34a64",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
16509491 | pes2o/s2orc | v3-fos-license | Alleviating alpha quenching by solar wind and meridional flow
We study the ability of magnetic helicity expulsion to alleviate catastrophic $\alpha$-quenching in mean field dynamos in two--dimensional spherical wedge domains. Motivated by the physical state of the outer regions of the Sun, we consider $\alpha^2\Omega$ mean field models with a dynamical $\alpha$ quenching. We include two mechanisms which have the potential to facilitate helicity expulsion, namely advection by a mean flow ("solar wind") and meridional circulation. We find that a wind alone can prevent catastrophic quenching, with the field saturating at finite amplitude. In certain parameter ranges, the presence of a large-scale meridional circulation can reinforce this alleviation. However, the saturated field strengths are typically below the equipartition field strength. We discuss possible mechanisms that might increase the saturated field.
Introduction
Mean field dynamo models have provided an important framework for studying the generation of large-scale astrophysical magnetic fields and their spatio-temporal dynamics. However, these widely used models have been presented with a serious challenge - namely the so called catastrophic α quenching (Gruzinov & Diamond, 1994). In the mean field (MF) context this effect, which is a consequence of the conservation of magnetic helicity (Krause & Rädler, 1980; Zeldovich, Ruzmaikin & Sokoloff, 1983), manifests itself as the decrease of the α-effect with increasing magnetic Reynolds number Re_M (Vainshtein & Cattaneo, 1992; Cattaneo & Hughes, 1996) at finite field strength. In models without magnetic helicity fluxes, the quenching of α can become severe, with α decreasing as Re_M^{-1} - truly catastrophic for dynamo action in the Sun, stars and galaxies, where the Reynolds numbers are all very large (> 10^9). This catastrophic quenching is captured by mean-field models which use dynamical alpha quenching, such as that considered by Blackman & Brandenburg (2002). This catastrophic quenching is independent of the details of the dynamo mechanism and is a direct effect of the conservation of magnetic helicity; see, e.g., Brandenburg & Käpylä (2007), who have demonstrated catastrophic quenching for a nonlocal alpha effect, or Chatterjee et al. (2011), who have demonstrated the occurrence of catastrophic quenching in distributed dynamos. It has been suggested that the quenching may be alleviated by the expulsion of magnetic helicity through open boundaries (Blackman & Field, 2000; Kleeorin et al., 2000). At least three different physical mechanisms may help in the expulsion of small scale magnetic helicity: (a) large scale shear (Vishniac & Cho, 2001; Subramanian & Brandenburg, 2004; Brandenburg & Sandin, 2004; Moss & Sokoloff, 2011); (b) turbulent diffusion of magnetic helicity (Mitra et al., 2010a); (c) non-zero mean flow out from a boundary of the domain, e.g. a wind. A number of recent studies have demonstrated the possibility of this alleviation of quenching for solar (Chatterjee et al., 2011) and galactic dynamos (e.g. Shukurov et al., 2006).
In this paper we study the effects of a number of mechanisms which may facilitate the expulsion of magnetic helicity from the dynamo region. Initially we consider the effects of advection by a mean flow in a similar manner to Shukurov et al. (2006); see also the recent study in a one dimensional model by Brandenburg et al. (2009). We envisage that in the Sun the wind could be loaded with magnetic helicity through coronal mass ejections (Blackman & Brandenburg, 2003). Another potentially important mechanism is meridional circulation. The presence of such a circulation in the Sun is supported by a number of observations which have found evidence for a near-surface poleward flow of 10-20 m s^{-1}. Even though the corresponding compensating equatorward flow has not yet been detected, it is assumed it must exist because of mass conservation. Substantial effort has recently gone into the construction of flux transport dynamo models, which differ from the usual αΩ dynamos by including an additional advective transport of magnetic flux by meridional circulation (see e.g. Dikpati & Gilman, 2009, for a recent summary). If magnetic flux is advected by meridional circulation, it can be expected that such a circulation will also transport magnetic helicity to the surface layers, which might thus facilitate its subsequent expulsion by the wind. We therefore study the effects of meridional circulation on the quenching.
The structure of the paper is as follows. In Section 2 we introduce our model and its various ingredients. Section 3 contains our results, and we give a short summary here. First we consider our model with an imposed wind but no meridional circulation. We show that a strong enough wind that penetrates deeply enough into the convection zone can indeed alleviate quenching. We then make a systematic study of the alleviation of quenching as a function of the two parameters specifying the wind, namely the maximum velocity and the depth down to which the wind penetrates the convection zone. Next we select a particular set of these two parameters such that for large Re M there is no alleviation of quenching. We then introduce a meridional circulation and show that a combination of a wind and circulation is able to limit the quenching in cases where the wind alone cannot. We further study the effect of the characteristic velocity of meridional circulation on quenching. Our conclusions are presented in Section 4.
The model
We study two-dimensional (axisymmetric) mean field models in a spherical wedge domain, r 1 ≤ r ≤ r 2 , θ 1 ≤ θ ≤ π/2, where r, θ, φ are spherical polar coordinates. The choice of this "wedge" shaped domain is motivated by recent Direct Numerical Simulations (DNSs) of forced and convective dynamos in spherical wedges cut from spheres (Mitra et al., 2010b;Käpylä et al., 2010), and the intention to make a similar development of this work.
We consider an α²Ω mean field model with a "dynamical alpha" in the presence of an additional mean flow U. In the simplest case, where we consider no wind and no meridional circulation, the mean flow is in the form of a uniform rotational shear given by U = U_shear = φ̂ S (r − r₀) sin θ. For the more realistic cases we use

U = U_shear + U_wind + U_circ,   (1)

where U_wind and U_circ are respectively the large-scale velocities of the wind and the circulation. The particular forms we use are given in Sections 2.1 and 2.2 below. Thus, we integrate

∂B/∂t = ∇ × (U × B + E − η ∇ × B),   (2)

∂α_M/∂t = −2 η_t k_f² ( E · B / B_eq² + α_M / Re_M ) − ∇ · (α_M U),   (3)

where E = αB − η_t J, and α = α_M + α_K is the sum of the magnetic and kinetic α-effects respectively. The magnetic Reynolds number is defined through Re_M/3 ≡ η_t/η, and B_eq is the equipartition field strength. We take η_t = 1, B_eq = 1 and k_f = 100 in our simulations. Here Eq. (2) is the standard induction equation for mean field models and Eq. (3) describes the dynamical evolution of α; see Blackman & Brandenburg (2002). The last term on the right hand side of Eq. (3) models the advective flux of magnetic helicity. We solve Equations (2) and (3) using the PENCIL CODE (http://pencil-code.googlecode.com/), which employs a sixth order centered finite-difference method to evaluate the spatial derivatives and a third order Runge-Kutta scheme for time evolution.
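To make the role of the advective helicity flux concrete, the toy model below integrates a zero-dimensional caricature of Eq. (3), coupled to a simple α² growth law for the mean field, with the wind represented by a loss term −α_M/τ_wind. This is a heavily simplified illustration, not the axisymmetric system actually solved with the PENCIL CODE, and all parameter values are chosen only for the sketch.

```python
import numpy as np

# Zero-dimensional caricature of dynamical alpha quenching (dimensionless units).
eta_t, k1, kf = 1.0, 1.0, 100.0   # turbulent diffusivity, mean-field and forcing wavenumbers
alpha_K = 2.0                      # kinetic alpha, chosen above the dynamo threshold
B_eq = 1.0
Re_M = 1e5
eta = 3.0 * eta_t / Re_M           # microscopic diffusivity, using Re_M/3 = eta_t/eta

def run(tau_wind, dt=1e-3, tmax=200.0):
    """Integrate the toy alpha^2 dynamo; tau_wind = np.inf switches the wind term off."""
    B2, alpha_M = 1e-8, 0.0
    for _ in range(int(tmax / dt)):
        alpha = alpha_K + alpha_M
        # Mean-field energy: dB^2/dt = 2 (alpha - (eta_t + eta) k1) k1 B^2
        B2 += dt * 2.0 * (alpha - (eta_t + eta) * k1) * k1 * B2
        # Analogue of Eq. (3), with the advective flux modelled as -alpha_M / tau_wind;
        # advanced semi-implicitly because the coupling term is stiff.
        prod = -2.0 * eta_t * kf**2 * (alpha_K - eta_t * k1) * B2 / B_eq**2
        damp = 2.0 * eta_t * kf**2 * (B2 / B_eq**2 + 1.0 / Re_M) + 1.0 / tau_wind
        alpha_M = (alpha_M + dt * prod) / (1.0 + dt * damp)
    return B2

# Magnetic energy (in units of B_eq^2) after 200 turbulent diffusion times:
print("without wind:", run(np.inf))   # field remains strongly quenched
print("with wind   :", run(1.0))      # helicity removal lets it grow substantially further
```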
Our aim here is to study the effects of the various mechanisms discussed above in alleviating the catastrophic quenching of the magnetic field as Re M increases.
The wind and the "corona"
In order to include the effects of the solar wind we must include an outer region in our model through which the wind flows, by extending the outer boundary beyond the convection zone to radius r_3 > r_2. We shall refer to the region r_2 ≤ r ≤ r_3 as the 'corona'. We take the wind to be strong in the corona and to grow weaker as we go into the convection zone. This is represented by choosing a form for U_wind whose radial component increases smoothly from small values inside the convection zone towards U_0 in the corona, where U_0 and w are control parameters which determine the strength of the wind speed and its depth of penetration into the convection zone, respectively. Larger values of w correspond to deeper penetration. We let the kinematic α-effect go to zero in the corona by choosing a profile for α_K that falls smoothly to zero across r = r_2 (over a width w_α), with α_0 = 16.
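The exact functional form of U_wind is not reproduced above. As an illustration, a smooth radial step with amplitude U_0 and transition width w (the two control parameters just described) can be written as in the sketch below; the tanh profile is an assumed form chosen to mimic the qualitative behaviour described in the text, not necessarily the precise expression used in our runs.

```python
import numpy as np

def wind_radial_velocity(r, U0=2.0, w=0.3, r2=1.5):
    """Illustrative radial wind profile: small deep in the convection zone, -> U0 in the corona.

    U0 sets the asymptotic wind speed and w the depth to which the wind penetrates
    below r = r2 (larger w => deeper penetration); the tanh step is an assumed form.
    """
    return 0.5 * U0 * (1.0 + np.tanh((r - r2) / w))

r = np.linspace(0.7, 2.0, 6)
print(np.round(wind_radial_velocity(r), 3))
```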
The meridional circulation
We consider the effects of a meridional circulation by including a velocity U_circ, whose magnitude is controlled by the parameter v_amp and whose effective depth of penetration into r > r_2 is determined by w_circ. As the characteristic speed of the circulation, v_circ, we take the maximum absolute magnitude of the θ component of U_circ at r = r_2, i.e. at the surface of the Sun. Helioseismology shows this velocity to be about 10 to 20 metres per second in the Sun. A typical velocity field is shown in Fig. 1, and the profile of α_K is shown in Fig. 2. In these Figures, the parameters are chosen to be α_0 = 16, w_α = 0.2, U_0 = 2, r_2 = 1.5, w = 0.3, v_amp = 75, r_circ = 0.98, w_circ = 0.02.
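A divergence-free meridional circulation can be constructed from a stream function, which guarantees mass conservation by construction. The single-cell toy profile below illustrates the idea; it is an assumed stand-in for the actual U_circ prescription (whose amplitude is set by v_amp and whose radial confinement is controlled by r_circ and w_circ), not the profile used in our model.

```python
import numpy as np

def circulation_velocity(r, theta, v_amp=75.0, r_in=0.7, r_out=1.0):
    """Illustrative single-cell meridional circulation in a spherical shell.

    Built from a Stokes stream function psi(r, theta), so that the flow is
    divergence-free:  U_r     =  (1 / (r^2 sin(theta))) d(psi)/d(theta),
                      U_theta = -(1 / (r    sin(theta))) d(psi)/d(r).
    The cell shape below is a toy choice, not the profile used in the paper.
    """
    def psi(rr, th):
        return (v_amp * rr**2 * np.sin(np.pi * (rr - r_in) / (r_out - r_in))
                * np.sin(th)**2 * np.cos(th))

    dr, dth = 1e-6, 1e-6
    U_r = (psi(r, theta + dth) - psi(r, theta - dth)) / (2.0 * dth) / (r**2 * np.sin(theta))
    U_t = -(psi(r + dr, theta) - psi(r - dr, theta)) / (2.0 * dr) / (r * np.sin(theta))
    return U_r, U_t

Ur, Ut = circulation_velocity(0.85, np.pi / 3.0)
print(round(Ur, 2), round(Ut, 2))
```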
Boundary conditions
For the magnetic field we use perfect conductor boundary conditions both at the base of the convection zone (at r = r_1) and at the lateral boundary at the higher latitude (θ = θ_1). We assume the magnetic field to be antisymmetric about the equator (θ = π/2), and at the outer radial boundary of the corona (r = r_3) we use the normal field condition. These conditions are implemented in terms of the magnetic vector potential.
Fig. 1.
Plot of the velocity field: the arrows show the meridional circulation and the wind, and the contours show the angular velocity. The solar radius is taken to be unity. Although our domain extends out to 5 solar radii, for clarity only a part of it is shown here. The curve at unit radius denotes the surface of the Sun.
Fig. 2.
The kinetic alpha effect, α K , and wind radial velocity, U r , as a function of radial coordinate r for three different latitudes, equator (upper curve), mid-latitude (middle curve) and latitude of upper boundary (lower curve). Note that the curves for the radial velocities differ only in r < 1, where the meridional circulation is non-zero.
For α_M, on those boundaries where the boundary condition on the magnetic field is "perfect-conductor" (i.e. at the bottom of the convection zone and at the higher latitude), we choose a condition corresponding to zero flux of α_M through the boundary. At the other two boundaries, we recall that since the PDE being solved is of first order in space we only need to specify one condition, which we have already imposed at the lower boundary. To calculate the derivative at the outer boundary we therefore just extrapolate the solution from inside to outside by a second order polynomial extrapolation. This is equivalent to using a second order one-sided finite difference at these boundaries.
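The extrapolation just described amounts to fitting a quadratic through the last three interior points; the equivalent second-order one-sided derivative at the outer boundary is illustrated below (the grid and the test function are arbitrary).

```python
import numpy as np

def one_sided_derivative(f, dx):
    """d f / d r at the last grid point from the last three values (second-order accurate),
    equivalent to extrapolating the solution outward with a quadratic polynomial."""
    return (3.0 * f[-1] - 4.0 * f[-2] + f[-3]) / (2.0 * dx)

# Quick check against a known function: f(r) = r^3 on a uniform grid, so f'(2) = 12.
r = np.linspace(1.0, 2.0, 101)
f = r**3
print(one_sided_derivative(f, r[1] - r[0]))   # ~ 12.0
```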
As the initial condition for the magnetic field we choose our seed magnetic vector potential from a random Gaussian distribution with no spatial correlation and root-mean-square value of the order of 10 −4 times the equipartition field strength. Also, initially we take α M = α − α K = 0.
Results
In order to demonstrate that our dynamo is excited, and displays both oscillations and equatorward migration, we first use the velocity field and the kinetic α profile shown in Fig. 1, with Re M = 3 × 10 2 and solve Eqs. (2) and (3) simultaneously. The resulting space-time diagram for the three components of the magnetic field is shown in Fig. 3. This is a typical example of the "butterfly" diagrams that are obtained with this model.
As mentioned above, an important feature of MF dynamos in the absence of wind and meridional circulation (i.e. when U 0 = 0 and v amp = 0) is that they are severely quenched as Re M increases. To show this we have plotted in Fig. 4 (a) the time-series of the total magnetic energy E M = (1/2)⟨B 2 ⟩ for several values of Re M . Here, ⟨...⟩ denotes averaging over the domain r 1 ≤ r ≤ r 2 . Clearly the total magnetic energy decreases with Re M . Similar quenching, as a result of the dynamical evolution of the alpha term, has been seen in many different models of the solar dynamo (see, e.g., Chatterjee et al., 2010, 2011, for some recent examples), and also in models of galactic dynamos (Shukurov et al., 2006). To substantiate this further we plot in Fig. 5 the time-averaged magnetic energy ⟨E M ⟩ t as a function of Re M , where the time averaging is done over several diffusion times (T) in the saturated nonlinear stage (i.e. after the kinematic growth phase is over). Time averaging is here indicated by the subscript t after the averaging sign. As can be seen, in the absence of wind, i.e. with U 0 = 0, the time-averaged energy falls off approximately as Re M −1 . This gives a quantitative measure of the quenching. (The point at Re M = 2 × 10 6 appears anomalous; we believe this is because we have not run the code for long enough to achieve the final saturated state.) To demonstrate the ability of the wind and circulation to act together to alleviate quenching, we have plotted in Fig. 4 (b) the time-series of E M for several different values of Re M in the presence of the wind (with U 0 = 1, v amp = 75 and depth parameter w = 0.3). The dependence of the time-averaged magnetic energy ⟨E M ⟩ t on Re M in this case is also plotted in Fig. 5. Comparing Fig. 4 (b) with Fig. 4 (a), and also comparing the two lines in Fig. 5, we clearly see that with the parameters chosen the wind in conjunction with the circulation is capable of alleviating quenching. This is one of our principal results. Note that the saturated mean field energy that we observe at large Re M is still rather small, only slightly exceeding 10 −4 of the equipartition value.
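The Re M −1 scaling quoted above can be read off from a log-log fit; the sketch below uses invented placeholder values of ⟨E M ⟩ t purely to illustrate the procedure, not the paper's data.

```python
import numpy as np

# Placeholder (Re_M, <E_M>_t) pairs illustrating catastrophic quenching; an
# exponent close to -1 reproduces the scaling discussed in the text.
re_m = np.array([3e2, 3e3, 3e4, 3e5, 3e6])
e_m  = np.array([3e-2, 3e-3, 3e-4, 3e-5, 3e-6])

slope, intercept = np.polyfit(np.log10(re_m), np.log10(e_m), 1)
print(f"fitted power-law exponent: {slope:.2f}")   # ~ -1.00
```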
Next we attempt to isolate the role of each parameter in our model. First we make a detailed systematic study of how quenching depends on the two parameters U 0 and w of our model, for a fixed value of Re M = 10 7 and zero circulation, v amp = 0. For each pair of parameters we ran our code for up to 50 diffusion times. In some cases the time series of E M declines as a function of time initially, but at larger times recovers to unquenched values, e.g. U 0 = 1 in Fig. 6. In some other cases we observe that the recovery is merely temporary and at large times E M goes to zero. As an example we first show in Fig. 6 the time-series of E M for various values of U 0 , for a fixed w = 0.3. Clearly, as the wind velocity increases, the transport of magnetic helicity out of the domain at first becomes more efficient and we observe less quenching. But this alleviation of quenching must have its limits, because for a large enough wind speed the magnetic field itself will be advected out of the domain faster than it is generated, thus killing the dynamo (see, e.g., Shukurov et al., 2006; Brandenburg et al., 1993; Moss et al., 2010). However, with penetration factor w = 0.3 we did not find this effect, even when U 0 = 100, but with w = 0.5, winds with U 0 ≥ 20 kill the dynamo. We deduce that it is necessary to advect large-scale field from a substantial proportion of the dynamo region for the dynamo to be killed by advection.
Then we consider the parameter w, which controls the depth of penetration of the wind into the convection zone. The dependence of the time-series of magnetic energy on this parameter is shown in Fig. 7, for Re M = 10 7 and U 0 = 2. We also note that there is a subset of parameters for which the transients are so long that it is difficult to decide, within reasonable integration times, whether the asymptotic state is a quenched dynamo or not. In our parameter space, i.e. in the U 0 − w plane, the positions of the quenched and unquenched runs are shown in Fig. 8, summarizing the dependence of quenching on these parameters. For all the runs that we label as unquenched, the butterfly diagram is also restored at large times.
The effect of circulation
Next we consider the effect of meridional circulation on the quenching. If the wind penetrates too deeply inside the convection zone, then we expect that circulation will have either no effect or just a marginal effect, because the wind by itself will be efficient enough in removing small-scale magnetic helicity from deep within the domain. But if the wind does not penetrate so deeply, circulation may play an important role in dredging magnetic helicity from deep inside the domain to near the surface, from where the wind can remove it. To see whether this idea can work, we select one point in the phase diagram in Fig. 8, where we obtain the quenched solution marked by the arrow. Then we turn on the meridional circulation. The comparison between the time-series of E M with and without circulation is shown in Fig. 9. It can be seen that the final magnetic energy reached does not depend on the amplitude of circulation once this amplitude exceeds a critical value. Note that this alleviation of quenching by the circulation only works for those points in the U 0 − w parameter space which lie close to the boundary between the quenched and non-quenched states in the phase diagram. For points with very small w, i.e. in cases where the wind penetrates very little into the convection zone, even a very strong circulation cannot remove the quenching.
Another possible mechanism that can transport magnetic helicity from the bulk of the convection zone to its surface is the diffusion of magnetic helicity. This can be described by adding the term κ t ∇ 2 α M to the right hand side of Eq. (3), where κ t is an effective turbulent diffusivity of the magnetic helicity. Numerical simulations have estimated κ t ∼ 0.3η t (Mitra et al., 2010a). We have checked that such a diffusive flux of magnetic helicity can alleviate quenching at least as effectively as the meridional circulation, in the presence of the wind.
Finally we note that the alleviation of quenching as described here is independent of some details of the underlying dynamo model. In particular it does not depend on whether we have an α 2 dynamo or an α 2 Ω dynamo. To check this assertion explicitly we also solved the same problem but with U shear = 0 in Eq. (1). The results are shown in Fig. 10 where we compare the alleviation of quenching for the α 2 dynamo (top panel) against the corresponding α 2 Ω dynamo (bottom panel).
Conclusions
We have introduced two observationally motivated effects that may help reduce the catastrophic quenching found in mean field dynamo models. An outward flow from the dynamo region ("wind") is found to be effective in allowing the field to saturate at finite values of the field strength. The wind alone is, however, only effective when it penetrates quite deeply into the convection zone. These effects are modified to some extent by the presence of a meridional circulation, which has the ability to transport small scale helicity from deep in the convection zone to near the surface, from where the wind can more effectively remove it. However, the effects of circulation in our model are not dramatic. It is also true that the saturation fields in our model are rather small compared to the equipartition field strength. This was also observed in the model of Shukurov et al. (2006); see also Moss & Sokoloff (2011). One possibility, that we have not explored, is that the neglected inhomogeneity of the solar convection zone may be important.

Fig. 10. The behaviour of the time-averaged magnetic energy as a function of magnetic Reynolds number Re M , which shows the alleviation of quenching, with wind speed U 0 = 2 and depth parameter w = 0.8 (upper panel). Also shown is the corresponding plot in the absence of a wind, which clearly shows catastrophic quenching. The lower panel is for an α 2 Ω dynamo.
It is interesting to try to estimate the various parameters of our model in physical units and to compare them with the solar values. We have taken the solar radius as the unit of length (7 × 10 10 cm). The α effect can be taken to be a measure of the small scale velocity in the Sun, α ∼ (1/3)|u|, where u = U − ⟨U⟩. The Baker & Temesvary (1966) tables give estimates for small-scale velocities in the convection zone of the Sun of between 4 × 10 3 -2 × 10 5 cm s −1 , in regions where convection is efficient. As we have considered the convection zone to be homogeneous we consider 10 4 cm s −1 to be a reasonable estimate. Then, as α = 16 in our units, the unit of velocity is ∼ 10 4 /(3 × 16) cm s −1 ∼ 2 × 10 2 cm s −1 , and the unit of time, obtained from the length and velocity units given above, is ∼ 3 × 10 8 s ≈ 10 yrs. Thus our characteristic cycle period, T ≈ 1, corresponds to approximately 10 years. Then the maximum wind speed we have used (U 0 = 10) would correspond to 2 × 10 3 cm s −1 . The speed of the meridional circulation at the surface in our units is v surf = 0.47 for v amp = 75. Translated to physical units this becomes v surf ≈ 1 m s −1 , which is of the same order of magnitude as the solar meridional velocity. If in the estimates above we use the maximum and minimum values of the small-scale velocity as given by the Baker & Temesvary tables, instead of the mean, the maximum surface speed of meridional circulation will be between 0.4 m s −1 and 20 m s −1 . The speed of the solar wind that we have used is significantly smaller than that of the actual solar wind, but on the other hand the real solar wind is a highly fluctuating turbulent flow, whereas we have considered a constant outflow.
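For convenience, the sketch below redoes the unit-conversion arithmetic from this paragraph; all input figures are taken from the text.

```python
# Back-of-envelope check of the unit estimates above (values from the text).
R_sun      = 7e10        # cm, unit of length (solar radius)
u_small    = 1e4         # cm/s, assumed small-scale convective velocity
alpha_phys = u_small / 3.0      # alpha ~ (1/3)|u|
alpha_code = 16.0               # alpha_0 in code units

unit_velocity = alpha_phys / alpha_code            # ~2e2 cm/s
unit_time     = R_sun / unit_velocity              # ~3e8 s
print(unit_velocity, "cm/s per code unit")
print(unit_time / 3.15e7, "yr  (cycle period T ~ 1 -> ~10 yr)")
print(10 * unit_velocity, "cm/s for U_0 = 10")
print(0.47 * unit_velocity / 100, "m/s surface circulation for v_amp = 75")
```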
To summarise, we have presented a very simplified model, in order to explore some basic ideas relevant to the solar dynamo. We cannot claim to have "solved" the quenching problem, but feel we have identified, and to some extent quantified, mechanisms of potential interest. We appreciate that there are a number of desirable improvements, even in this MF formulation. These include using a more realistic solar-like rotation law, investigation and comparison of the effects of other fluxes of magnetic helicity (e.g. Zhang et al., 2006), the diffusive magnetic helicity flux (Mitra et al., 2010a), the inclusion of compressibility in some form, but most importantly perhaps, using a more realistic model for the solar wind allowing for magnetic helicity loading via coronal mass ejections. Notwithstanding these possible shortcomings, we do feel that our results provide motivation for further investigations in the context of solar and stellar dynamos. Investigations using DNS (e.g. Warnecke & Brandenburg, 2010) appear likely to be especially interesting, and we hope to pursue this approach. | 2011-01-29T16:06:48.000Z | 2010-08-25T00:00:00.000 | {
"year": 2010,
"sha1": "03590d738ecb0bb52f57fa94bac7907d284373a0",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2011/02/aa15637-10.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "0aed5ab4a75f9ba57855b5519c69c46dc7f318ca",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
53222951 | pes2o/s2orc | v3-fos-license | Prenatal Exposure to Benzophenone-3 Impairs Autophagy, Disrupts RXRs/PPARγ Signaling, and Alters Epigenetic and Post-Translational Statuses in Brain Neurons
The UV absorber benzophenone-3 (BP-3) is the most extensively used chemical substance in various personal care products. Despite that BP-3 exposure is widespread, knowledge about the impact of BP-3 on the brain development is negligible. The present study aimed to explore the mechanisms of prenatal exposure to BP-3 in neuronal cells, with particular emphasis on autophagy and nuclear receptors signaling as well as the epigenetic and post-translational modifications occurring in response to BP-3. To observe the impact of prenatal exposure to BP-3, we administered BP-3 to pregnant mice, and next, we isolated brain tissue from pretreated embryos for primary cell neocortical culture. Our study revealed that prenatal exposure to BP-3 (used in environmentally relevant doses) impairs autophagy in terms of BECLIN-1, MAP1LC3B, autophagosomes, and autophagy-related factors; disrupts the levels of retinoid X receptors (RXRs) and peroxisome proliferator-activated receptor gamma (PPARγ); alters epigenetic status (i.e., attenuates HDAC and sirtuin activities); inhibits post-translational modifications in terms of global sumoylation; and dysregulates expression of neurogenesis- and neurotransmitter-related genes as well as miRNAs involved in pathologies of the nervous system. Our study also showed that BP-3 has good permeability through the BBB. We strongly suggest that BP-3-evoked effects may substantiate a fetal basis of the adult onset of neurological diseases, particularly schizophrenia and Alzheimer’s disease.
Introduction
The ultraviolet (UV) absorber benzophenone-3 (2-hydroxy-4-methoxybenzophenone, oxybenzone, 2OH-4 MeO-BP, or BP-3) is the most extensively used chemical substance as a UV filter, especially (but not only) in various personal care products [1]. Additionally, it is a plastic and textile ingredient as well as a component of inks and lacquers mainly used for protection from sun-induced fragility. The production and consumption of BP-3 is enormous; European production alone has been estimated at 100-1000 metric tons per year and shows an upward trend in response to the increasing demand for health protection against skin cancer [2]. In September 2017, based on scientific reports, the European Union Commission limited the use of BP-3 from 10 to 6% in cosmetic sunscreen products [3]. In 2017, the US Centers for Disease Control and Prevention (CDC) demonstrated that approximately 97% of people are exposed to BP-3 [4].
Prenatal and early postnatal exposures to BP-3 seem to be undeniable. Current data provide evidence that BP-3 easily crosses through the placental barrier since it has been observed in amniotic fluid, placental tissue, cord blood, and fetal blood in research studies with human participants [5][6][7]. Moreover, the course of pregnancy has been shortened by BP-3, and suboptimal fetal growth has been revealed after maternal exposure to BP-3 [8][9][10]. Even after birth, newborns and infants are exposed to BP-3 since this chemical has been identified in human breast milk samples from Switzerland and Spain [11,12]. Furthermore, the blood-brain barrier (BBB) does not seem to be an obstacle for BP-3 either; accumulated BP-3 has been demonstrated in post-mortem adult brains [13]. Additionally, prenatal BP-3 exposure has been linked to improper migration of enteric neural crest cells during embryogenesis, resulting in Hirschsprung's disease in offspring [14]. Despite the fact that BP-3 exposure is extremely widespread and affects prenatal features, knowledge of the impact of BP-3 on the development of the nervous system is negligible.
The mechanism of neurogenesis (either during neural development or in the adult brain) is complex. It requires homeostasis of different signaling pathways and is tightly controlled by epigenetic and post-translational modifications. The histone deacetylase superfamily is a class of enzymes consisting of histone deacetylases (HDAC) and sirtuins. They are primarily responsible for removing acetyl groups from an amino acid on a histone, thus permitting the transcription of DNA. Moreover, sirtuins are also regulators of metabolic pathways, DNA repair, and the stress response. Small ubiquitin-like modifiers (SUMOs) are proteins that take part in post-translational modifications primarily to provide protein stability. An impairment or disruption of embryonic/ postnatal neural development contributes to the majority of neuropsychiatric disorders (e.g., autism, schizophrenia, attention-deficit hyperactivity disorder, depression, and bipolar disorder). Furthermore, recent studies indicated that dysfunction of neonatal neurogenesis is associated with the etiology of neurodegeneration, e.g., Parkinson's disease [15].
Autophagy is a process of cellular digestion of toxic cytoplasmic material (e.g., misfolded proteins or dysfunctional organelles), and it is a mechanism that provides energy during starvation or cellular stress. Autophagy also plays an essential role in the central nervous system during neurogenesis, and its dysregulation is associated with the etiology of neurodevelopmental disorders and neurodegeneration [16]. Autophagy is regulated by a series of autophagy-related genes (Atg) and other crucial factors (such as BECLIN-1, MAP1LC3, or AMBRA1). Autophagy is important during the development and maturation of axons, dendrites, and synapses. Knockout mice without Atg5, Atg7, Beclin-1, or Ambra1 exhibit early lethality and neurodegeneration. Mice lacking Atg7 in the central nervous system (CNS) revealed abnormalities in the cerebral and cerebellar cortices, indicating that autophagy is responsible for the homeostasis of neural cells [17]. Moreover, embryonic disruption of autophagy has an adverse impact on adult neurogenesis throughout the lifespan, with adult neurons exhibiting dramatic aging [18].
Nuclear receptors participate in a majority of life-dependent processes, including embryogenesis, neural development, and lipid metabolism. For most of the non-steroid nuclear receptors (class II nuclear receptors), the retinoid X receptor (RXR) is an obligatory heterodimerization partner. In mammals, RXRs are encoded by three distinct genes located on different chromosomes-RXRα, RXRβ, and RXRγ. RXRα has been identified primarily as a heterodimerization partner of the majority of nuclear receptors [19]. Studies have shown that retinoid signaling is implicated in the health and disease of the nervous system. Knockout mice with RXRα or RXRβ deficiencies are embryo-lethal, and RXRγ-knockout mice show dysfunction in oligodendrocyte differentiation, spatial learning, and memory function [20,21]. Moreover, dysfunctional retinoid signaling has been involved in cognitive impairments, schizophrenia, and depression [22][23][24]. One of the heterodimerization partners of RXR is peroxisome proliferator-activated receptor gamma (PPARγ), which takes part in a wide-range of cellular processes, including lipid and glucose metabolism, apoptosis, and autophagy. PPARγ is mainly expressed in fat tissue and the brain.
The present study aimed to explore the mechanisms of prenatal exposure to BP-3 in neuronal cells, with particular emphasis on autophagy and nuclear receptor signaling as well as the epigenetic and post-translational modifications occurring in response to BP-3. To observe the impact of prenatal exposure to BP-3, we administered BP-3 to pregnant mice, and next, we isolated cells from pretreated embryos for primary neocortical culture. The influence of BP-3 on the autophagic process was assessed by the expression of autophagy-related factors (BECLIN1, ATG7, NUP62, MAP1LC3A, MAP1LC3B), as well as based on the detection of autophagosomes. The impact of BP-3 on RXRs and PPARγ was analyzed via measurement of specific mRNA and protein levels in neocortical cells (by qPCR, ELISA, and western blot). Additionally, to establish the involvement of prenatal BP-3 on microRNA (miRNA) expression, microarray and qPCR analyses were employed. Epigenetic and post-transcriptional modifications were assessed by measuring activity including histone acetyltransferase (HAT), HDAC, sirtuins, and sumoylation. Overall, the effect of prenatal exposure to BP-3 was also evaluated by microarray analysis of the expression profiles of neurogenesis-related genes and neurotransmitter receptors. BBB permeability for BP-3 was assessed by the BBB Kit™ and mass spectrophotometry.
Animals and Treatment
In this study, 12 pregnant Albino Swiss mice (Charles River Laboratories, Sulzfeld, Germany) at 7-16 days of gestation were housed individually in a controlled environment (i.e., 21 ± 1°C, 40-50% humidity, water, and food-ad libitum; natural sequence of day and night-12 h of light and 12 h of darkness with lights on at 7 a.m.) as previously described [25,26]. Pregnant mice were administered for 10 days, once a day with 50 mg/kg BP-3 at 7-16 days of gestation as subcutaneous injections. The chosen dose was environmentally relevant and did not cause any visible unwanted effects. A single whole human body application of sunscreen cream (2 mg/cm 2 of cream) provides 40 g for an average body area of 2.0 m 2 [27]. Until September 2017, the maximum authorized concentration of BP-3 as a UV filter was 10% in Europe, which caused a single application of the cream to provide 4 g of BP-3 and resulted in a 52-61 mg/kg exposure to BP-3 for women (the average weight of European Union female residents is 65.8 kg, and the average US female resident weighs 76.4 kg). The BP-3 used for experiments was dissolved in peanut oil and subcutaneously injected in a volume of 10 ml/kg body weight. Control pregnant mice were injected with an equal volume of the solvent. In this study, 6 pregnant mice were injected with peanut oil only, and another 6 mice were treated with BP-3 (50 mg/kg). All procedures were performed in accordance with the National Institutes of Health Guidelines for the Care and Use of Laboratory Animals and the European Communities Council Directive for the Care and Use of Laboratory Animals (86/609/EEC) and were approved by the Committee for Laboratory Animal Welfare and the Ethics Committee of the Institute of Pharmacology PAS in Krakow, Poland (resolution nos. 1155/2015 and 489/2015). Animal care followed official governmental guidelines, and all efforts were made to minimize suffering as well as the number of animals used.
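As a quick cross-check of the dose reasoning above, the sketch below redoes the arithmetic with the figures quoted in the text (40 g of cream per application, 10% BP-3, and the stated average body weights).

```python
# Cross-check of the exposure arithmetic above (all figures from the text).
cream_per_area = 2e-3          # g of cream per cm^2
body_area_cm2  = 2.0e4         # 2.0 m^2 average body area
cream_total_g  = cream_per_area * body_area_cm2     # 40 g per application
bp3_total_mg   = cream_total_g * 0.10 * 1000        # 10% BP-3 -> 4000 mg

for weight_kg in (65.8, 76.4):                       # average EU / US female weight
    print(f"{bp3_total_mg / weight_kg:.0f} mg/kg per application")   # ~61 and ~52
```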
Primary Neocortical Cell Cultures
Neocortical tissue for primary cultures was prepared from Swiss mouse embryos, which were exposed to BP-3 or peanut oil between 7 and 16 days of gestation. Afterwards, embryonic offspring were subjected to the isolation of cerebral tissue at 17 days of gestation. As described previously, the neocortical cells were suspended in estrogen-free neurobasal medium supplemented with B27 on poly-ornithine (0.01 mg/ml)-coated multiwell plates at a density of 2.0 × 10 5 cells/cm 2 and maintained at 37°C in a humidified atmosphere containing 5% CO 2 for 7 days in vitro (DIV) prior to experimentation. The number of astrocytes, as determined by the content of intermediate filament glial fibrillary acidic protein (GFAP), did not exceed 10% for all cultures as previously described [25,[28][29][30].
Observation

During the procedure of brain tissue isolation from embryos prenatally exposed to BP-3, oil droplets on the surface of isolated brain structures and in the isolation buffer were noticed.
qPCR Analysis of Autophagy-Related Genes, Nuclear Receptors, and miRNAs

Total RNA was purified from 7 DIV neocortical cells using the RNeasy Mini Kit or the miRNeasy Mini Kit (Qiagen, Valencia, CA) according to the manufacturer's instructions as previously described [25,[28][29][30]. The RNA quantification was spectrophotometrically determined at 260 nm and 260/280 nm (ND/1000 UV/Vis; Thermo Fisher NanoDrop, USA). cDNA was synthesized using the High Capacity cDNA-Reverse Transcription Kit (Thermo Fisher Scientific, USA) or the miScript II RT Kit (Qiagen, Valencia, CA). Both the reverse transcription reaction and qPCR were performed on a CFX96 Real-Time System (Bio-Rad, Hercules, CA, USA). The products of the reverse transcription reaction were amplified using TaqMan Gene Expression Master Mix containing TaqMan primer probes specific to the genes encoding Hprt, Becn1, Nup62, Atg7, Map1lc3a, Map1lc3b, Rxrα, Rxrβ, Rxrγ, Pparγ, SNORD95, miR-19b, miR-33, miR-489, and miR-509. Amplification was performed in a total volume of 20 μl containing 10 μl of TaqMan Gene Expression Master Mix and 1.0 μl of reverse transcription product as the PCR template. A standard qPCR procedure was utilized: 2 min at 50°C and 10 min at 95°C followed by 40 cycles of 15 s at 95°C and 1 min at 60°C. The threshold value (C t ) for each sample was set during the exponential phase, and the delta C t method was used for data analysis. To evaluate reference gene stability, the RefFinder web-based comprehensive tool was used. Hprt (the gene encoding hypoxanthine phosphoribosyltransferase) was selected as the reference gene for Becn1, Nup62, Atg7, Map1lc3a, Map1lc3b, Rxrα, Rxrβ, Rxrγ, and Pparγ. SNORD95 (the small nucleolar RNA, C/D box 95) was chosen as the reference gene for miR-19b, miR-33, miR-489, and miR-509. The results were obtained from three independent experiments.
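As an illustration of the delta-C t style analysis mentioned above, the sketch below computes a relative expression value with Hprt as the reference gene; the C t values are invented placeholders, and the 2^-ΔΔCt (Livak) form is assumed.

```python
# Minimal sketch of a 2^-ddCt calculation (assumed Livak-style normalization).
ct_target_ctrl, ct_ref_ctrl = 24.0, 20.0   # control sample: gene of interest, Hprt
ct_target_bp3,  ct_ref_bp3  = 24.6, 20.1   # BP-3 sample:    gene of interest, Hprt

d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
d_ct_bp3  = ct_target_bp3 - ct_ref_bp3
fold_change = 2.0 ** -(d_ct_bp3 - d_ct_ctrl)
print(f"expression relative to control: {fold_change:.2f}")   # <1 means downregulated
```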
The ELISA Analyses of Autophagy-Related Factors, RXRs, and PPARγ
Briefly, the levels of BECLIN-1, NUP62, MAP1LC3A, MAP1LC3B, RXRα, RXRβ, RXRγ, and PPARγ in neocortical cells were detected with the use of ELISA assays, according to the manufacturer's instructions [25,[28][29][30]. The standards and denatured cell extracts were added to a 96-well plate precoated with monoclonal antibodies (specific for BECLIN-1, NUP62, MAP1LC3A, MAP1LC3B, RXRα, RXRβ, RXRγ, and PPARγ). After washing, the substrate solution was added to the wells. The enzymatic reaction yielded a blue product. The absorbance was measured at 450 nm and was proportional to the amount of specific proteins in the sample. The absorbance measurements were performed on an Infinite M200pro microplate reader (Tecan, Männedorf, Zürich, Switzerland). The protein concentration of each sample was determined using Bradford reagent (Bio-Rad Protein Assay). The protein levels of the experimental samples are expressed as a percentage of control ± SE and in pg/μg protein. The results were obtained from three independent experiments.
Profiling of miRNAs Using Microarray Assays
The miRNeasy Mini Kit was used for RNA purification (including RNA from approx. 18 nucleotides) using spin columns to extract RNA from neocortical cells that were cultured for 7 DIV. The entire procedure was conducted according to the manufacturer's protocol. The concentration of RNA was determined by measuring the absorbance at 260 nm and 260/ 280 nm (ND/1000 UV/Vis; Thermo Fisher NanoDrop, USA). Total RNA containing miRNA was required for miScript miRNA PCR Arrays (Qiagen, CA, USA). The reverse transcription reaction was conducted using miScript II RT Kit (Qiagen, CA, USA) according to the manufacturer's instruction. Thus, the material obtained enabled qPCR profiling of mature miRNA using miScript miRNA PCR Arrays (Qiagen, CA, USA). The C t values for all wells were exported to a blank Excel spreadsheet and were analyzed with web-based software (pcrdataanalysis.sabiosciences.com/mirna). To normalize miRNA expression, the sn/snoRNA was used (SNORD61, SNORD68 SNORD95, SNORD96A). The results were obtained from two independent experiments. The reverse transcription reaction and qPCR profiling were performed on a CFX96 Real-Time System (Bio-Rad, Hercules, CA, USA).
Pathway-Focused Gene Expression Analysis Using Microarray Assays
The RT 2 Profiler PCR Array is a qPCR-based analysis for multiple gene expression profiling. The total RNA was extracted from neocortical cells cultured for 7 DIV using the RNeasy Mini Kit (Qiagen, CA, USA) according to the manufacturer's instructions and spectrophotometrically measured. A total of 1 μg of RNA was reverse-transcribed to cDNA using the RT 2 First Strand Kit (Qiagen, CA, USA) and suspended in a final volume of 20 μl as previously described [25,29,30]. Each cDNA sample was prepared for further use in qPCR with RT 2 SYBR Green Mastermix. To analyze the signaling pathway, the RT 2 Profiler™ PCR Array System (Qiagen, CA, USA) was used according to the manufacturer's protocol. The C t values for all wells were exported to a blank Excel spreadsheet and were analyzed with a web-based software (www.SABiosciences.com/pcrarraydataanalysis.php). Actb (β-actin) and Gapdh (glyceraldehyde-3-phosphate dehydrogenase) were used as reference genes. The results were obtained from two independent experiments.
Measurement of Permeability Through the Blood-Brain Barrier
The BBB Kit™ (PharmaCo-Cell Company Ltd., Nagasaki, Japan) was used for the evaluation of BP-3 permeability through the BBB. The BBB Kit™ (RBT-24) is a new in vitro model of the BBB that consists of primary culture of rat (Wistar rat) brain capillary endothelial cells, pericytes, and astrocytes. The whole procedure was conducted according to the manufacturer's protocol. In brief, BP-3 was added to the upper (luminal, blood-side) insert. After 30 min, the samples were collected from the lower (abluminal, brain-side) compartment. Subsequently, the BP-3 concentrations were measured. To each sample of 900 μl of abluminal compartment solution, 300 μl of brine and 500 μl of ethyl acetate were added. Then, vials were stirred on a vortex, and the phases were allowed to separate. The concentration of BP-3 in the organic phase was evaluated by measuring the total ion current (TIC) for the molecular mass of BP-3 on a TQD Waters mass spectrometer with ESI+ ionization, coupled with an H-class UPLC. Each sample was measured in triplicate. The samples were separated on an ACQUITY UPLC BEH C18 1.7 μm 2.1 × 50 mm column, using a 4-min gradient (0.3 ml/min) increasing from 80% H 2 O-20% ACN to 100% ACN at 2.5 min, then 100% ACN for 0.5 min, and decreasing back to 80% H 2 O-20% ACN at 4 min. The BP-3 was eluted at 3.3 min. The parameters of ionization were as follows: cone voltage 30 V, capillary voltage 3.95 kV, extractor voltage 2.2 V, RF 0.1 V, source temperature 150°C, desolvation temperature 250°C. The number of replicates was 6. The apparent permeability coefficient (P app ) was calculated from the following quantities: A, the culture area (cm 2 ); V a , the volume of assay buffer in the luminal side; Δ[C] abluminal , the concentration of sample in the abluminal side; [C] luminal , the initial concentration of sample added into the luminal side; and Δt, the assay period (min).
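The kit's permeability equation itself is not reproduced in the text; the sketch below uses the commonly quoted Transwell form of the apparent permeability coefficient and maps the listed quantities onto it under that assumption. All numerical values are illustrative placeholders, not measured data, and the assay time is converted to seconds so that P app comes out in cm/s.

```python
# Hedged sketch of an apparent-permeability calculation (assumed Transwell form).
A         = 0.33          # cm^2, culture (insert) area                 -- placeholder
V_a       = 0.9           # ml,   buffer volume entering the formula    -- placeholder
c_luminal = 25.0          # uM,   initial BP-3 on the luminal side
dc_ablum  = 0.25          # uM,   concentration rise measured abluminally -- placeholder
dt_s      = 30 * 60       # s,    assay period (30 min)

p_app = (V_a * dc_ablum) / (A * c_luminal * dt_s)    # cm/s (1 ml == 1 cm^3)
print(f"P_app = {p_app / 1e-6:.1f} x 10^-6 cm/s")
```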
Detection of Autophagosome Formation
Neocortical cells on 96-well plates were used to detect the autophagosome formation according to the manufacturer's instruction for the ELISA-based format-Autophagy Assay Kit as previously described [29]. Measurement of the autophagy in the neocortical cells was performed using a proprietary fluorescent autophagosome marker (λ ex = 333 nm/λ em = 518 nm). The autophagosomes were detected using an Infinite M200pro microplate reader (Tecan, Männedorf, Zürich, Switzerland).
Measurement of HDAC and HAT Activity
The Histone Deacetylase (HDAC) Assay Kit and the Histone Acetyltransferase (HAT) Activity Fluorometric Assay Kit (Sigma-Aldrich, St. Louis, MO, USA) were used to detect enzyme activity. The procedures were performed according to the manufacturer's protocol as previously described [29].
Regarding the HDAC kit, the measured fluorescence at λ ex = 365 nm/λ em = 460 nm was proportional to the deacetylation activity. In the HAT assay, the generated product of histone acetyltransferase activity was detected fluorometrically at λ ex = 535 nm/λ em = 587 nm. The HAT kit included an active nuclear extract to be used as a positive control, and the HDAC assay contained HeLa cell lysate as a positive control. The abovementioned assays provided positive and negative controls, as well as all the reagents required for analysis. Measurements were performed using an Infinite M200 pro microplate reader (Tecan, Männedorf, Zürich, Switzerland).
Measurement of Sirtuin Activity
The sirtuin activity was measured using the Sirtuin Activity Assay Kit (BioVision, CA, USA) according to the manufacturer's instructions. The acetylated p53-AFC substrate was deacetylated by sirtuins in the presence of NAD + . The developer provided in the assay cleaved the deacetylated p53-AFC substrate and released the fluorescent group, which was detected fluorometrically at λ ex = 400 nm/λ em = 505 nm. Since HDAC are also able to deacetylate p53-AFC substrate, trichostatin A was added to the reaction to specifically inhibit HDAC activity in the samples. The fluorescence was detected using an Infinite M200pro microplate reader (Tecan, Männedorf, Zürich, Switzerland).
Measurement of Global Protein Sumoylation
The Global Protein Sumoylation Assay Kit (Abcam, Cambridge, UK) was used to quantify sumoylated protein levels in the samples. The whole procedure was conducted according to the manufacturer's protocol. The assay provided all necessary reagents to detect sumoylated protein with an anti-SUMO antibody. The kit included a positive control and thus allowed the quantification of protein sumoylation. The absorbance was measured at 450 nm using an Infinite M200pro microplate reader (Tecan, Männedorf, Zürich, Switzerland).
Data Analysis
Statistical tests were performed on raw data that were expressed as the mean arbitrary absorbance or as the fluorescence units per well containing 50,000 cells (measurements of autophagosome formation); the fluorescence units per 1.5 million cells (qPCR, microarray RT 2 Profiler™ PCR, HDAC, and sirtuin activities); the absorbance units per 1.5 million cells (HAT activity and global protein sumoylation), the mean optical density per 40 μg of protein (western blotting); or pg of BECLIN-1, NUP62, MAP1LC3A, MAP1LC3B, RXRα, RXRβ, RXRγ, and PPARγ per μg of total protein (ELISA).
One-way analysis of variance (ANOVA) was preceded by Levene's test of homogeneity of variances and was used to determine overall significance. Differences between the control and experimental groups were assessed using a post hoc Newman-Keuls test, and significant differences were designated *p < 0.05, **p < 0.01, and ***p < 0.001 versus control cultures. The results were expressed as the mean ± SE of two to three independent experiments. The number of replicates in each experiment ranged from 2 to 3, except for the measurements of autophagosomes formation, which contained 8 replicates and BBB permeability with 6 replicates.
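A minimal sketch of the statistical workflow described above (Levene's test followed by one-way ANOVA) is given below with invented data; the Newman-Keuls post hoc step is not implemented in SciPy and is omitted here.

```python
from scipy import stats

# Invented control vs BP-3 measurements, expressed as % of control.
control = [100, 97, 103, 99, 101, 98]
bp3     = [82, 79, 85, 80, 84, 81]

print(stats.levene(control, bp3))    # homogeneity of variances
print(stats.f_oneway(control, bp3))  # overall one-way ANOVA
```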
Effects of Prenatally Administered BP-3 on the mRNA and Protein Expression Levels of Autophagy-Related Factors
According to our study, prenatal administration of BP-3 impaired the mRNA levels of autophagy-related factors in embryonic neurons. Exposure of the offspring to BP-3 caused a decrease in Becn1, Nup62, Atg7, Map1lc3a, and Map1lc3b mRNA by 20-34% (Fig. 2a). ELISA kits revealed that embryos exposed to BP-3 expressed decreased levels of BECLIN-1 protein (0.0723 pg/μg of total protein; 78% of the control) and MAP1LC3B protein (0.47 pg/μg of total protein; 81% of the control) in neocortical cells, but no effect on the levels of NUP62 and MAP1LC3A proteins was noticed (Fig. 2b, c). Western blot analysis indicated that prenatal exposure to BP-3 decreased the relative BECLIN1 and ATG7 protein levels by 26% and 19%, respectively. No changes were observed with respect to the relative protein levels of NUP62, MAP1LC3A, and MAP1LC3B (Fig. 2d, e).
In our study, BP-3 evoked a decrease in the mRNA expression of all the abovementioned autophagy-related factors that was paralleled by a decrease in the protein levels of BECLIN-1, ATG7, and MAP1LC3B. The protein levels of the other factors remained unchanged. ELISA revealed a BP-3-evoked attenuation of MAP1LC3B, but this was not supported by western blot analysis.
Effects of Prenatal Exposure to BP-3 on the mRNA and Protein Expression Levels of Retinoid X Receptors (RXRα, RXRβ, RXRγ) and PPARγ
Our data demonstrated that prenatal exposure to BP-3 altered the mRNA levels of Rxrα, Rxrβ, Rxrγ, and Pparγ. The exposure decreased levels of Rxrα and Rxrβ mRNA by 33% and 36%, respectively, but it increased levels of Rxrγ and Pparγ mRNA by 43% and 62%, respectively (Fig. 3a). ELISA kits revealed that prenatal treatment with BP-3 decreased levels of RXRα and RXRβ protein by 19% and 34% (equal to 0.62 and 1.06 pg/μg of total protein, respectively), whereas the RXRγ and PPARγ protein levels increased to 137% and 144% of the control value (equal to 1.04 and 17 pg/μg of total protein, respectively) (Fig. 3b, c). Western blot analysis determined a decrease in the relative protein levels of RXRα and RXRβ of 14-34% and an increase in RXRγ and PPARγ protein levels of 76% and 51%, respectively (Fig. 3d, e).
Effects of Prenatal Exposure to BP-3 on HAT, HDAC, and Sirtuin Activities as well as Global Protein Sumoylation
Prenatal exposure to BP-3 reduced HDAC and sirtuin activities to 14 μM (i.e., 82% of the control) and 560 pM (i.e., 86% of the control), respectively (Fig. 5b, c). In mouse embryonic neurons prenatally treated with BP-3, a decrease in global protein sumoylation to 41 ng/μl (i.e., 78% of the control value) was observed (Fig. 5d). BP-3 administered during pregnancy did not affect HAT activity in the neocortical neurons of embryonic offspring (Fig. 5a).
Measurement of BP-3 Permeability Value Using the BBB Kit™
The evaluation of BP-3 (25 μM) permeability with the use of the BBB Kit™ (RBT-24) revealed that BP-3 is able to cross the BBB. The permeability assay, which calculates a P app value (× 10 −6 cm/s) for tested compounds, showed that BP-3 has a mean value of 10 (Table 1). This value gives BP-3 a good permeability coefficient, since compounds with P app < 2, 2-10, 10-20, and > 20 can be classified as having a very low, low, good, and very good permeability capacity, respectively [31].
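The classification thresholds quoted above translate directly into a small helper, sketched below for illustration.

```python
def classify_permeability(p_app_e6):
    """Classify P_app (in units of 10^-6 cm/s) using the thresholds quoted above."""
    if p_app_e6 < 2:
        return "very low"
    if p_app_e6 < 10:
        return "low"
    if p_app_e6 <= 20:
        return "good"
    return "very good"

print(classify_permeability(10))   # BP-3 mean value from Table 1 -> "good"
```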
Discussion
Our research presented here revealed for the first time that prenatal exposure to BP-3 impairs autophagy, disrupts the levels of retinoid X and peroxisome proliferator-activated receptors, alters epigenetic status (i.e., attenuates HDAC and sirtuin activities), inhibits post-translational modifications in terms of global sumoylation, and dysregulates expression of neurogenesis- and neurotransmitter-related genes and specific miRNAs involved in developmental and degenerative pathologies of the nervous system. Considering current population studies, there is no doubt that BP-3 easily reaches the fetus and affects its development. Indeed, based on our research, BP-3 is able to cross the BBB (with a good permeability factor), which may directly influence the developing brain, conditioning it for cerebral damage. Our previous study demonstrated that prenatal exposure to BP-3 causes severe apoptosis and neurotoxicity, evokes global DNA hypomethylation, alters the methylation status of apoptosis-related and estrogen receptor genes, and disrupts estrogen receptor expression [25]. Taking these data into account, we strongly suggest that BP-3 can significantly affect neural development, which may be the fetal basis of the adult onset of nervous system disease.

Fig. 4 Prenatal BP-3 exposure modified the expression of miRNAs engaged in neuronal development or in the progression of neurological diseases in mouse embryonic neurons. Gene expression patterns of miRNAs showed that the expression of 36 genes was significantly different between the control and BP-3-treated groups. Among the altered miRNAs, 23 genes were upregulated (red color), and 13 genes were downregulated (green color) in the BP-3-treated samples compared to the controls (a). Each bar represents the mean of two independent experiments ± SE. The qPCR technique was employed to validate the expression levels of the most dysregulated miRNAs from the microarray assay (b). Each bar represents the mean of three independent experiments, and the number of experimental replicates ranged from 2 to 3 ± SE. **p < 0.01 and ***p < 0.001 versus control cultures
In the present study, administration of an environmentally relevant dose of BP-3 (50 mg/kg) to pregnant mice evoked significant autophagy inhibition in neocortical cells from their embryonic offspring. The impairment of autophagic processes was confirmed by a decrease in autophagosome formation and a significant downregulation of 22 autophagy-related genes measured by microarray analysis and validated by qPCR. Furthermore, autophagy inhibition was detected as decreased levels of autophagy-involved proteins, i.e., BECLIN-1, ATG7, and MAP1LC3B. This is in line with our previous study, in which 25 μM BP-3 in vitro was able to reduce the autophagy process in neuronal cells via downregulating specific genes as well as reducing the MAP1LC3 ratio and diminishing autophagosome formation [29]. BECLIN-1 plays a critical role in autophagy induction, and lowered BECLIN-1 expression has been associated with significant dysfunction in the nervous system in hens [32,33]. Taking into account that prenatal exposure to BP-3 caused a substantial reduction of BECLIN-1, we suggest that BP-3 may increase the risk of schizophrenia, since in the brains of schizophrenia patients autophagy is impaired and, in particular, the BECLIN-1 expression level is decreased by 40% [34][35][36]. Weakened autophagy has also been postulated to be involved in the origins of several neurodegenerative diseases, such as AD, PD, HD, amyotrophic lateral sclerosis (ALS), and multiple sclerosis (MS), being responsible for β-amyloid/tau, α-synuclein, or mHtt clearance [37]. In our study, in addition to inhibition of autophagy, prenatal exposure to BP-3 caused a decrease in the mRNA and protein expression levels of RXRα and RXRβ, whereas RXRγ and PPARγ showed an increased expression pattern. Our current data on the prenatal exposure of mouse brains to BP-3 confirm our previous observation based on in vitro treatment of mouse neurons with 25 μM BP-3 [29,30]. Previously, we showed the importance of RXR signaling in the propagation of dichlorodiphenyldichloroethylene (DDE) and nonylphenol apoptotic and neurotoxic effects [28,38]. RXRα in particular has been classified as a heterodimerization partner of other nuclear receptors. Thus, a deficiency of RXRα and RXRβ, as occurred in the current study, can have far-reaching consequences regarding the number of RXR-dependent dimerization partners (such as nuclear receptor-related 1 protein (Nurr1), nerve growth factor IB (Nur77 or NR4A1), retinoic acid receptor (RAR), and PPARs) involved in the coordination of neurogenesis, neuronal cell differentiation, and lipid signaling [39][40][41][42].

Fig. 6 Prenatal BP-3 exposure modified the expression of neurogenesis-related and neurotransmitter receptor genes in mouse embryonic neurons. Gene expression patterns of neurogenesis demonstrated 24 genes that were significantly differentially expressed between the control and BP-3-treated groups. Among these genes, 22 were upregulated (red color), and 2 genes were downregulated (green color) in the BP-3-treated samples compared to the control (a). Neurotransmitter receptor gene expression displayed 23 genes significantly differentially expressed between the control and BP-3-treated groups; 16 genes were upregulated (red color), and 7 genes were downregulated (green color) in the BP-3-treated samples compared to the controls (b)
A decrease in the level of RXRβ has already been correlated with schizophrenia, and an RXR agonist, bexarotene, is currently in phase III clinical trials for the reduction of positive symptoms of schizophrenia [22,[43][44][45]. The RXRγ signaling pathway has been implicated in the modulation of despair behaviors and working memory, controlling affective behaviors by regulation of dopaminergic signaling and accelerating CNS remyelination [20,46,47]. Furthermore, PPARγ has been demonstrated to be involved in cerebral development and peripheral nervous system myelination [48,49]. In addition, activation of RXRs and/or PPARγ by receptor agonists has been associated with neuroprotection in several diseases such as AD, PD, ALS, MS, and stroke [50][51][52][53][54][55][56][57]. Although, in our study, BP-3 stimulated PPARγ expression, it inhibited expression of the PPARγ heterodimerization partner RXRα, which suggests an impairment of PPARγ neuroprotective capacity in response to BP-3. Previously, we showed that prenatally administered BP-3 (50 mg/kg) induced apoptosis and caused neurotoxicity that was accompanied by impaired ESR1/ESR2 expression, enhanced GPER1, and altered methylation status in mouse neuronal cells [25]. Taking into account our previous and present data, we hypothesize that prenatal exposure to BP-3 may be linked to developmental abnormalities and the etiology of neural degeneration.
In the present study, this hypothesis has been partially confirmed by the microarray analyses of neurogenesis-and neurotransmitter-related genes. These involve changed expression of brain-derived neurotrophic factor (BDNF), epidermal growth factor (EGF), glial cell-derived neurotrophic factor (GDNF), Notch, myocyte-specific enhancer factor 2C (MEF2C), and apoE as well as altered levels of adrenergic, cholinergic (e.g., cholinergic receptor nicotinic alpha 4 subunit (CHRNA4)), dopaminergic, GABA-ergic, glutamatergic (e.g., glutamate ionotropic receptor NMDA type subunit 2A (GRIN2A)), and serotoninergic receptors. In this study, we demonstrated that prenatal exposure to BP-3 caused substantial increase in Mef2c, Grin2a, and Chrna4 which corresponds to upregulation of these genes in brains of schizophrenia patients as detected in post-mortem [58,59]. In the present study, during the procedure of brain tissue isolation from embryos prenatally exposed to BP-3, oil droplets on the surface of isolated brain structures and in the isolation buffer were noticed. These droplets may be due to inappropriate metabolism resulting from abnormal expression of nuclear receptors including RXRs and PPARγ. Lipid metabolism deficiency resulting from the downregulation of Rxrs and Ppar genes has been noticed in mouse models of the prodromal state of schizophrenia [60]. Interestingly, a similar effect to this observed in our present study has been seen in zebrafish embryos exposed to benzophenone-2 (BP-2), i.e., lipid droplets were accumulated in the yolk region [61].
The abovementioned results demonstrating that prenatal BP-3 administration impaired autophagy and altered RXRs and PPARγ expression levels could be at least partially related to disturbed epigenetic and post-translational modifications as well as miRNA expression. We found that embryonic offspring of dams treated with BP-3 during pregnancy exhibited decreased HDAC and sirtuin activities and a diminished level of sumoylated proteins. The diminished HDAC and sirtuin activities could be responsible for the increased expression of RXRγ and PPARγ. Moreover, inhibition of sirtuin activity during neurodevelopment may result in inappropriate axonal differentiation, dendritic arborization, and synapse formation as well as abnormal memory formation by modulating synaptic plasticity [62]. Furthermore, the decreased level of sumoylated proteins in embryos prenatally exposed to BP-3 may be the reason for impaired autophagy, since SUMOs are modulators of chaperone-mediated autophagy (CMA) and macroautophagy [63].
Our study demonstrated that neocortical neurons derived from BP-3-exposed embryos differentially expressed 36 miRNAs related to neuronal development or the progression of neurological diseases. miRNAs are small noncoding RNAs that are mainly engaged in post-transcriptional mRNA regulation, and the expression levels of certain miRNAs have been postulated as biomarkers of neurological disorders [64]. In our current study, the biggest miRNA expression differences (at least 2-fold change) were the downregulation of miR-19b, miR-33, and miR-509 and the upregulation of miR-489. miR-19b has been found to participate in neural lineage differentiation of embryonic stem cells, and a reduction in miR-19b expression has recently been observed in cerebrospinal fluid in AD and PD [65][66][67][68][69]. miR-33 is known to inhibit cholesterol efflux and control apoE lipidation and β-amyloid metabolism as well as stimulate macroautophagy [70][71][72]. In our study, the impaired autophagy and the oil droplets observed during isolation of brain structures could be due to downregulation of miR-33, which could be a prerequisite of AD. Additionally, abnormal miR-33 and miR-509 patterns have been connected to AD, major depression, psychosis, and anxiety disorders [73][74][75].
Conclusions
Our study revealed that prenatal exposure to BP-3 used in environmentally relevant doses impaired autophagy in terms of BECLIN-1, MAP1LC3B, autophagosome formation, and autophagy-related factors, disrupted the levels of RXRs and PPARγ, altered epigenetic status (i.e., attenuated HDAC and sirtuin activities), inhibited post-translational modifications in terms of global sumoylation, and dysregulated expression of neurogenesis- and neurotransmitter-related genes as well as miRNAs involved in pathologies of the nervous system. Our study also showed that BP-3 has good permeability through the BBB. Taking these data into account, we strongly suggest that BP-3 can significantly affect neural development, which may be the fetal basis of the adult onset of nervous system diseases, particularly schizophrenia and AD-like neurodegeneration. Notably, a recent paper by Philippat et al. demonstrated alterations in the behavior of male infants in response to prenatal BP-3 exposure [80].
In our study, an involvement of prenatal exposure to BP-3 in etiology of schizophrenia is supported by impaired autophagy including lowered expression of BECLIN-1, downregulated levels of RXRα and RXRβ, elevated expression levels of neurogenesis-related factor Mef2c, and neurotransmitter receptors Grin2a and Chrna4, as well as by dysregulation of 24 miRNAs, particularly upregulation of miR-489. Less correlation is observed between the effects of prenatal exposure to BP-3 and the AD-related changes. In our study, the link between prenatal exposure to BP-3 and AD is evidenced by impaired autophagy and expression levels of RXRα and RXRβ, upregulation of apoE, and dysregulation of 26 miRNAs, mainly miR-19b and miR-33. | 2018-11-07T15:15:35.990Z | 2018-11-06T00:00:00.000 | {
"year": 2018,
"sha1": "270359540f59b238475e9fe066bd013024becd55",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12035-018-1401-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e75e630fe809efda839958669c7aed7a15ae955f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
4647155 | pes2o/s2orc | v3-fos-license | ZVZCS PWM DC-DC CONVERTER WITH CONTROLLED SECONDARY RECTIFIER FOR ARC WELDING
A zero-voltage, zero-current switching (ZVZCS) PWM DC-DC power converter with secondary active rectifier tested as a dc current source for arc welding is described in this paper. The soft switching DC-DC converter consists of high frequency inverter, high frequency power transformer and controlled output rectifier with new secondary energy recovery turn-off snubber. Circulating current in the converter is reduced by using active rectifier and rms value of the current in secondary switches is decreased by utilizing a novel control algorithm. The experimental results of a 4.5 kW DC-DC converter working at switching frequency of 100 kHz are presented.
INTRODUCTION
Voltage-current characteristics of an arc welder generally depend on the type of welding. The typical rated voltage is about 50-80 V, and the rated current falls within the range of 60 A up to 1000 A.

Modern welders are required to have many features like reliability, safety, lightness, robustness, flexibility of operation, high efficiency, wide current control range, good power factor, electrically isolated output, fast response, fault tolerance, good price, adaptation to various operating conditions and so on [1] - [2].

In the quest for smaller size and lighter weight of the power source it is necessary to operate the inverter at as high a switching frequency as possible. This high frequency allows the size and weight of the transformer to be reduced considerably, and it makes the filtering at the dc output of the DC-DC converter easier and cheaper. Moreover, it allows using a physically small inductor to reduce the output current ripple to very low values. However, higher switching frequency results in increased switching losses in power semiconductor devices at turn-on and turn-off.

All these requirements can be easily met by welders consisting of a high frequency transformer fed by a high-frequency soft-switching converter.
There are various types of dc-dc converters for arc welders. All of the converters have certain pros and cons. Resonant converters working below or above the resonance frequency are often used for construction of electronic welders. Utilizing resonant components, however, introduces additional circulating energy in the converter circuits. This circulating energy increases the voltage or current stress of the semiconductor devices, and converter passive components often show increased conduction losses [3] - [5]. Moreover, the switching frequency range is often limited so that the converters do not lose soft switching. Almost the same holds for other types of resonant converters, e.g. quasi-resonant converters [6], [7], multiresonant converters [8] - [9], and so on. Moreover, the frequency modulation mostly used for controlling these converters is disadvantageous from the point of view of EMI suppression.

The other possibility is to use PWM converters. The circulating energy is minimal compared with the resonant converters, resulting in minimum conduction losses. These converters are controlled either by the traditional PWM or by the phase-shifted PWM method at constant switching frequency. The semiconductor devices in conventional PWM power converters suffer from high switching losses. However, by introducing auxiliary circuits and proper control modes, zero-voltage and/or zero-current switching can be achieved [10] - [12]. Several types of more or less complex dissipative, non-dissipative and energy-recovery snubbers, clamps and auxiliary circuits were developed to reduce the switching losses [10] - [12] and circulating currents [10] - [12].

Using an active output rectifier on the secondary side of the converter is another very effective way to decrease circulating currents in PWM converters and, at the same time, to achieve a reduction of switching losses [13] - [19].

A ZVZCS PWM DC-DC high power converter with an active rectifier on the secondary side used as a current source is presented in this paper. The paper is linked to article [19], where the principle of the converter working as a voltage source is explained and basic equations for rated load are derived. The application and control features of the converter operating as a current source are presented here. Moreover, the converter analysis, presented in [19] for rated load, is extended also to light load in this paper. This is important for converters operating over the full load range - from no-load to short circuit, e.g. converters for arc welding.
Power circuits
Power circuit of the converter is shown in Fig. 1.The input voltage is converted by a full-bridge high frequency inverter (IGBT transistors T 1 -T 4 and freewheeling diodes D 1 -D 4 ) to the high frequency alternating voltage across primary winding of the high frequency step-down transformer T R .The secondary center-tapped winding of this transformer is connected through an active rectifier between the welding rod and the material to be welded.The controlled rectifier consists of a series connection of MOSFET transistors T 5 , T 6 and fast recovery diodes D 5 , D 6 .Behaviour of this type converter is the result of a new way of controlling (Fig. 2, Fig. 3).Full-bridge inverter T 1 -T 4 is controlled with constant switching frequency and 50% duty cycle.Value of the output current is controlled via pulse-width modulation of the secondary switches T 5 , T 6 .
The commutation of the freewheeling diodes to the transistors in the high-frequency inverter is soft, without large reverse-recovery current spikes, so the dynamic properties of the freewheeling diodes are not critical. The inverter transistors turn on under zero voltage and zero current, so the turn-on losses are negligible. The turn-off losses of the primary transistors are negligible as well, since they turn off only the small magnetizing current of the power transformer.
To reduce the turn-off losses of the secondary switches T5, T6 and to recover the leakage energy of the power transformer, a new turn-off snubber was designed [17], consisting of snubber capacitors C_C5, C_C6, snubber inductors L_S5, L_S6 and snubber diodes. From the point of view of efficiency, non-dissipative snubbers are the most promising, and therefore this option is used in the suggested solution (see Fig. 1).
The leakage inductance of the transformer acts as a turn-on snubber for the secondary switches; therefore, zero-current turn-on is achieved.
Snubber capacitors C_C5 and C_C6 reduce the rate of rise of the drain-source voltage of the secondary transistors T5 and T6, and thus zero-voltage turn-off is ensured [18], [19].
Besides reducing the turn-off losses of the secondary switches, the snubber capacitors C_C5, C_C6 also accumulate the leakage inductance energy of the power transformer, which is subsequently transferred through the snubber inductors L_S5, L_S6 and diodes D_S5, D_S6 to the load.
Operation at light load
As mentioned above, the operating principle of this converter at rated load (Fig. 2, left side) is described in detail in [19].
In this paper, the description of the proposed converter is focused on conditions at light load.
Operation of the proposed converter at light load is analyzed in this section. The basic light-load operation of the proposed soft-switching converter has four operating modes within each half cycle. Because of the leg symmetry, the transistors T1, T2 and T3, T4 work under the same operating conditions. Operation of the secondary switches is also symmetrical. It is assumed that all components and devices are ideal.
If the initial voltage of snubber capacitor C_C5 is smaller than double the rectified voltage, the snubber capacitor is discharged; the discharging time and the magnitude of the discharging current are derived from (1). The borderline between rated load and light load is determined by condition (3); light-load operation of the converter starts when condition (3) is met.
At t'31, before the secondary transistor T6 turns off, commutation occurs and the output freewheeling diode D_O starts to conduct the current. The rectified voltage u_d drops to zero, and therefore complete discharging of the snubber capacitor C_C5 is enabled.
At the same time, the leakage inductance energy of the transformer is absorbed by snubber capacitor C_C6. The snubber capacitor discharging lasts a quarter period of the resonance between snubber capacitor C_C5 and snubber inductance L_S5, i.e. until the snubber capacitor voltage reaches zero.
The capacitor discharging time can be derived from (9); it equals a quarter of the resonant period of the C_C5-L_S5 loop.
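As a rough numerical illustration of this quarter-period relation, the sketch below evaluates the discharge time and the peak of the resonant discharging current. The component values and the initial capacitor voltage are assumptions made only for illustration; they are not the values from Table 1.

```python
import math

# Assumed snubber component values and initial condition (illustrative only).
L_s5 = 2.2e-6        # snubber inductance L_S5 [H]
C_c5 = 47e-9         # snubber capacitance C_C5 [F]
U_c0 = 80.0          # initial voltage of C_C5 [V], e.g. roughly twice u_d

# Quarter period of the C_C5 / L_S5 resonance: the time needed for the
# capacitor voltage to swing from its initial value down to zero.
t_dch = (math.pi / 2.0) * math.sqrt(L_s5 * C_c5)

# Peak of the resonant discharging current, set by the characteristic
# impedance sqrt(L/C) of the snubber loop.
i_peak = U_c0 * math.sqrt(C_c5 / L_s5)

print(f"discharge time ~ {t_dch * 1e9:.0f} ns, peak current ~ {i_peak:.1f} A")
```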
Interval t'32-t'33:
At the beginning of this interval, the current of snubber capacitor C_C5 commutates to snubber diode D_C5. At the same time, the current of snubber inductor L_S5 decays through diode D_C5.
Since the rectified voltage is zero within this interval, the snubber inductor current is maintained constant. Simultaneously, the leakage inductance energy of the transformer is stored in the opposite snubber capacitor C_C6 during this interval.
Directions of the currents at the beginning of this interval are shown in Fig. 3c.
Interval t'33-t'34:
After the turn-on of inverter transistors T1, T2, the rectified voltage u_d increases to the value U_I/n and the energy of snubber inductor L_S5 is transferred to the load.
At the same time, energy is transferred from the primary side through transistors T1, T2, T5 and diode D5 to the load.
Simultaneously, snubber capacitor C_C6 discharges in a resonant way to the load through the opposite secondary transistor T6 (Fig. 3d). The time dependency of the snubber inductor current i_Ls5 and the decay time t'_Ls5dch of this current are given by the corresponding expressions.
CONTROL CIRCUITS
The converter is controlled by a novel pulse-width modulation of the secondary transistors. The pulse length is varied from t_ON(MIN) to t_ON(MAX), i.e. approximately from T/2 to T if the dead time and the snubber capacitor discharging interval are disregarded, as shown in Fig. 4.
The block diagram of the discrete PI regulator with limiter is shown in Fig. 6.
The output of the regulator in step k follows from the discrete PI recurrence with limiting. By varying the pulse width of the secondary transistors, the conduction time of the transistors is increased or decreased accordingly, and thus the output current is regulated. The control circuits are completely isolated from the power circuits. The arrangement of the basic control board is displayed in Fig. 7. The turn-off snubber described in the previous section was implemented using the components outlined in Table 1.
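Since the regulator equation itself is not reproduced here, the sketch below shows the textbook incremental form of a discrete PI regulator with an output limiter; the gains, sampling period and pulse-width limits are illustrative assumptions, not values from the article.

```python
class DiscretePI:
    """Incremental (velocity-form) PI regulator with an output limiter."""

    def __init__(self, kp, ki, ts, u_min, u_max):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.u_prev = u_min
        self.e_prev = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        # Only the change of the output is computed in each step, so the
        # limiter also acts as a simple anti-windup measure.
        u = self.u_prev + self.kp * (e - self.e_prev) + self.ki * self.ts * e
        u = max(self.u_min, min(self.u_max, u))   # limiter
        self.u_prev, self.e_prev = u, e
        return u

# Example: the regulator output is interpreted as the pulse width t_ON of the
# secondary switches, limited to an assumed range of roughly T/2 .. T.
reg_i = DiscretePI(kp=1e-8, ki=2e-3, ts=5e-6, u_min=2.5e-6, u_max=5.0e-6)
t_on = reg_i.step(setpoint=90.0, measurement=72.0)   # output current loop [A]
```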
In the inverter, four ultra-fast IGBTs were used. A special high-frequency planar transformer with a very low leakage inductance was designed and made using EI ferrite cores. The transformer parameters are shown in Table 2. The arrangement of the high-frequency power planar transformer is shown in Fig. 9.
The foil windings of the power planar transformer are interleaved, and the primary is sandwiched closely between a split secondary winding in order to achieve tight coupling and thus very low leakage inductance. The properties of this laboratory model of the converter working as a current source were verified for output currents of up to 120 A.
The voltage and current of primary switch T1 are shown in Fig. 10. Transistor T1 turns on under zero voltage and zero current. Only a small magnetizing current is turned off, and thus nearly zero-current turn-off is achieved. Because of the inverter symmetry, the waveforms of all primary switches T1-T4 are identical.
The drain-source voltage and drain current of the secondary MOSFET transistor T5 (T6) at turn-on and turn-off are shown in Fig. 11.
The rate of rise of the drain-source voltage u_T5 at turn-off is limited by snubber capacitor C_C5, and thus zero-voltage turn-off of the secondary transistor is achieved.
At the turn-on of transistor T5, capacitor C_C5 is discharged through the transistor and snubber inductance L_S5 to the load in a resonant way. Therefore, the rate of rise of the discharging current is limited and zero-current turn-on is reached. Some waveforms at light load are shown in Fig. 12. At low load current, snubber capacitor C_C5 is discharged in two intervals, as seen in Fig. 12; this has no influence on the soft switching or on the output parameters of the converter. At no-load, the pulse width is pushed by the control circuits to its maximum value and the no-load output voltage reaches about 70 V. When the electrode touches the work piece, the output current rises rapidly to its predetermined value, while the load voltage drops to zero. During the subsequent welding process, the average arc voltage is approximately 24 V at a welding current of about 90 A. When the distance between the electrode and the work piece increases, the arc voltage rises and consequently the output current falls to zero. Afterwards, the output voltage returns to the no-load value.
The welder has very good dynamic properties. The response to current and voltage fluctuations is very fast owing to the high frequency of 200 kHz at the rectifier output (voltage u_d in Fig. 1). The efficiency was measured under conditions similar to those of the welding process, at an output voltage of 25 V (Fig. 15), and is approximately 90%. A measurement was also performed at 45 V output, only for comparison with other ZVZCS systems; the efficiency is about 94% under such conditions. This is quite a high value for a converter with low output voltage and high output current.
CONCLUSION
Welding applications call for dc current sources with small weight and size, good efficiency and an acceptable price. One option is to build the welder around a high-frequency soft-switching DC-DC converter, as discussed in this paper.
The converter presented in this paper behaves very well over a wide range of output current.
Soft switching is achieved for all power switches of the converter. In particular, the primary IGBT transistors operate under almost ideal switching conditions: zero-voltage and zero-current turn-on and zero-current turn-off.
A certain disadvantage of this topology is that the additional switches in series with the output rectifier add supplemental conduction losses in the controlled rectifier. In spite of this, the controlled rectifier performs an important function that would be hard to achieve otherwise. As a result of the active rectifier, the circulating currents in the primary switches and the power transformer are totally suppressed, and thus the conduction losses in these devices are considerably reduced. Moreover, by using the active rectifier, the turn-off losses of the primary switches are also substantially decreased.
In addition, main parasitics of the power transformer are integrated into the converter topology.
The non-dissipative snubber consists of passive components only; it is very small, and so is the additional cost.
Experimental tests on the laboratory model demonstrated the feasibility of the proposed converter operating as a current source, e.g. for arc welding applications.
Fig. 3a Operation in interval t'0-t'1. The current and voltage of capacitor C_C5 during the discharging process are calculated from the corresponding equations; the snubber capacitor keeps discharging until the capacitor voltage u_Cc5 (2) reaches its minimum.
Fig. 3c Operation in interval t'32-t'33. The value of the inductor current i_Ls5 is equal to the amplitude of the snubber capacitor C_C5 discharging current.
Fig. 4 Control pulses for the converter. A digital signal controller TMS320F28335 was used for controlling the converter. The control circuits have been designed to attain operating conditions similar to those in a conventional electric arc welder, and they include several functions needed to ensure correct welder behaviour under any operating conditions. Two discrete PI regulators R_I and R_U control the output voltage U_O and the smoothing inductor current I_LO, and thus also the output current of the welder I_O. The PI regulators are connected in parallel (Fig. 5). The control logic selects the regulator with the smaller regulating variable (shorter pulse width).
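The selection between the two parallel regulators can be sketched as follows; the set points, gains and pulse-width limits are assumptions for illustration, not the values used in the real controller.

```python
# Assumed pulse-width range of the secondary switches (roughly T/2 .. T).
T_ON_MIN, T_ON_MAX = 2.5e-6, 5.0e-6

def pi_step(state, error, kp, ki, ts=5e-6):
    """One incremental PI step; state = (previous output, previous error)."""
    u_prev, e_prev = state
    u = u_prev + kp * (error - e_prev) + ki * ts * error
    u = max(T_ON_MIN, min(T_ON_MAX, u))
    return u, (u, error)

state_i = (T_ON_MIN, 0.0)    # current regulator R_I
state_u = (T_ON_MIN, 0.0)    # voltage regulator R_U

def select_pulse_width(i_meas, u_meas, i_ref=90.0, u_ref=70.0):
    """Run both regulators in parallel and apply the shorter pulse width."""
    global state_i, state_u
    t_on_i, state_i = pi_step(state_i, i_ref - i_meas, kp=1e-8, ki=2e-3)
    t_on_u, state_u = pi_step(state_u, u_ref - u_meas, kp=1e-8, ki=5e-4)
    return min(t_on_i, t_on_u)
```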
Fig. 5 Block diagram of the control structure
Fig. 7 Basic control board with TMDSCNCD28335 card
Fig. 8 Laboratory model of the converter
Fig. 9 Power planar transformer arrangement. In the output rectifier, ultrafast soft-recovery diodes were used. The inductance of the smoothing choke for smoothing the output rectified current is only L_S = 5 uH, as a result of the 200 kHz frequency of the output rectified voltage. In practical applications this value of smoothing inductance can be partially achieved by utilizing the parasitic inductances of the connecting wires. The measurements were made at the nominal input voltage of U_I = 325 V.
Fig. 10 Collector-emitter voltage u_T1 and collector current i_CT1 of primary transistor T1
Fig. 13 Output voltage u_O and output current i_O during welding at an output current of 90 A
Fig. 14 Measured output characteristics of the welder
Fig. 15 Measured efficiency of the converter
Table 2 Planar transformer parameters | 2018-04-07T07:32:54.426Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "ad94a868dcf3c95644a4c9a55e92dedf957a8883",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.15546/aeei-2016-0013",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ad94a868dcf3c95644a4c9a55e92dedf957a8883",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
237781848 | pes2o/s2orc | v3-fos-license | Optimization of rotate mode at constant change of departure in the level-luffing crane with geared sector
The results of optimization of the rotation mode of the level-luffing boom system of the crane at the launch site, with the steady-state mode of departure change. The object of the study is a boom system with a sector drive of the mechanism of change of departure. The mechanism of rotation consists of an electric motor, a planetary mechanism and an open gear. Variation calculus methods were used to optimize the mode of rotation of the boom system. In this case, a variational problem is formed, which includes the equation of motion of the boom system when turning and changing the departure, the optimization criterion and boundary conditions of motion. The optimization criterion has the form of an integral functional that reflects the root mean square value of the driving torque of the drive mechanism of rotation during start-up. The study was carried out at the starting point of the electric motor of the turning mechanism from the state of rest to reach the nominal speed and at a steady speed of rotation of the electric motor of the mechanism of change of departure. The solution of the problem is presented in the form of a polynomial with two terms, the first of which provides boundary conditions of motion, and the second minimizes the criterion of optimization through unknown coefficients. To do this, use a software package. Graphs of change of kinematic characteristics of cargo and boom system at work of mechanisms of turn and change of departure, and also the driving moment in the course of start of the mechanism of turn which correspond to an optimum mode of movement are constructed. The resulting mode of movement allowed to eliminate the oscillations of the load on the suspension. Based on research, recommendations for the use of the obtained optimal start-up mode have been developed.
Introduction
The level-luffing boom system is the basis of many boom-system designs in modern cranes. Such a boom system was created on the basis of the hinged four-link Chebyshev mechanism. These level-luffing boom systems are most often used in gantry cranes to perform unloading and reloading operations in ports [1].
It is well known that the delay of ships in ports is an undesirable phenomenon, as it leads to significant financial costs for both the carrier and the customer. Therefore, reducing the duration of loading and unloading of transport vessels is an urgent task. This issue is especially acute when unloading bulk cargo. This is due to the fact that in parallel with the unloading of the ship, these cargoes are loaded into railway cars or trucks.
Two schemes of unloading bulk cargo from ships and loading it into wagons are most often used: ship, crane grab, storage yard, crane grab, wagon (truck); or ship, crane grab, wagon (truck). Each of the described schemes has its disadvantages and advantages.
In the case of using the first scheme, the speed of unloading the vessel itself increases. However, this significantly increases the total duration of the unloading-loading cycle. In addition, this scheme cannot be used in small ports due to the lack of space for intermediate storage of bulk cargo. In the case of using the second unloading-loading scheme, the unloading time of the vessel increases, but the total duration of work with the cargo decreases [2].
In these cases, there is a need to combine several movements of the crane at the same time. Most often, the combination is observed during the operation of the mechanisms of changing the departure of the boom system and the crane rotation.
Horizontal movement of cargo by means of the mechanism of change of departure is a separate working movement of cranes with level-luffing boom system. This working movement can be performed independently or by combining with other working movements, depending on the technological needs during the operation of the crane.
Important problems at using of cranes during handling are the reduction of the duration of the working cycle of overloading, as well as increasing the maintenance cycle of the metal structure of the jib system and the crane as a whole. These tasks can be solved by minimizing the oscillations of the load on a flexible rope suspension.
The largest oscillations of the load on the flexible suspension are observed during the operation of the motor of the crane rotation mechanism in transient modes (start, braking) [1,2].
Oscillations of the load on a flexible suspension have a negative impact on such crane performance indicators as productivity, efficiency, reliability and maneuverability [1]. The magnitude of the deviation of the cargo rope from the vertical depends on the following factors: the weight of the load, the speed of rotation, the duration of the drive transients, the position of the center of mass of the load relative to the suspension point, wind loads, etc. [3,4]. Therefore, there is a need to optimize the mode of movement of the boom system during the operation of the rotation and departure-change mechanisms. In this case, as a rule, the operation of one mechanism is considered in the steady state of motion, and the other in a transient mode (starting or braking) [5].
Analysis of publications
Thorough studies of the kinematics and dynamics of such a boom system were conducted in the monograph [1]. In particular, it presents the results of studies of the movement of the boom system under different motion equations, corresponding to the minimization of the standard deviations of displacements, speeds, accelerations and jerks of the load and the end point of the trunk. Importantly, these studies considered moving the cargo from the minimum value of departure to the maximum. However, the process of starting the boom system when changing the departure of the cargo was not studied.
In the article [3] the modes of movement of the mechanisms of rotation of cranes are optimized. Graphical dependences of change of kinematic and force parameters during operation of the mechanism of rotation on transient modes of movement are constructed.
In [4] the models of possible cases of operational loading of the boom system are analyzed and constructed. The results on the operation of the mechanism of change of departure during different distribution of loads on the links of the boom system of the crane are given.
The authors of articles [5,6] describe the ways and means of optimal control of the electric drive of the mechanism of rotation of jib cranes. In this case, the operation of electric motors is considered both during transient modes and at steady state.
In [7] the problem of optimization of loads on the links of the boom system in order to reduce the power consumption of the drive motors of the mechanism of change of departure was considered. However, the above method incompletely reveals the change of inertial forces in the unstable sections of the crane boom system.
The analysis of literature sources on research topics showed that different approaches to improving the dynamic characteristics of boom systems are proposed. However, for the most part, two ways of improving the characteristics of cranes are proposed -changing the design parameters of boom systems of cranes and means of controlling the electric motors of the actuators of cranes. In this case, the overall goal is to improve the following indicators of crane efficiency: productivity, efficiency, reliability, maneuverability, ergonomics, etc. [8…11].
Purpose and research task statement
The purpose of this study is to develop a method for optimizing the start-up of the rotation mechanism of the level-luffing boom system of a crane, with the departure change in steady state, by reducing the existing loads.
Research results
A level-luffing boom system of a gantry crane with a toothed-sector drive of the load departure-change mechanism and a planetary drive of the rotation mechanism is shown in Fig. 1.
In constructing a dynamic model of the level-luffing boom system, the following assumptions are made: all parts of the system are considered rigid bodies, except for the load, which performs pendulum oscillations on a flexible suspension; when changing the departure, the load moves horizontally, because the cargo rope runs along the trunk and extension and does not change its own length during the departure change; the change in the departure of the boom system is carried out in a steady state, i.e., the angular velocity of the boom is constant; the deviation of the cargo rope from the vertical in the plane of the departure change is neglected, and only the deviation in the plane of rotation of the crane, along the tangent to the trajectory of the cargo, is taken into account; and the boom system is assumed to be completely balanced by a movable counterweight.
Consider the combined movement of two mechanisms to change the departure of the load and the rotation of the crane. The boom system is presented as a holonomic mechanical system with three degrees of freedom. The angular coordinates of the boom in the plane of change of departure α and the angular coordinates of the rotation of the boom φ and the load ψ in the horizontal plane are taken as generalized coordinates (Fig. 2).
A constraint is imposed on the angular velocity of the boom in the plane of the departure change, as a result of which the system moves with a constant velocity. Therefore, a system with three degrees of freedom is transformed into a system with two degrees of freedom, in which the generalized coordinates are phi and psi. The angular coordinate of the boom alpha varies according to the linear law alpha = alpha_0 + omega_0*t, where t is the time, alpha_0 is the initial position of the boom, and omega_0 is the angular velocity of its rotation in the plane of the departure change.
For such a dynamic model of motion of a level-luffing boom system, we compose the differential equations of motion using the Lagrange equations of the second kind, d/dt(dT/dq'_i) - dT/dq_i + dP/dq_i = Q_i for q_i in {phi, psi}, where T is the kinetic energy of the system, P is the potential energy of the system, and Q is the generalized component of the non-potential forces reduced to the coordinate phi. The kinetic energy of the boom system under the combined movement of the departure-change and rotation mechanisms is expressed through the following quantities: the masses of the jib, tieback and cargo; J_0, the moment of inertia of the drive elements of the departure-change mechanism reduced to the axis of rotation of the boom; J_P, the moment of inertia of the drive of the turning mechanism reduced to the axis of rotation of the crane; J_C, J_X, J_B, the moments of inertia of the main jib, jib and tieback about their own axes of rotation; L and R, the lengths of the main jib and the tieback; l and r, the lengths of the jib and counter-jib; f, the displacement of the axis of rotation of the crane relative to the lower axis of the boom hinge; a and its angle of inclination to the horizon, the length and inclination of the strut; z, the horizontal coordinate of the position of the center of mass of the load relative to the lower hinge of the boom; and the angular coordinates of rotation of the jib and tieback.
The potential energy of the fully balanced boom system is determined by the potential energy of the load, where g is the acceleration of free fall, H is the height of the load suspension relative to the lower hinge of the boom, and y is the vertical coordinate of the center of mass of the cargo. The non-potential component of the generalized force of the turning mechanism is determined by the driving moment of the rotation mechanism reduced to the axis of rotation of the crane, M, where M_P is the driving torque on the motor shaft of the crane rotation mechanism, u is the gear ratio of the drive of the turning mechanism, and eta is the efficiency of the drive of the turning mechanism.
Since the tieback has little effect on the dynamics of the boom system, its contribution is set to zero. We also assume that the axis of rotation of the crane coincides with the lower hinge of the boom, so f = 0.
After substituting expressions (2)-(4) into system (1), we obtain the system (5) of differential equations of the combined motion of the departure-change and rotation mechanisms. Consider the process of starting the rotation mechanism and determine its optimal mode under steady motion of the departure-change mechanism. As the criterion of the mode of movement of the turning mechanism under combined steady motion with the departure-change mechanism, we choose the root-mean-square value of the driving torque of the drive reduced to the axis of rotation of the crane, M_CK = sqrt((1/t_1) * integral from 0 to t_1 of M^2 dt), where t is the time and t_1 is the duration of the start-up process.
From the first equation of system (5) we express the driving moment of the rotation mechanism reduced to the axis of rotation of the crane. From the second equation of system (5) we express the coordinate of the main motion of the rotation mechanism, phi, through the function psi and its time derivatives. Differentiating the obtained expression (12) twice with respect to time yields (13) and (14). When determining the optimal mode of movement of the turning mechanism under the steady-state mode of the departure change, it is necessary to set the initial conditions of movement at t = 0, as well as the final conditions of the start, which ensure the absence of oscillations of the load during the steady movement of the turning mechanism [12], where omega_P is the steady value of the angular velocity of the crane rotation mechanism.
After substituting expressions (12)-(14) into equation (11), it is seen that the subintegral expression M depends only on the unknown function psi and its time derivatives up to the fourth order. Therefore, the functional M_CK has, in fact, a single unknown function psi(t) as its argument.
We rewrite the boundary conditions (15) and (16) using only the function psi and its time derivatives; to do this, we use relations (12)-(14) and obtain conditions (17). Therefore, to optimize the mode of movement of the turning mechanism under a steady-state change of the load departure, an optimization problem is formulated. It includes criterion (10) in the form of an integral functional with the subintegral function (11), taking into account expressions (6)-(9) and (12)-(14), and the boundary conditions of motion during start-up (17).
To approximate the solution of the nonlinear variational problem, we represent the desired function (the optimal start-up mode) as a polynomial divided into two terms, psi = psi_0 + psi_1. The first term, psi_0, is a selected polynomial with an explicit form that satisfies the boundary conditions (17); the second term, psi_1, includes free coefficients and satisfies zero boundary conditions similar to (17). We choose psi_0 as a polynomial of degree 7 to ensure conditions (17); such a polynomial of the form (20) with coefficients (21) satisfies the boundary conditions (17). The polynomial psi_1 is written in the form (22), with a multiplier that enforces the zero boundary conditions and free coefficients C_1, ..., C_n. Therefore, the approximate solution of the variational problem (10), taking into account (6)-(14) and the boundary conditions (17), is reduced to finding the minimum of a function of many variables, for which one of the approximate methods can be used [13,14]. In this work, an applied software package was used to solve this problem, in which methods based on the simplex method were used to find the minimum of the function of many variables.
To determine the derivatives of psi, approximate formulas of numerical differentiation were used, namely symmetric difference derivatives of the first and second orders, and the integral in the criterion was approximated by the trapezoidal formula. The maximum exponent selected in (22) is n = 5. For the required functions phi and psi, their derivatives, and the driving moment M (11), calculations were performed; the results are shown in Figs. 3-6. These calculations were performed for a crane boom system with the given parameters. Fig. 3 shows graphs of the changes in the angular coordinates of the rotation of the boom system and the load. These graphs show a smooth change of the angular coordinates, but there is a deviation between the coordinates of the boom system and the load, which is eliminated by the end of the start-up process, so that on entering the steady state the coordinates coincide. Fig. 4 shows the dependences of the angular velocities of the boom system and the load when turning the crane. From these graphs it is seen that the speed of the load gradually increases during the start-up process, while some fluctuations occur in the speed of the boom system. At the end of the start-up process, the angular velocities of the boom system and the load coincide, as do their movements. This indicates that there will be no pendulum oscillations of the load on the flexible suspension in the region of steady movement of the turning mechanism. Fig. 5 shows the graphical dependences of the angular accelerations of the load and the boom system; the acceleration of the load increases smoothly and then decreases from its zero initial value to a small value at the end of the start. However, the acceleration of the boom system at the beginning of the movement increases rapidly to its maximum value and then changes with oscillations.
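The numerical procedure described above can be illustrated in code. The fragment below is only a sketch of the solution scheme, not the paper's implementation: the start-up duration, the steady speed, the forms of psi_0 and psi_1, and in particular the placeholder driving_moment() function are assumptions, since the actual reduced driving torque of Eq. (11) follows from the paper's dynamic model.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed parameters (illustrative only, not taken from the article).
t1 = 2.0                      # start-up duration [s]
omega_p = 0.1                 # steady angular velocity of rotation [rad/s]
t = np.linspace(0.0, t1, 401)
tau = t / t1
dt = t[1] - t[0]

def psi0(tau):
    # Smooth ramp from rest to the steady angular velocity omega_p
    # (an illustrative boundary-condition polynomial, not the paper's Eq. (20)).
    return omega_p * t1 * (2.5 * tau**4 - 3.0 * tau**5 + tau**6)

def psi1(tau, c):
    # Free part with coefficients C_1..C_n; the window factor keeps the
    # boundary values and low-order derivatives of psi0 intact at both ends.
    return (tau * (1.0 - tau))**4 * sum(ck * tau**k for k, ck in enumerate(c))

def driving_moment(psi):
    # Placeholder for the reduced driving torque M of Eq. (11); here it is
    # simply taken proportional to the second symmetric-difference derivative.
    return np.gradient(np.gradient(psi, dt), dt)

def rms_criterion(c):
    psi = psi0(tau) + psi1(tau, c)
    M = driving_moment(psi)
    return np.sqrt(np.trapz(M**2, t) / t1)   # trapezoidal rule

# Simplex (Nelder-Mead) search over the free coefficients, as in the paper.
res = minimize(rms_criterion, x0=np.zeros(5), method="Nelder-Mead")
print(res.x, res.fun)
```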
A similar situation is observed in the change of the dynamic component of the driving moment of the drive mechanism (Fig. 6). At the initial moment of the start, the driving moment of the drive of the turning mechanism rises sharply to its maximum value and subsequently decreases with some fluctuations. A sharp change at the beginning of the movement leads to oscillations in the system; to reduce them, it is necessary to ensure a smooth change of the driving moment. However, such a mode of movement increases the start-up time, which reduces the performance of the crane.
Conclusions
1. In this article, the optimization problem of the joint movement of the departure-change and rotation mechanisms of the boom system of a crane is formulated. The change of the load departure is carried out in a steady state at a constant angular velocity of the motor shaft, while the rotation is considered during start-up, when the motor shaft changes its angular velocity from zero to a fixed value.
2. The optimization problem includes a mathematical model of the joint movement of the departure-change and rotation mechanisms of the crane, the optimization criterion, which is the RMS value of the driving torque of the rotation mechanism during start-up, and the boundary conditions of movement that eliminate load oscillations on the flexible suspension.
3. The nonlinear optimization problem is solved by an approximate method, in which the solution is represented as a polynomial with unknown coefficients that are determined using an applied software package based on the simplex method.
4. As a result of solving the optimization problem, the graphical dependences of the kinematic characteristics of the boom system and the load, as well as of the driving moment of the drive of the turning mechanism during start-up, are constructed. The obtained optimal mode of crane rotation during start-up under the steady-state mode of the departure change allowed the load oscillations on the flexible suspension to be eliminated and the dynamic loads in the drive mechanism to be minimized.
5. Recommendations are given for the possible application of the obtained optimal mode of joint movement of the departure-change and rotation mechanisms of the jib system of the crane in practice under limited operating conditions.
OPTIMIZATION OF ROTATE MODE AT CONSTANT CHANGE OF DEPARTURE IN THE LEVEL-LUFFING CRANE WITH GEARED SECTOR
This paper presents the results of optimizing the rotation mode of the level-luffing boom system of a crane during the start-up section, with the departure-change mechanism operating in steady state. The object of the study is a boom system with a sector drive of the departure-change mechanism. The rotation mechanism consists of an electric motor, a planetary mechanism and an open gear. Methods of the calculus of variations were used to optimize the rotation mode of the boom system. A variational problem is formed that includes the equation of motion of the boom system during rotation and departure change, the optimization criterion and the boundary conditions of motion. The optimization criterion has the form of an integral functional that reflects the root-mean-square value of the driving torque of the rotation drive mechanism during start-up. The study covers the start-up of the electric motor of the turning mechanism from rest to the nominal speed, at a steady rotation speed of the electric motor of the departure-change mechanism. The solution of the problem is presented in the form of a polynomial with two terms, the first of which provides the boundary conditions of motion and the second of which minimizes the optimization criterion through unknown coefficients determined with a software package. Graphs of the kinematic characteristics of the cargo and boom system during the operation of the rotation and departure-change mechanisms, as well as of the driving moment during the start of the rotation mechanism, corresponding to the optimal mode of movement, are constructed. The resulting mode of movement eliminates the oscillations of the load on the suspension. Based on this research, recommendations for the use of the obtained optimal start-up mode have been developed.
Key words: turning mechanism, reach change mechanism, cargo swing, steady change of reach, integrated functionality, turn mode optimization. | 2021-08-19T19:51:19.452Z | 2021-05-24T00:00:00.000 | {
"year": 2021,
"sha1": "0f3bb87a6d0d7597db60dbf4d8eca5b3e35031ea",
"oa_license": "CCBY",
"oa_url": "http://omtc.knuba.edu.ua/article/download/235466/233899",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "960549586523b4930ad5f61c873b7220a98207c9",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
158468209 | pes2o/s2orc | v3-fos-license | Conservation Agriculture for Environmental Sustainability in A Semiarid Agroecological Zone under Climate Change Scenarios
Using the Mann-Kendall Test to analyze data from a survey of 400 farmers, this study compared the rate of adoption of conservation agriculture (CA) in two contrasting villages, Mnyakongo and Ugogoni, located in the Kongwa District, a semi-arid zone in central Tanzania. Results exhibited that the level of CA adoption was <10% of the total households. The trend of CA adoption was determined at coefficients of R2 = 0.95, 0.90, 0.68 and 0.57 for mulching, crop rotation, agroforestry and little tillage, respectively. Despite little tillage and crop rotation having high acreage under CA, the rate of mulching adoption was significantly higher than that of the others. Furthermore, there were significant correlations between CA adoption and crop yields or environmental sustainability (p < 0.05). Maize, sorghum and millet yields were significantly greater under CA (1.7 t ha-1) than no-CA (0.7 t ha-1). In particular, maize yields increased from 1.3 to 2.3 t ha-1 from 2000 to 2015 under CA when it was intercropped with legumes. The majority of farmers (>70%) asserted that CA had optimized their yields for both food and economic incentives. Thus, this study recommends the adoption of CA in semi-arid agro-ecological zones.
Introduction
The significance of soil or environmental conservation in limiting soil degradation has been advocated since 1903 [1]. The current situation of rapid population increase and global climate change has made such soil conservation all the more necessary [2]. For example, the USA government has invested millions of dollars yearly to support conservation-related projects [3,4]. To increase fruit production and environmental conservation in organic Australian vineyards, both mulching and compost are used as conservation agricultural practices [5]. This practice increases crop yields and environmental conservation in various areas of the country.
The increasing extremes of weather, especially of temperature and precipitation, have significantly impacted nutrient cycling and soil moisture in most of Sub-Saharan Africa [6,7]. These changes could intensify in the future, as various climate models predict further climate alterations. As a result, crop production systems and their productivity are expected to worsen, with possibly higher outbreaks of diseases, pests and pathogens.
To counter these real and potential consequences, we need to develop resilient agricultural systems through rational and affordable strategies that maintain the ecosystem functions.
Figure 1. Development of conservation agriculture over the last 20 years by world region, in total area (ha) and as an average percentage across the adopting countries of the respective region. African countries seem to adopt conservation agriculture slowly; with more emphasis, the continent could increase crop yields in quantity and quality while conserving its ecosystems (Source: adapted from FAO [17]).
Most Sub-Saharan countries recently started the adoption of CA. For example, Malawi, Zambia and Zimbabwe intensified their CA adoption in the 1990s [6,7,37]. Having been informed on the significance of CA, there is an immediate need to emphasize the optimal adoption and utilization of CA in the region (Sub-Sahara Africa) where most farmers are destitute and marginalized.
An Overview of Conservation Agriculture in Tanzania
Most Tanzanian communities adopted indigenous agricultural conservation practices, e.g., the Matengo pits (terraces) in Ruvuma, the Chagga garden (agroforestry) in Kilimanjaro and Ngitiri (enclosed pasture) in the Shinyanga regions, to address the challenges associated with environmental stress. These practices have shown promise for optimizing crop yields, increasing fodder, controlling soil erosion and conserving moisture and fertility [27,39]. However, CA has only been operated in a few regions, including Dodoma, Manyara, Arusha and the Southern Highlands of Tanzania [40]. The few CA practices in these regions involve agroforestry, crop cover and crop rotation and are mostly influenced by private or government organizations [41][42][43][44][45][46][47][48][49][50][51][52][53]. Thus, reliable policy could greatly affect CA adoption in the country. At present, CA receives little attention in the Tanzania Agricultural Policy [54], as the existing policy advocates the green revolution, which emphasizes conventional tillage and chemical fertilization.
At the household level, the adoption of CA or any other agricultural technology is reached after the adopter is satisfied with the decision. In most cases, the household adopts new agricultural technology (i.e., CA) whose net benefits are significantly greater than those of an existing technology. In this approach, prospective new technology adopters observe the utility gained by the early adopters before adopting that technology. This can be described in various models as follows.
For example, the Heckman model [43] specified the decision and extent of technology adoption as follows:
Ci = Zi*phi + eps_i (CA adoption) (1)
Yi = Xi*beta + mu_i (extent of CA adoption) (2)
where Ci is a dummy variable for CA adoption; Zi is a vector of determinants of CA adoption; Yi is the extent of CA adoption (proportion of land area under CA); Xi is a vector of determinants of the extent of CA adoption; phi and beta are vectors of parameters to be estimated; and eps_i and mu_i are error terms.
Based on the Heckman model [43], for the estimated parameters of Equation (2) to be efficient, there should be no correlation between the two error terms (eps_i and mu_i). Nevertheless, sample selection bias results in a non-zero correlation between the two errors. To correct for this selection bias, the Heckman model first estimates Equation (1) to obtain a sample selection indicator, i.e., the Inverse Mills Ratio (IMR), which is suitable for measuring the covariance between the two errors.
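A minimal sketch of this two-step procedure is given below, using synthetic data; the column names and explanatory variables are hypothetical survey variables, not those of the present study, and the probit/IMR/OLS sequence is the standard two-step estimator rather than the authors' exact implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical survey data for 400 households (synthetic, for illustration).
df = pd.DataFrame({
    "adopted":   rng.binomial(1, 0.3, 400),       # C_i: CA adoption (0/1)
    "extent":    rng.uniform(0.0, 1.0, 400),      # Y_i: share of land under CA
    "age":       rng.normal(45.0, 12.0, 400),
    "farm_size": rng.normal(3.0, 1.0, 400),
    "extension": rng.binomial(1, 0.4, 400),
})

# Step 1: probit for the adoption decision, C_i = Z_i*phi + eps_i.
Z = sm.add_constant(df[["age", "farm_size", "extension"]])
probit = sm.Probit(df["adopted"], Z).fit(disp=0)
xb = Z @ probit.params                            # linear index Z_i*phi_hat
df["imr"] = norm.pdf(xb) / norm.cdf(xb)           # Inverse Mills Ratio

# Step 2: OLS for the extent of adoption, Y_i = X_i*beta + mu_i, on adopters
# only, with the IMR included to correct the sample selection bias.
adopters = df[df["adopted"] == 1]
X = sm.add_constant(adopters[["age", "farm_size", "imr"]])
outcome = sm.OLS(adopters["extent"], X).fit()
print(outcome.summary())
```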
The study by Rogers [9] agreed with the Heckman model, noting that "the innovation-decision process can lead to either adoption, a decision to make full use of an innovation as the best course of action available, or rejection, a decision not to adopt an innovation". A few CA studies have been conducted to evaluate the extent of CA in Tanzania, including on maize yields in the Uluguru Mountains (Eastern Arc Mountains) [26].
The present study focused on the semi-arid agroecological zone in Tanzania, where maize, sorghum and millet are the major crops, because this semi-arid area is the most vulnerable to climate change impacts and environmental degradation. In these areas, CA is the most reliable way of limiting these two major challenges. The dominant farming systems in the most semi-arid regions include: cropping, pastoralism and agro-pastoralism (integration of crops with livestock). Such integrations increase the biomass inputs of perennial plants and the optimization of nutrient amendments in the soil [30,39]. These systems can also increase the mutual interactions between plants roots and mycorrhizae fungi, and eventually nutrients uptake and resistant to pathogens by plants.
Although the science of CA and its significance at the global level is progressing rapidly, various knowledge gaps still exist, particularly in developing countries. This present study was geared to assess the rate of CA adoption and its ecological significance in the semi-arid areas of central Tanzania. Although several FAO-CA projects have been conducted in several Tanzanian regions, their impacts have remained trivial [41,42]. This study was hypothesized as follows: The rate of CA adoption is influenced by the desire of farmers to have optimal yields, preservation of soil moisture and fertility, control of soil erosion, and reduction of labor work.
The rate of CA adoption is not influenced by the desire of farmers to have optimal yields, preservation of soil moisture and fertility, control of soils erosion, and reduction of labor work.
The above-mentioned items in the hypotheses were widely known to the respondents, thus it was possible to differentiate and/or synthesize one another during data collection and interpretation.
Here, we investigated the level of CA adoption in the semi-arid agroecological zone of central Tanzania. It was quite important to conduct such a study because most semi-arid areas experience frequent food-shortage associated with environmental degradation and extreme climate change impacts. Thus, the present study explored the rate of CA adoption and is socio-ecological significance to the community and environment. To achieve this objective, the rate of CA adoption was hypothesized against the factors for its adoption. The findings of such a study are expected to have significant contribution to the establishment of CA promotion policies in Tanzania with an earmark to the vulnerable communities and ecosystems. The policy advocacy on CA is a significant move toward sustainable adoption of CA in all countries' agroecological zones.
At present, the adoption of CA is determined by personal characteristics (i.e., knowledge and experience), physical factors (land availability), and social and financial factors. Despite the limited willingness to adopt CA, the practice has numerous fruitful results, as it improves the biological functions of the soil through mycorrhizal fungi, ants and worms that can enhance nutrient uptake by plants. The basis of the study's findings was fieldwork with a constructed conceptual framework (Figure 2) listing the significant aspects of CA.
The framework portrays the major roles of CA as a tool, firstly for increasing crop yields and secondly for environmental conservation: it improves soil fertility through no- or reduced-tillage, mulching, agroforestry and crop rotation [50][51][52][53][54][55][56][57][58][59][60][61]; it increases crop yields in terms of quality and quantity, thereby curbing food insecurity and abject poverty; and it further preserves biodiversity and mitigates the emission of greenhouse gases (GHGs), e.g. CO2, CH4 and N2O.
Profile of the Study Site
This study was carried out in the Kongwa District, a semiarid zone of Central Tanzania, between June and September 2016 (Figure 3). This district is located on the leeward side of the Ukaguru Mountains, with an area of ~4041 km2 and a varying elevation between 900 and 1000 m a.s.l. (6°19′660″ latitude (S) and 36°15′36″ longitude (E)). The typical vegetation in Central Tanzania is bush or thicket. The mean annual precipitation is 400-600 mm (mostly between December and April) and the mean annual temperature is 26 °C. The soil is classified as Chromic Luvisols (FAO Soil Taxonomic System) with a sandy loam texture. The silt contents of the soils at different farms were not significantly different (p > 0.05) and ranged from 170 to 255 g kg-1 soil, with a bulk density of 1.25-1.65 Mg m-3 [37,[44][45][46][47][48].
Agricultural Systems in the Area
With 3637 km2 of arable land, the study area is dominated by cropping systems, pastoral systems and mixed farming. About 80% of the cropping systems are under smallholder farmers (~2.5 hectares per household) who use a hand hoe as the main farming tool. Medium-scale farmers (about 17%) use power tillers, while large-scale farmers (about 3%) use tractors. The dominant food and cash crops include maize, sorghum, millet, common beans, cassava, sweet potatoes, chick peas, sesame, cashews, sunflower and groundnuts. The dominant animals are cattle, sheep, pigs, donkeys and goats. In addition, one ranch (~250 km2) and one pasture (~150 km2) are owned by the National Ranching Company and the Livestock Research Center, respectively.
Data Collection and Sampling Design
A simple random sampling was employed in selecting the study area. We picked the Kongwa District from among the numerous districts of the semi-arid zone of Tanzania that are severely impacted by climate change and frequent food shortages. Purposive sampling was employed in selecting two representative villages, i.e., Mnyakongo and Ugogoni. Priority was given to villages that have been practicing CA. A reconnaissance survey was done in April 2016, two months before the actual data collection. During this phase, data collection tools were tested to determine their effectiveness. We also used this phase to process the required research permits and to identify some key informants. All discrepancies raised during this phase were fixed before the actual process of data collection.
Data collection, including household surveys, group discussions, informative interviews and physical observation, was conducted from June to September 2016. These activities were simple and suitable because they optimally involved the relevant stakeholders. Simple random sampling was applied when selecting households, while systematic sampling was used to form groups for discussions. In addition, purposive sampling was employed in selecting the interviewees.
We also conducted intensive interviews with agricultural experts, extension officers and few elders. Data on CA (i.e., acreage under CA) and crops yields were gathered from the Kongwa District Agricultural and Livestock Development Officer (DALDO) and the Ministry of Agriculture, Livestock and Fishery, Tanzania.
The acreage data about areas under irrigation were gathered from the Dodoma Region Zonal Irrigation Office (in which Kongwa is affiliated). A total of 400 questionnaires were collected from household heads of smallholders (farmer/livestock households), as shown in Table 1. The questionnaires involved both closed and open questions. The selection of households was done by dividing the total number of households in each village by the required sample size (about 10%). The household lists were obtained from the village's government leaders in the study area. Interviews and household surveys were used to collect socio-ecological data at a society level. We collected both quantitative and qualitative data at the field and farm household level. About 258,219 ha (71%) of the total arable land (363,690 ha) in the district was cultivated by 45,271 households. The two representative villages had 4500 farming households with about 16,000 ha. Since we aimed to explore the rate of CA adoption, we selected 400 households (farmers) from the two villages on a random basis. In total, these 400 households had 1600 ha (an average of 2.0-4.0 ha per household) under crop production. We determined the overall farmers' perception on CA, and its types and benefits. In the process, we also determined the availability of extension services. Finally, information on soil characteristics was mainly obtained from the Kongwa District Land Use and Planning office and literature review. The Participatory Rural Appraisal method (PRA) was also employed to collect socio-economic data at the field level. These PRAs include informative interviews, group discussions, physical observation, etc. The application of the PRA method has been used to explore perceptions of rural communities on environmental issues that affect their lives [49][50][51]. One group discussion with 15 people was convened in each village, and interviews were conducted with 20 agricultural experts, farmers, livestock keepers and village government leaders.
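The household selection rule described above (dividing each village's household list by the required sample fraction of about 10%) can be sketched as an interval-based draw; the village size, fraction and list below are hypothetical.

```python
import random

def interval_sample(household_ids, fraction=0.10, seed=1):
    """Select roughly `fraction` of households by taking every k-th entry
    of the village household list after a random start."""
    n = max(1, round(len(household_ids) * fraction))
    k = max(1, len(household_ids) // n)           # sampling interval
    random.seed(seed)
    start = random.randrange(k)
    return household_ids[start::k][:n]

village_list = [f"HH-{i:04d}" for i in range(1, 2201)]   # e.g. 2,200 households
sample = interval_sample(village_list)
print(len(sample), sample[:3])
```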
Data and Statistical Analyses
We analyzed the quantitative data using the Mann-Kendall Test (at the 95% level of confidence) and Microsoft Excel (version 13) software. In this regard, p-values less than 0.05 were considered statistically significant (p < 0.05). The qualitative data from the household surveys were analyzed using theme content methods, whereas qualitative information was summarized and inserted in the text during discussions.
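A minimal sketch of the Mann-Kendall trend statistic is shown below (without the tie or autocorrelation corrections that a full implementation would include); the yearly series used in the example is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series, alpha=0.05):
    """Basic Mann-Kendall trend test; returns S, Z, p-value and a verdict."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S statistic: sum of signs of all pairwise later-minus-earlier differences.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S (no ties)
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, z, p, ("trend" if p < alpha else "no trend")

# Hypothetical example: area (ha) under one CA practice recorded per year.
area_per_year = [120, 180, 260, 310, 420, 520, 610, 760, 880, 1000]
print(mann_kendall(area_per_year))
```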
Recent Adoption of Conservation Agriculture
Results showed that, despite the recently increased rate of CA adoption (Figures 4-6), <10% of households had adopted it. For instance, in the two representative villages, 400 households cultivated an area of about 1600 ha for crop production, while 200 ha of CA were practiced by 10% of these households. At the district level, there were 45,271 farming households who had cultivated 258,219 ha ( Table 2), but only 4300 households had adopted the CA for an area of 20,000 ha.
CA practices had been adopted more in Ugogoni than in Mnyakongo. The former had a higher average land size and total cultivated land (Table 2). This brought significant differences in terms of socio-ecological benefits to both community livelihoods and the environment. In addition, the major types of CA in the study area were agroforestry, mulching, crop rotation and minimum tillage. The land allocated to CA was correlated with the total land under farming (p < 0.05); hence, predictions and extrapolations could be made on that basis.
In addition, the major types of CA in the study area were agroforestry, mulching, crop rotation and minimum tillage. The land allocated to CA was correlated with the total land under farming (p < 0.05), so predictions and extrapolations could be made on that basis. Figure 4 indicates the temporal trend of CA adoption, with special focus on little tillage, crop rotation, agroforestry and mulching. There was a slight increase in almost all CA practices, while reduced tillage and crop rotation were adopted more than the others. Figure 5 shows the adoption disparities between crop rotation and reduced tillage. More land under CA was under reduced tillage (5000 ha) than under crop rotation (4500 ha). Meanwhile, the adoption trend was significantly stronger for crop rotation (R² = 0.90) than for reduced tillage (R² = 0.57). Under such premises, it was evident that reduced tillage had fewer new adopters than crop rotation, probably because it had already been adopted even by the laggards (late adopters). These results agree with reduced tillage being the leading CA practice in Tanzania, although it is integrated with mulch, cover crops and legumes [26].
Figure 4 caption (fragment): values are means ± SD (n = 5); different letters denote significant differences between averaged years for the same CA practice (a, b, c, d, e) and between different CA practices for the same averaged year (w, x, y, z) at p < 0.05. Note: little tillage involves shallow cultivation (minimum tillage) of the farm (i.e., non-conventional). Source: Field Survey Data, 2016.
In addition, mulching and agroforestry (Figure 6) covered smaller areas (about 1000 ha) but showed a high rate of CA adoption. Of the two, mulching had a stronger adoption trend (R² = 0.95) than agroforestry (R² = 0.68) and attracted more new adopters, largely because most of this adoption occurred within the past 20 years (1995-2015).
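The R² values quoted for the adoption trends come from fitting the area under each practice against year; the sketch below shows that calculation with invented yearly areas, so only the method, not the numbers, reflects the study.

```python
# Linear adoption trend and its R^2, as used to compare practices
# (e.g., crop rotation R^2 = 0.90 vs reduced tillage R^2 = 0.57).
# Yearly areas below are invented.
from scipy.stats import linregress

years = [1995, 2000, 2005, 2010, 2015]
crop_rotation_ha   = [800, 1600, 2500, 3500, 4500]    # hypothetical, steady growth
reduced_tillage_ha = [3500, 3600, 4700, 4200, 5000]   # hypothetical, noisier (earlier adopters)

for name, area in [("crop rotation", crop_rotation_ha),
                   ("reduced tillage", reduced_tillage_ha)]:
    fit = linregress(years, area)
    print(f"{name}: slope = {fit.slope:.0f} ha/yr, R^2 = {fit.rvalue ** 2:.2f}")
```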
In fact, there is an immediate need for compelling efforts to make CA understandable and sustainable for farmers (Table 3). More intensive adoption of CA would therefore help secure sustainable livelihoods, especially for vulnerable and deprived smallholder farmers.
Results in Table 3 indicate that most farmers (50%-70%) asserted that the effectiveness of CA had been either very high or moderate. Most farmers (71%) asserted that crop rotation (e.g., with maize, sorghum, millet and groundnuts) has been very effective, while 7% were not sure whether the practice was effective. Likewise, most farmers (76%) asserted that the effectiveness of agroforestry has been moderate, while 6% did not think it was effective. Figure 7 and Table 4 indicate that most farmers adopted a CA practice essentially to optimize yields, a notion also observed in various studies and models [9,10,43]. The improvement of agro-ecosystems, such as retention of soil moisture and fertility and control of soil erosion, were other substantial reasons for adopting CA in the area. Figure 7 presents the farmers' assertions from the questionnaire survey, while Table 4 shows the findings from the PRAs (i.e., mostly from discussions and interviews); the results from these two sources corroborate each other.
Crop Yields
CA has proven to contribute significantly to crop yields. In the present study, there were significant differences (p < 0.05) between the yields from farms with and without CA (Figure 8). We calculated the yields in tons per hectare for different farmers with and without CA, and these results agree with those of the FAO [33]. Maize, sorghum and millet yields were significantly greater under CA (1.7 t ha−1) than without CA (0.7 t ha−1) (Figure 8). The yields of maize, a preferred food crop in Tanzania, increased from 1.3 t ha−1 in 2000 to 2.3 t ha−1 in 2015 under CA, and were even higher when maize was intercropped with leguminous crops. In farms without CA, maize yields trailed at 0.8 t ha−1 in 2000 and just over 1 t ha−1 in 2015. Thus, there are significant differences (p < 0.05) between the two scenarios in terms of yields.
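The paper reports p < 0.05 for the with/without-CA yield difference but does not name the test; a two-sample Welch t-test on per-farm yields is one conventional choice, sketched below with invented yields.

```python
# Hypothetical per-farm maize yields (t/ha); Welch's t-test is one plausible way
# to test the with-CA vs without-CA difference reported as p < 0.05.
from scipy.stats import ttest_ind

with_ca    = [1.5, 1.9, 2.1, 1.6, 1.8, 2.3, 1.4, 1.7]
without_ca = [0.6, 0.9, 0.8, 0.5, 1.0, 0.7, 0.8, 0.6]

t_stat, p_value = ttest_ind(with_ca, without_ca, equal_var=False)
print(f"mean with CA    = {sum(with_ca) / len(with_ca):.2f} t/ha")
print(f"mean without CA = {sum(without_ca) / len(without_ca):.2f} t/ha")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```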
Conservation Agriculture
The analyses revealed that, despite the significance of CA for both crop yields and environmental conservation, its adoption at both the local and the national level was very low (Figures 4-6 and Table 2). This reflects the African trend, in which CA is less widely adopted and practiced predominantly at small scale (Figure 1). In terms of area, crop rotation and reduced tillage were adopted more than the other forms (Figure 5).
However, the adoption trend was strongest for mulching (Figure 6), with R² = 0.95, followed by crop rotation, agroforestry and little tillage at 0.90, 0.68 and 0.57, respectively (Figures 5 and 6). The practices covering the largest areas (Figure 5) had been in use for many years, whereas mulching and agroforestry (Figure 6) appeared to be new to the farmers, although the latter received considerable attention. Thus, time had a significant influence on CA adoption.
The findings indicate that households adopt a new agricultural technology when its benefit is significantly greater than that of the existing technology. This utility-based adoption approach was also observed by Rogers [9], Mwaseba et al. [10] and Heckman [43]. In the study area, 150 respondents adopted CA specifically because they wanted higher yields, while 65 and 60 did so to conserve soil fertility and moisture and to control erosion, respectively. In addition, 45 and 20 farmers adopted CA to reduce frequent labor work and related activities, respectively (Figure 7 and Table 4).
Most households adopted CA after a careful decision based on these trade-offs. These results agree with Thierfelder et al. [6], Ngwira et al. [7] and Kimaro et al. [26], who made similar observations in other Sub-Saharan countries. Since the rate of CA adoption was hypothesized to depend on the desire for higher yields, preservation of soil moisture and fertility, control of soil erosion, and reduction of labor, this study confirms that the majority of farmers adopted CA primarily for higher yields.
Although CA is not an extremely labor-demanding practice, labor shortages hampered its adoption to some extent. This was caused chiefly by rural-urban migration among the working-age population in search of employment and wage labor. As a result, agricultural work in most rural areas is left to the dependent class (children and the elderly), who are less energetic. Agroforestry was less affected by this migration than other CA forms, as it mostly involves perennial crops that do not demand frequent labor.
Through interviews and discussions, most agricultural officers acknowledged proposing several CA practices; however, financial constraints have been a major limiting factor on regular farm visits, especially in remote areas. Despite these constraints, this study found that the adoption of CA practices may depend largely on the skills, interest, awareness and priorities of the adopters (i.e., early adopters, moderate adopters and laggards). To enhance CA adoption in the study area and the country at large, agricultural and extension officers should nevertheless visit farms regularly to advise farmers on agronomic practices that can optimize yields and enhance environmental services. This is important because, at the farm level, some farmers blamed agricultural experts for not providing substantial extension services. In other words, despite farmers' willingness to adopt CA, the adoption rate will slow without extension support, because indigenous knowledge alone may not be enough to cope with climate change. During the discussions, one farmer claimed that agriculture is meant for the poor, arguing that most officers and experts do not engage in it themselves, preferring clerical jobs instead. His claims echo numerous government reports indicating that 70% of the country's agricultural industry is in the hands of smallholder farmers who are mostly economically deprived.
On this basis, this study suggests that agricultural and extension officers should instill awareness and confidence among farmers that agriculture is a respectable industry that anyone can pursue, regardless of economic status. This would counter the long-standing local joke that "mkulima" (the Swahili word for farmer) means "poor," implying that agriculture is for the lowest class and the jobless. The government continually urges jobless people, especially in towns, to join the agricultural industry for their survival and development.
Crop Yields
The influence of CA on crop production was substantial. From 2000 to 2015, the yields of sorghum and millet were significantly greater under CA (1.5-1.8 t ha−1 and 1.3-1.6 t ha−1, respectively) than under no-CA (0.2-0.5 t ha−1 and 0.5-0.7 t ha−1, respectively) (Figure 8). Of the two crops, sorghum showed the larger difference between farms with CA (1.8 t ha−1) and without CA (0.2 t ha−1), compared with millet (1.6 t ha−1 with CA and 0.5 t ha−1 without CA) (Figure 8).
Thus, for optimal yields, CA practices need to be integrated with sorghum more than with millet production. These results agree with Kimaro et al. [26], Glaser et al. [37] and Dixon et al. [52]. The yield increase has contributed significantly to limiting hunger in the area. Moreover, CA gave optimal yields when integrated with irrigation and organic fertilization, so incorporating these aspects into CA is worthwhile for increasing crop yields in the study area.
On the other hand, the increasing demand for organic food on the global market may increase the adoption of CA (and organic farming) in various countries [33]. While other parts of the globe have adopted CA considerably, Africa has not yet done well in this respect (see Figure 1). Compelling measures and emphasis are therefore required on the continent to increase adoption [44,45]. This will enhance adaptive capacities among smallholder farmers and limit their vulnerability to global and local environmental change [15,16,45]. Thus, while proposing increased CA adoption at the local level and in various agroecological zones, we also advocate its adoption at national, regional, continental and global levels, because mitigation measures can have both global and local impacts [52][53][54][55][56][57][58][59][60][61].
Irrigation
In semi-arid areas, CA was observed to work best when combined with irrigation, which mainly safeguards soil fertility and moisture and thereby improves agroecosystems for crop production and environmental conservation. Even though irrigation was limited to a small area (about 5814 ha, i.e., <5% of the total area) located near the Mseta, Mzeru, Mlanga, Ikoka and Chelwe Rivers (Figure 3), it made a significant contribution to yields and conservation.
However, the Water User Association (WUA) and the Irrigators Organization (IO), which control irrigation operations in the area, have encountered multiple challenges: water-use conflicts, destruction of irrigation infrastructure and financial constraints, all of which have had consequences for irrigation performance.
The small area under irrigation precludes exploiting the wide range of potentials associated with irrigation. This problem is also acute at the national level, where less than 4% of the total irrigable land potential has been harnessed [53,54]. According to the zonal irrigation engineers, the area has abundant groundwater that could be the best option, but this potential has not yet been put to use.
It was further clarified that various geophysical surveys have indicated that most groundwater lies at less than 60 m depth, in contrast to countries such as China, where this depth can exceed 200 m [52]. As a way forward, substantial investment in technology is essential for exploiting both groundwater and rainwater.
During the rains, some farmers collect running water from seasonal rivers for spate irrigation, intending to store it for use during water stress. However, owing to weak infrastructure, losses of this water are critically high, so suitable mechanisms are required to support these local innovations.
Fertilization
Fertilization contributes significantly to CA because it increases crop yields, and CA gives optimal advantages when integrated with fertilization [26,61]. The present study found that the majority (62%) of farmers did not use any fertilizer on their farms; of the 38% who did, 80% applied organic fertilizers while the remaining 20% used chemical fertilizers.
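Expressed as shares of all surveyed farmers, the fertilizer-use percentages work out as below; the calculation assumes the percentages apply to the 400 surveyed households.

```python
# Fertilizer-use breakdown applied to the 400 surveyed households (assumed base).
n_farmers = 400
share_no_fertilizer = 0.62
share_fertilizing = 1 - share_no_fertilizer        # 0.38
organic_within_fertilizing = 0.80
chemical_within_fertilizing = 0.20

print("No fertilizer:      ", round(n_farmers * share_no_fertilizer))
print("Organic fertilizer: ", round(n_farmers * share_fertilizing * organic_within_fertilizing))
print("Chemical fertilizer:", round(n_farmers * share_fertilizing * chemical_within_fertilizing))
```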
Most organic fertilizer came from straw and animal manure, the latter mostly from goats, sheep, cattle, pigs and donkeys. In terms of amounts, most farmers asserted that, for optimal crop yields, they required at least 5000-10,000 kg ha−1 of organic fertilizer, whose fertility effect could persist in the field for about five years. In practice, however, organic fertilization was applied only to a small extent in the study area.
On the other hand, chemical fertilizers such as DAP (diammonium phosphate), NPK (nitrogen-phosphorus-potassium), SA (ammonium sulphate), TSP (triple super phosphate) and urea were applied in a few areas, especially under irrigation schemes. This is because the Ministry of Agriculture, Livestock and Fisheries (MALF) recommends chemical fertilizers under irrigation schemes, where they perform better under constant soil moisture than in drought-prone areas with high evaporation.
The MALF has been providing the Dodoma Region with an inadequate share of chemical fertilizer, about 1 × 10⁴ vouchers, meaning that only about 4000 ha could be fertilized in the whole region. Of that share, Kongwa District received fewer than 2000 vouchers, which fertilized very few hectares. Nevertheless, fertilization increased crop yields to 7.2 t ha−1 (from less than 3 t ha−1 under no-CA).
Overall, this study found that, for CA to succeed, irrigation and fertilization need to be attached to it for sustainable conservation of soil moisture and fertility. CA also offered economic and socio-ecological advantages to farmers, depending on the biophysical environment; areas under irrigation schemes that received chemical fertilization provided the most favorable conditions for crop production.
Environmental Sustainability
The FAO study [46] showed that chromic luvisols with a sandy loam texture are the dominant soil type in the area, and soil type is among the most important factors controlling several biological processes in a given locality. Silt contents in different farms were not significantly different (p > 0.05), ranging between 170 and 255 g kg−1 soil, while bulk density was 1.11 and 1.35 Mg m−3 with and without CA, respectively [47]. Soil carbon ranged from 1 to 1.22 Mg C ha−1 (0-20 cm depth) under CA and declined in farms under no-CA, while calcium, magnesium and sodium ranged from 0.5 to 4 Mg ha−1 under CA [37].
In most areas, the topsoils had near-neutral to slightly acidic pH values ranging from 5.40 to 6.10 [46][47][48], with moderately high cation exchange capacity and high base saturation [48][49][50][51][52][53][54]. The CA practices appeared to optimize important soil nutrients, and hence soil quality, in the area: soil nutrients were significantly greater (p < 0.05) in farms with CA than under no-CA owing to these soil amendments, making CA the better option for environmental sustainability.
Many studies conducted in similar agroecosystems (i.e., similar climate and soil types) have also endorsed the positive roles of soil fertility and moisture (under CA) in enhancing the biological functions of microorganisms that balance ecosystems [29,[46][47][48]]. Mycorrhizal fungi also perform well under proper soil organic management, increasing plants' capacity for nutrient uptake and resistance to pathogens [48] and improving the interaction between roots and microorganisms. Moreover, a study by Hao et al. [29] further indicated that no-tillage, crop rotation, mulching and agroforestry act as sinks for the top three greenhouse gases, carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), and thus confer both adaptation and mitigation benefits. It is therefore evident that CA practices have multiple benefits for environmental sustainability [30,[55][56][57][58][59]] and make significant contributions to sustainable environmental conservation across the lithosphere, hydrosphere, biosphere and atmosphere.
As a result, it is advisable to build the capacity of smallholder farmers (who account for 70% of Tanzanian agriculture) so that they can effectively integrate CA into their farming. If this integration reached at least 20% of each household's total farm size, it would markedly increase crop yields and environmental services [33,[55][56][57][58][59][60][61]], with a long-term positive impact in serving the present needs of people and the environment without compromising the needs of future generations [55][56][57][58][59][60][61].
Conclusions
This study sought to explore the adoption rate of CA in Kongwa District, a semi-arid agroecological zone in central Tanzania. It accepted the alternative hypothesis H1 because CA adoption was greatly influenced by farmers' desire to achieve higher yields; retention of soil moisture, conservation of soil fertility and control of soil erosion were additional factors that attracted farmers to CA. In both villages, there was a correlation (p < 0.05) between the area of land under CA and the total land under farming (Table 2).
Time was another determinant of the adoption trend, as the area under CA has increased over time. Nevertheless, the present study showed that CA is practiced only to a limited extent in the study area, with fewer than 10% of households involved. Among the CA practices, mulching attracted the most attention from adopters, with the strongest adoption trend (R² = 0.95), while little tillage dominated in total area (about 6000 ha). In addition, CA practices appeared to be more beneficial when supported with organic fertilization and irrigation.
Animal manure and straw were the main sources of organic fertilizer. Further, this study found significant differences between areas with and without CA: crop yields and environmental sustainability were better with CA than without it (Figure 8). Therefore, although it is understood that CA can improve agricultural systems, these environmental potentials should be quantified. The present study proposes the adoption of CA practices in various agroecological zones in Tanzania to manage agricultural soils and attain socio-economic and ecological advantages, because CA confers both adaptation and mitigation benefits. Likewise, effective livestock keeping should be integrated into various CA practices for mutual benefit.
Subsequently, planners, policy makers, agricultural experts and other agricultural stakeholders and practitioners should treat these findings as a baseline for their future endeavors. Lastly, we call for more proactive interventions and efforts from different stakeholders to join this agenda, targeting especially areas with extreme weather stresses.
There are several research priorities for further investigation: (1) characterization of the people involved in CA (small scale or large scale) and the policy implications; (2) the drivers that can influence CA adoption in small-scale farming; (3) how climate variability influences, positively or negatively, the adoption of CA (especially during extreme wet periods compared with extreme dry periods); and (4) how crop yields harvested under CA contribute to food security and economic welfare.
The Osmotic Activation of Transporter ProP Is Tuned by Both Its C-terminal Coiled-coil and Osmotically Induced Changes in Phospholipid Composition
Transporter ProP of Escherichia coli (ProPEc) senses extracellular osmolality and mediates osmoprotectant uptake when it is rising or high. A replica of the ProPEc C terminus (Asp468–Arg497) forms an intermolecular α-helical coiled-coil. This structure is implicated in the osmoregulation of intact ProPEc, in vivo. Like that from Corynebacterium glutamicum (ProPCg), the ProP orthologue from Agrobacterium tumefaciens (ProPAt) sensed and responded to extracellular osmolality after expression in E. coli. The osmotic activation profiles of all three orthologues depended on the osmolality of the bacterial growth medium, the osmolality required for activation rising as the growth osmolality approached 0.7 mol/kg. Thus, each could undergo osmotic adaptation. The proportion of cardiolipin in a polar lipid extract from E. coli increased with extracellular osmolality so that the osmolality activating ProPEc was a direct function of membrane cardiolipin content. Group A ProP orthologues (ProPEc, ProPAt) share the C-terminal coiled-coil domain and were activated at low osmolalities. Like variant ProPEc-R488I, in which the C-terminal coiled-coil is disrupted, ProPEc derivatives that lack the coiled-coil and Group B orthologue ProPCg required a higher osmolality to activate. The amplitude of ProPEc activation was reduced 10-fold in its deletion derivatives. The coiled-coil structure is not essential for osmotic activation of ProP per se. However, it tunes Group A orthologues to osmoregulate over a low osmolality range. Coiled-coil lesions may impair both coiled-coil formation and interaction of ProPEc with amplifier protein ProQ. Cardiolipin may contribute to ProP adaptation by altering bulk membrane properties or by acting as a ProP ligand.
Bacteria respond to changes in medium osmolality by modulating cytoplasmic composition (1)(2)(3). Osmoregulatory transporters and biosynthetic enzymes mediate the accumulation of K+, glutamate, and selected organic solutes as extracellular osmolality increases. Mechanosensitive channels release solutes as osmolality decreases. Three osmoprotectant transporters were shown to act as both osmosensors and osmoregulators after purification and reconstitution in proteoliposomes: H+ symporter ProP of Escherichia coli (4), Na+ symporter BetP of Corynebacterium glutamicum (5), and ATP-binding cassette transporter OpuA of Lactococcus lactis (6). Each was activated as electrolytes were concentrated in the lumen of proteoliposomes, with or without osmotically induced proteoliposome shrinkage (7). In addition, the osmotic upshift required to activate BetP (5) or OpuA (8) increased with the mole fraction of the anionic phospholipid phosphatidylglycerol (PG) in the proteoliposome membrane. It was thus proposed that osmosensing occurs when osmotically induced changes in cytoplasmic ionic strength or K+ concentration alter transporter-lipid interactions (7,9).
Transporter ProP of E. coli (ProPEc) mediates the uptake of zwitterionic osmoprotectants such as proline and glycine betaine (N-trimethyl glycine) when osmolality is rising or consistently high (10). ProPEc is a 500-amino acid integral membrane protein and a member of the major facilitator superfamily (11). Our homology model of ProPEc (12) is based on the crystal structure of 12-transmembrane helix transporter GlpT (13), which shares a common fold with major facilitator superfamily members OxlT and LacY (14). Protein ProQ of E. coli amplifies ProPEc activity by acting post-translationally (15,16). ProQ is a basic, cytoplasmic protein that may act directly or indirectly on ProPEc (17).
The central cytoplasmic loop (C3) and C terminus of ProPEc are longer than those of its paralogues (7,12) (Fig. 1A). The latter terminates in six or seven of the heptad repeats that characterize α-helical coiled-coil-forming proteins (11). Studies of synthetic peptides corresponding to residues 456-500 (18) or 468-497 (19,20) of ProPEc showed that residues 468-497 (encompassing four heptads) form an antiparallel, homodimeric α-helical coiled-coil of low stability (Fig. 1, B and C). The low stability of this structure was expected, because basic residues are present at core heptad "a" positions His495 and Arg488 (11) (Fig. 1C). Unexpectedly, replacement R488I disrupted coiled-coil formation by this peptide replica, providing the first evidence that the orientation of the coiled-coil might be antiparallel (18). The antiparallel orientation was substantiated by the NMR solution structure, which appears to be stabilized by interactions of Arg488 with Asp475 and Asp478 on the opposing monomer strand (Fig. 1B) (20). This antiparallel structure was also detected in intact ProPEc, in vivo, by chemical cross-linking of introduced Cys residues (21). A higher osmolality was required to activate the R488I variant than wild type ProPEc, and the R488I variant was activated only transiently, whereas activation of wild type ProPEc was sustained indefinitely (18). These results suggested that the C-terminal coiled-coil of ProPEc plays a role in its osmotic activation.
Amino acid sequence comparisons reveal two groups of bacterial ProP orthologues. All have longer C termini than ProP paralogues with known functions not related to osmosensing or osmoregulation (e.g. ShiA and KgtP) (7) (Fig. 1C). Group A orthologues (typified by ProPEc) include a C-terminal α-helical coiled-coil domain and Group B orthologues (typified by C. glutamicum ProP (ProPCg)) do not. The coiled-coil domain unique to Group A orthologues is not essential for osmosensing, since ProPCg was found to act as an osmosensor and osmoregulator after expression in E. coli (22). The initial aim of this study was to further elucidate the role of the C-terminal coiled-coil in the osmoregulation of ProP activity. Here we show that the coiled-coil structure shared by Group A ProP orthologues tunes the transporter so that it can be activated in media of low osmolality. We have also discovered that the osmolality required to activate ProPEc in vivo is modulated by the osmolality at which E. coli is cultured. This osmotic adaptation, which correlates with changing membrane cardiolipin (CL) content, ensures that ProP is poised to respond to ambient osmotic conditions. Thus, both the coiled-coil domain shared by Group A orthologues and the membrane cardiolipin content are involved in tuning the functional response range of ProP.
EXPERIMENTAL PROCEDURES
Culture Media-E. coli strains were cultivated at 37°C, whereas the Agrobacterium tumefaciens strain was cultivated at 30°C, in LB medium (23) or in NaCl-free MOPS medium, a variant of the MOPS medium described by Neidhardt et al. (24), from which all NaCl was omitted. MOPS medium was supplemented with NH4Cl (9.5 mM) as a nitrogen source and glycerol (0.4% v/v) as a carbon source. L-Tryptophan (245 μM) and thiamine hydrochloride (1 μg/ml) were added to meet auxotrophic requirements, and NaCl or sucrose was added to adjust the osmolality as indicated. Ampicillin (100 μg/ml) was included to maintain plasmids, and arabinose was added as specified to adjust ProP expression.
Bacteria, Plasmids, and Molecular Biological Manipulations-Basic molecular biological techniques were as described by Sambrook and Russell (25). Chromosomal DNA was isolated as described by Bayliss et al. (26). The PCR was carried out as described by Brown and Wood (27). Site-directed mutagenesis was performed using the QuikChange Mutagenesis Kit (Stratagene, La Jolla, CA) as described by Culham et al. (28). Oligonucleotides were purchased from Cortec DNA Services (Kingston, Canada). Each recombinant plasmid was recovered from a ligation mixture by transformation of E. coli DH5α (29), and the entire sequence of the encoded proP variant was confirmed (GenAlyTiC, Guelph, Canada) before the plasmid was expressed in E. coli WG350.
Isolation of Genes proPCg and proPAt-A BglII site overlapping the proPEc stop codon in pDC79 was introduced by site-directed mutagenesis, the resulting plasmid (pYT6) was cleaved with EcoRI and BglII to excise proPEc, and the vector fragment was purified. The gene encoding ProPCg was PCR-amplified using plasmid pHP5 (22) as template, and the gene encoding the putative Group A ProP orthologue from A. tumefaciens (ProPAt) was PCR-amplified using the linear chromosome of A. tumefaciens C58 (ATCC number 33970) (American Type Culture Collection (Manassas, VA)) as template. During amplification, an EcoRI site was introduced 5′ to each open reading frame, and a BglII restriction site was introduced overlapping the stop codon. Each PCR product was digested with EcoRI and BglII, purified, mixed with the vector fragment of pYT6, and ligated with T4 DNA ligase to create plasmids pYT12 (encoding ProPCg) and pYT13 (encoding ProPAt).
C-terminal Truncation of ProPEc-The proPEc gene encoded by pYT1 was PCR-amplified so that the introduced EcoRI site 5′ to the open reading frame was included, and a stop codon, with an overlapping BglII site, was introduced after the codon for Ala482 or Thr489. The resulting PCR products and plasmid pYT6 were cleaved with EcoRI and BglII. The desired DNA fragments were purified, mixed, and ligated, creating plasmids pMD2 (encoding His6-ProPEc-Δ11, ProPEc truncated at Ala482) and pMD3 (encoding His6-ProPEc-Δ18, ProPEc truncated at Thr489).
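The cloning steps above depend on PCR primers that introduce EcoRI and BglII sites. The helper below (not from the paper) simply confirms that a designed amplicon contains each recognition sequence exactly once before digestion and ligation; the toy sequence is an assumption.

```python
# Count EcoRI (GAATTC) and BglII (AGATCT) recognition sites in a designed amplicon.
# The toy sequence is illustrative only.
RECOGNITION_SITES = {"EcoRI": "GAATTC", "BglII": "AGATCT"}

def count_sites(sequence, site):
    sequence = sequence.upper()
    return sum(1 for i in range(len(sequence) - len(site) + 1)
               if sequence[i:i + len(site)] == site)

amplicon = "aaGAATTCatgaccgttaaccagtaaAGATCTtt"   # toy sequence: one EcoRI, one BglII site
for enzyme, site in RECOGNITION_SITES.items():
    print(f"{enzyme} sites: {count_sites(amplicon, site)}")
```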
Transport Assays-Bacteria were cultivated, and assays were performed as described by Culham et al. (10) using buffers prepared as described by Racher et al. (33). Osmolalities of culture media and buffers were adjusted with NaCl or sucrose, as specified, and measured with a Wescor vapor pressure osmometer (Wescor, Logan, UT). Initial rates of proline uptake were measured using L-[U-14C]proline (Amersham Biosciences) at 0.2 mM. Protein concentrations were determined by the bicinchoninic acid assay (34), using the BCA kit from Pierce with bovine serum albumin as a standard. All assays were done in triplicate, and all experiments were performed at least twice. The rates are cited as mean ± S.D. Regression lines were obtained by fitting the data to empirical Equation 1 (10) using nonlinear regression performed by SigmaPlot 8.0, where Π is the osmotic pressure of the transport assay medium, a0 is the initial rate of proline uptake measured at medium osmolality Π/RT, Amax is the uptake rate that would be observed at infinite medium osmolality, R is the gas constant, T is the temperature, Π½/RT is the medium osmolality yielding half-maximal activity, and B is a constant inversely proportional to the slope of the response curve. This process yielded estimates for the parameters Amax, Π½/RT, and B.
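The body of Equation 1 did not survive extraction here, only its parameter definitions. The sketch below fits a Boltzmann-type sigmoid that is consistent with those definitions (Amax as the plateau, Π½/RT as the midpoint, B inversely proportional to the slope); this functional form and the data points are assumptions for illustration, not necessarily the authors' exact Equation 1.

```python
# Nonlinear fit of an ASSUMED Boltzmann-type activation curve to invented
# proline-uptake data, yielding A_max, Pi_half/RT and B as described in the text.
import numpy as np
from scipy.optimize import curve_fit

def activation(osm, a_max, osm_half, b):
    """Initial uptake rate a0 as a sigmoidal function of assay osmolality (mol/kg)."""
    return a_max / (1.0 + np.exp((osm_half - osm) / b))

osm = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.40, 0.50, 0.60])   # mol/kg
a0  = np.array([2.0, 8.0, 40.0, 75.0, 90.0, 94.0, 95.0, 95.0])     # nmol min-1 mg-1

popt, _ = curve_fit(activation, osm, a0, p0=[95.0, 0.2, 0.03])
a_max, osm_half, b = popt
print(f"A_max = {a_max:.1f} nmol min-1 mg-1, Pi_1/2/RT = {osm_half:.2f} mol/kg, B = {b:.3f}")
```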
Base pairs 1318-1500 of proPEc (encoding ProPEc (Glu440-Glu500)) were PCR-amplified using primers that created flanking BamHI and HindIII restriction sites. The primers were designed to facilitate insertion of the resulting oligonucleotide into vector pQE82L (Qiagen Inc., Valencia, CA), fusing the proP-derived open reading frame to the upstream vector sequence encoding the MRGSH6 tag (32). The amplicon and vector were cleaved, purified, mixed, and ligated, and recombinant plasmids were recovered. A primer-encoded mutation (P496T) was corrected by site-directed mutagenesis, yielding the desired plasmid (pJKK2) (32), which was introduced into E. coli MG1655 (35) to create E. coli WG864.
To produce peptide MRGSH6-ProPEc (Glu440-Glu500), E. coli WG864 was cultivated in LB medium supplemented with ampicillin (100 μg/ml). Isopropyl-β-D-thiogalactopyranoside (final concentration 1 mM) was added at a culture A600 of 0.5-0.6, and the cells were harvested by centrifugation when the A600 reached 2. The resulting pellet was washed twice with 0.1 M potassium phosphate, pH 7.4, and resuspended in 5 ml of lysis buffer (50 mM sodium phosphate, 0.3 M NaCl, 5 mM imidazole, 1 mM Na-EDTA, pH 8) per g, wet weight. All subsequent steps were performed at 4°C. The cells were disrupted by two passages through a French pressure cell (AMINCO, Silver Spring, MD) at 1600 bars pressure. The lysate was centrifuged in the Sorvall SS34 rotor at 12,100 × g for 20 min and then in the Beckman Ti45 rotor at 100,000 × g for 2 h. MRGSH6-ProPEc (Glu440-Glu500) was purified from the resulting supernatant by Ni2+-nitrilotriacetic acid affinity chromatography (Qiagen Inc., Valencia, CA) according to the manufacturer's instructions for protein purification under nondenaturing conditions. It was further purified by size exclusion chromatography using a Superdex-75 HR 10/30 column and an Amersham Biosciences fast protein liquid chromatography system. The resulting 8.5-kDa peptide was homogeneous as determined by Tricine SDS-PAGE (36).
Western Immunoblotting Analysis-Whole cell proteins were prepared for Western immunoblotting, which was performed as described above (see "Transport Assays") and by Culham et al. (18), using the procedure of Towbin et al. (37) and selective anti-ProP antibodies, prepared as follows. Anti-ProP antibodies were recovered from 4 ml of adsorbed anti-ProP serum (4) by affinity purification as described by Salamitou et al. (38) using ProP-His6 (0.3 mg) as ligand. Peptide MRGSH6-ProPEc (Glu440-Glu500) (0.6 mg) was bound to Ni2+-nitrilotriacetic acid affinity resin (0.2 ml; Qiagen Inc.) in lysis buffer containing 10 mM imidazole and no EDTA. The loaded resin was recovered in a Micro Bio-Spin chromatography column (Bio-Rad), establishing a 0.1-ml column bed. The purified anti-ProP (1 ml) was added to the column, mixed with the resin by pipetting, transferred to a 2-ml vial, and incubated at 20°C with shaking for 60 min. It was transferred back to the chromatography column, and the column flow-through was collected as selective anti-ProP. The recovered antibodies recognized full-length ProPEc but not MRGSH6-ProPEc (Glu440-Glu500) on a Western blot.
Determination of Phospholipid Head Group Composition-E. coli cells expressing ProPEc (strain WG350 pDC79) were grown as described above (see "Transport Assays") in MOPS medium supplemented with [32P]phosphate at 5 μCi/ml. Polar lipids were extracted with chloroform/methanol, and thin layer chromatography was performed as described by Wikstrom et al. (39). The relative amounts of the lipid species, identified by comparison with standards, were determined with a Bio-Rad Fluor-S MultiImager.
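Quantification from the phosphorimager amounts to normalizing the 32P signal of each spot to the total; a minimal sketch is shown below with invented intensities. Because CL carries two phosphates per molecule, a strict per-molecule mole fraction would halve its 32P-based weight; the simple normalization here treats signal as proportional to phosphorus.

```python
# Normalize 32P spot intensities to percentages of total polar-lipid signal.
# Intensities are invented; CL carries two phosphates, so a per-molecule mole
# fraction would divide its counts by two before normalizing.
spot_intensity = {"PE": 7400, "PG": 1900, "CL": 520}   # arbitrary imager units

total = sum(spot_intensity.values())
for lipid, counts in spot_intensity.items():
    print(f"{lipid}: {100 * counts / total:.1f} % of 32P signal")
```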
RESULTS
Osmotic Adaptation of ProPEc-We previously reported that the transport assay medium osmolality required to activate ProPEc-His 6 was independent of the osmolality of the medium in which E. coli was grown (NaCl-supplemented MOPS media with osmolalities in the range 0.12-0.32 mol/kg; Fig. 1B of Ref. 10). A more complex picture emerged when the bacteria were grown at higher osmolalities (up to 0.7 mol/kg). For cells grown in the higher osmolality range, the osmolality required to activate ProPEc depended on the osmolalities of both the growth medium and the assay medium (Fig. 2).
The response of ProPEc to assay medium osmolality fits an empirical relationship (see "Experimental Procedures") that supports extraction of parameters quantitatively describing its osmotic activation (10). Measurements of the initial rate of proline uptake (a0) as a function of osmolality (Π/RT) are used to determine the uptake rate that would be observed at infinite osmolality (Amax), the osmolality yielding half-maximal activity (Π½/RT), and the slope of the activation curve (inversely proportional to parameter B). Such analysis showed that, for bacteria grown in NaCl-supplemented MOPS media, the osmolality required to activate ProPEc and parameter B were direct functions of growth medium osmolality (Fig. 3, circles), whereas the ProPEc activity attained upon full osmotic activation (Amax) was not (Fig. 2).
To determine whether growth medium salinity or osmolality determined the osmolality at which ProPEc would activate, cells expressing ProPEc were grown in a sucrose-supplemented, high osmolality medium (0.72 mol/kg). A higher osmolality was required to activate ProPEc in cells grown in this sucrose-supplemented medium than in those grown without added osmolyte (Fig. 2, inset). Furthermore, the assay medium osmolalities yielding half-maximal ProPEc activity (Π½/RT) and the slopes of the activation curves (indicated by B) were similar for cells cultivated in NaCl- and sucrose-supplemented media of similar osmolality (compare circles and triangles in Fig. 3). An osmotic adaptation process appeared to modulate the osmosensory range of ProPEc so that its activity would vary over an osmolality range relevant to ambient conditions.
Osmotic Activation and Adaptation of ProP Orthologues-Two groups of ProP orthologues are found in bacteria, all with extended C termini (7) (Fig. 1C). Group A orthologues, typified by ProPEc, include a C-terminal α-helical coiled-coil domain, and Group B orthologues, typified by ProPCg, do not (Fig. 1C). Putative Group A orthologue ProPAt and Group B orthologue ProPCg were expressed in E. coli, and their osmotic activation profiles were examined to determine whether the ability to undergo osmotic adaptation was correlated with the structure of the C-terminal domain.
To assess the function and osmotic sensitivity of ProPAt, E. coli cells in which that transporter was expressed from plasmid vector pBAD24 were cultivated, harvested, and resuspended in low osmolality medium (0.14 mol/kg). Proline uptake rates of these bacteria increased substantially from baseline levels as the assay medium osmolality approached 0.51 mol/kg (0.2 M NaCl) (data not shown), with a maximum proline uptake rate of 5.6 nmol min−1 mg−1 protein. This suggested an osmotic response for ProPAt, although the absolute activity observed was low. Arabinose was used to induce ProPAt expression so that its activity could be more directly compared with that of ProPEc. The proline uptake rate increased with increasing arabinose concentration, as expected, and a rate of 67 nmol min−1 mg−1 protein (comparable with the activity of ProPEc without arabinose induction) was achieved at an arabinose concentration of 0.4 mM (data not shown). When the impact of osmolality on ProPAt activity was again determined after such arabinose induction, the resulting activity profile was similar to that of ProPEc (Fig. 4A).
To determine whether ProPAt would undergo osmotic adaptation, bacteria expressing ProPAt were grown at culture osmolalities of 0.14 and 0.62 mol/kg, and proline uptake rates were measured. As for ProPEc, the activity of ProPAt depended on both the growth and the assay medium osmolalities (Fig. 4).
FIGURE 2. The osmolality required to activate ProPEc increases with growth medium osmolality. E. coli strain WG350 containing pDC79 was prepared as described under "Experimental Procedures" in NaCl-free MOPS medium (0.14 mol/kg (white circles)); in the same medium adjusted with NaCl to attain osmolalities of 0.43, 0.52, 0.60, or 0.70 mol/kg (represented by increasingly dark gray circles); or in the same medium adjusted with sucrose to attain an osmolality of 0.72 mol/kg (triangles, inset). The initial rate of proline uptake via ProPEc was measured using assay media adjusted with NaCl to the indicated osmolalities, and lines were created by regression analysis as described under "Experimental Procedures."
Legend to Fig. 4 (fragment): ...(encoding ProPAt; squares), or pYT12 (encoding ProPCg; diamonds) was cultured, and its initial rate of proline uptake was measured using growth and assay media adjusted with NaCl to the indicated osmolalities. Expression of ProPAt was induced by including 0.3 mM arabinose in the medium. Lines were created by regression analysis as described under "Experimental Procedures." A, bacteria were prepared in NaCl-free MOPS medium (0.14 mol/kg). B, bacteria were prepared in MOPS media adjusted with NaCl to attain osmolalities of 0.62 mol/kg (ProPAt) or 0.60 mol/kg (ProPEc and ProPCg).
Peter et al. (22) reported that ProPCg transports proline and ectoine and that it can sense and respond to osmotic changes after expression in E. coli. E. coli WG350 expressing ProPCg was grown at culture osmolalities of 0.14 and 0.60 mol/kg. As for ProPEc and ProPAt, ProPCg activity depended on both the growth and assay medium osmolalities (Fig. 4, compare A and B). However, the osmolalities required for half-maximal ProPCg activation (0.45 ± 0.02 and 0.56 ± 0.03 mol/kg, respectively) were up to 2-fold higher than those required to activate ProPEc and ProPAt, both of which include the coiled-coil domain. These observations showed that the C-terminal coiled-coil is not essential for the osmotic adaptation of ProP but did not rule out involvement of the extended C-terminal sequences shared by all ProP orthologues (illustrated by the gray box in Fig. 1A and the sequence alignment in Fig. 1C).
Changes in Lipid Composition May Cause the Osmotic Adaptation of ProP-The osmolalities required to activate osmoregulated proteins BetP of C. glutamicum (5,40) and OpuA of L. lactis (8) increase as the PG content of the membrane increases at the expense of zwitterionic lipid (phosphatidylethanolamine (PE) or phosphatidylcholine (PC)). We hypothesized that the alteration in osmotic activation threshold for the ProP orthologues might result from alterations in the anionic lipid content of the bacterial membrane due to growth in media of varying osmolalities. The phospholipid head group composition of E. coli strain WG350 expressing ProPEc did change as the salinity of its growth medium varied (Fig. 5A, solid symbols). The CL content increased, PE content decreased, and PG content remained unchanged as cells were grown at increasing osmolalities. Similar changes were observed when the bacteria were grown with and without sucrose as osmolyte (Fig. 5B, open symbols). Thus, the anionic lipid content (CL plus PG) increased as the zwitterionic lipid content decreased, and the osmolalities required for half-maximal activation of ProPEc in cells grown at varying osmolalities (Π½/RT) correlated directly with their CL content (Fig. 5C).
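The reported direct correlation between CL content and the osmolality needed for half-maximal ProPEc activation (Fig. 5C) corresponds to a simple linear regression of Π½/RT on CL mole percent. The values below are invented to mimic the qualitative trend, not the measured data.

```python
# Linear regression of Pi_1/2/RT on membrane CL content (values invented to
# mimic the qualitative trend of Fig. 5C).
from scipy.stats import linregress

cl_mol_percent = [3.0, 4.0, 5.0, 6.0, 7.0]        # hypothetical CL content (mol %)
pi_half_rt     = [0.18, 0.22, 0.27, 0.31, 0.36]   # hypothetical mol/kg

fit = linregress(cl_mol_percent, pi_half_rt)
print(f"slope = {fit.slope:.3f} (mol/kg per mol % CL), R^2 = {fit.rvalue ** 2:.3f}")
```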
The C-terminal Coiled-coil Controls the Osmolality Required to Activate ProP-Although all three ProP orthologues examined in this study showed osmotic adaptation (Fig. 3), the absolute Π½/RT values for ProPCg (Group B) were consistently higher than those for ProPEc and ProPAt (Group A). Thus, the transporters that share the C-terminal coiled-coil motif (ProPEc and ProPAt) activated fully at lower assay osmolalities (near 0.4 mol/kg) than the one that lacks the coiled-coil (ProPCg, full activation at 0.6 mol/kg). This led us to hypothesize that the osmolality required to activate ProP depends on the C-terminal domain. In other words, a ProP protein with a coiled-coil structure at the C terminus will activate at a lower osmolality than a ProP protein without one.
ProPEc was truncated to remove almost two (His6-ProPEc-Δ11) or three (His6-ProPEc-Δ18) C-terminal heptads (see "Experimental Procedures"). These truncations would preclude formation of the antiparallel, four-heptad ProPEc coiled-coil (20). Bacteria expressing these deletion proteins were cultivated, harvested, and resuspended in low osmolality medium (0.13 mol/kg) supplemented with arabinose to elevate their expression. The proline uptake rates of these bacteria, measured in media adjusted to the appropriate osmolality with NaCl, increased from baseline levels as the assay medium osmolality approached 0.6 mol/kg (Fig. 6A). Thus, as predicted, these proteins required a higher osmolality than wild type His6-ProPEc to activate (Π½/RT values of ~0.45 and 0.18 mol/kg, respectively).
The maximum proline uptake rate attained by the truncated transporters (8-10 nmol min−1 mg−1 protein) was much lower than that of their wild type control (His6-ProPEc, 95 nmol min−1 mg−1 protein). In principle, these low activities could be due to low expression levels. The anti-His5 antibodies normally used to assess expression levels of ProPEc-His6 and its derivatives (28) did not react with His6-ProPEc or its derivatives on Western blots. Selective anti-ProPEc antibodies that do not recognize epitopes present in a C-terminal ProPEc fragment (Glu440-Glu500) were therefore used to determine the expression levels of His6-ProPEc-Δ11 and His6-ProPEc-Δ18. Western blots revealed that, with the specified arabinose induction, both deletion proteins and ProPEc were expressed to similar levels (Fig. 6B). Thus, in addition to altering the osmolality at which they became active, deletion of the C-terminal sequence that characterizes Group A ProP orthologues dramatically reduced the amplitude of their osmotic activation. It is possible that the deleted C-terminal sequences are also required for interactions between ProPEc and amplifier protein ProQ.
DISCUSSION
ProPEc and ProPCg were previously shown to act as both osmosensors and osmoregulators (4,22). In this study, ProPAt was found to function in the same manner (Fig. 4A). In addition, all three transporters were found to undergo osmotic adaptation (Fig. 3). Namely, the osmolality required to activate each transporter was proportional to the osmolality of the culture medium in which the bacteria were grown. This adaptive phenomenon broadens the osmolality range over which ProP can promote bacterial osmotolerance, ensuring that the transporter is poised to respond to ambient osmolality.
The osmolalities required to activate osmoregulatory transporters BetP of C. glutamicum and OpuA of L. lactis increase with the PG content of the bacterial (40) or proteoliposome (5,8) membranes in which they reside. We therefore speculated that the osmotic adaptation of the ProP orthologues in E. coli may correlate with changes in lipid composition that would occur due to osmoregulation of phospholipid metabolism, since the cells expressing the transporters were grown in media with increasing osmolalities. Although effects of other parameters (e.g. temperature) on bacterial membrane lipid composition were defined prior to this study, the dependence of E. coli lipid composition on growth medium osmolality (and salinity) had received limited attention (41). The phospholipid composition of other bacteria is affected by growth medium salinity, but the physiological consequences of those changes are not known (42)(43)(44). In fact, the CL content of E. coli cells increased significantly, whereas the PE content decreased, as cells were grown in media with osmolalities in the range pertinent to ProPEc adaptation (Fig. 5). The CL content of E. coli is also known to rise during the transition to stationary phase (41).
In E. coli, CL (or diphosphatidylglycerol) is produced when CL synthase catalyzes the condensation of two PG molecules, releasing glycerol. Thus, bacteria unable to synthesize PG also lack CL (and hence most anionic lipid species). CL synthase activity is attributed to the cls gene product, but other enzymes may also contribute, since cls mutants contain residual CL. Transcription of cls begins immediately downstream from a classical σ70 promoter (45,46) and is enhanced in stationary phase (47). Tropp predicted that cls expression may also be controlled by stationary phase factor RpoS (48). Our data support that notion, since RpoS mediates transcription of multiple osmoregulatory genes including proP (49). However, cls was not identified as an RpoS-regulated gene during multiple screens designed to identify members of the RpoS regulon (50); nor was it identified as responding at the transcriptional level to osmotic stress (51)(52)(53)(54).
We observed a direct relationship between the mole fraction of CL among E. coli polar lipids and the osmolality required to activate ProPEc (Fig. 5B). A causal link between changing cardiolipin content and altered osmoregulation of ProP activity remains to be demonstrated. However, the adaptation of ProP to growth osmolality could result from the impact of increased CL and decreased PE on bulk membrane properties (e.g. increased anionic surface charge or altered potential to form the nonbilayer HII phase (55)). Alternatively, ProP could interact with and respond directly to CL.
CL constitutes a small proportion of polar lipids in E. coli, and the absolute change in CL content in response to increasing osmolality is quite small (an increase of 3-4 mol %). Polar lipid extracts from E. coli include phospholipids derived from both the cytoplasmic membrane and the inner leaflet of the outer membrane. The inner leaflet of the outer membrane contains a higher proportion of PE and a lower proportion of PG than does the cytoplasmic membrane (56). Thus, the impact of increasing osmolality on the proportion of CL in the cytoplasmic membrane may exceed that observed for the total polar lipid pool. Furthermore, CL contains two negatively charged phosphate groups versus only one in the other phospholipids. In addition, dye-binding studies suggest that CL is concentrated near the septa and poles of E. coli cells (57). Thus, the ProPEc environment could be more strongly influenced by changing CL levels if it were similarly localized. For example, the osmolality required for ProPEc activation could be determined in part by the relative affinities of the ProPEc C terminus for itself (homodimeric coiled-coil formation) and a cytoplasmic membrane surface of varying CL content (7). Bacteria with null mutations in cls contain less than 0.1% CL yet show few phenotypes. It is therefore likely that PG can assume many functions of CL (55). Nevertheless, certain enzymes are specifically CL-dependent (58), and CL is a structural component of certain membrane proteins, in some cases creating a deformable "cushion" between subunits (59). For example, CL is required for the formation of respiratory enzyme supercomplexes in the inner mitochondrial membrane of yeast (60,61). An antiparallel coiled-coil structure links the subunits in ProPEc dimers within the cytoplasmic membrane of E. coli, and it is associated with activation of ProPEc at low osmolality (18) (Fig. 6). Since the osmolality required for ProPEc activation rises as the membrane CL content rises (Fig. 5), CL may intercalate between ProPEc monomers, obstructing the conformational changes necessary for transporter activation.
FIGURE 6. ProPEc derivatives lacking the C-terminal coiled-coil can be osmotically activated. A, E. coli strain WG350 containing pMD2 (encoding His6-ProPEc-Δ11; black circles) or pMD3 (encoding His6-ProPEc-Δ18; gray circles) was prepared as described under "Experimental Procedures" in NaCl-free MOPS medium (0.13 mol/kg) supplemented with 0.1 mM arabinose. The initial rate of proline uptake was measured using assay media adjusted with NaCl to the indicated osmolalities. B, the expression levels of the truncated ProPEc variants were compared with that of full-length ProPEc by Western blotting as described under "Experimental Procedures." The lanes of the gel were loaded with equal quantities of solubilized whole cell protein. The primary antibody (selective anti-ProPEc) did not recognize epitopes constituted by residues Glu440-Glu500 of ProP (see "Experimental Procedures"). Marker, 45 kDa.
Our work with ProP orthologues revealed another interesting phenomenon. When grown at low osmolality, Group A ProP orthologues (ProPEc and ProPAt (Fig. 3) as well as OusA from Erwinia chrysanthemi (18), all with C-terminal coiled-coil motifs) could be activated at much lower osmolalities than a Group B orthologue (ProPCg, which lacks that motif) (see Figs. 1 and 3). Furthermore, substitution R488I, which disrupted coiled-coil formation by a peptide replica of the ProPEc C terminus, elevated the osmolality required to activate ProPEc (18). ProPEc variants with C-terminal deletions were characterized to further test the correlation between the coiled-coil structure and the osmotic activation threshold. Earlier we reported that removal of 26 C-terminal residues inactivated ProPEc in vivo (4). However, that study was not designed to detect limited residual activation of the transporter at very high assay osmolality. In this study, removal of sufficient sequence to preclude antiparallel coiled-coil formation (11 or 18 C-terminal residues; see Fig. 1) attenuated the maximum activity attained by the transporters and dramatically increased the osmotic activation threshold (Fig. 6). This result corroborated our hypothesis that antiparallel coiled-coil formation by the C-terminal domains of adjacent ProPEc molecules is required for its activation at low osmolality. Although other mechanisms are possible, attenuation of ProPEc activity by these deletions may indicate that the C terminus is required for interaction of ProPEc with amplifier protein ProQ.
In addition to the structure of the C-terminal domain, the membrane lipid compositions of the bacteria that encode ProP orthologues may modulate their osmotic activation. Group B orthologue ProPCg originates in a membrane composed entirely of anionic lipid, whereas the Group A orthologues examined in this study (ProPEc and ProPAt) originate in membranes containing much less anionic lipid, as does the putative Group A orthologue from Pseudomonas putida (TABLE ONE). Indeed, even higher osmolalities were required to activate ProPCg (and Na+-betaine symporter BetP) upon expression in C. glutamicum than upon expression in E. coli (22,40).
It has been proposed that osmosensing occurs when changes in cytoplasmic ionic strength or K+ concentration immediately alter interactions between existing phospholipid head groups and particular osmosensor domains or change the conformation of an osmosensory protein within the membrane (2,7). We propose that, on a longer time scale, extracellular osmolality changes elicit physiologically relevant alterations in phospholipid head group composition. The altered membrane lipid composition may alter the structures of embedded proteins (e.g. changing the conformation of ProP in a manner that elevates the extracellular osmolality required for its activation). We therefore propose that at least some effects of phospholipid composition on ProP activation are relevant to its osmotic adaptation, not to osmosensing. | 2018-04-03T03:00:59.104Z | 2005-12-16T00:00:00.000 | {
"year": 2005,
"sha1": "66dbcc09b8c64a683f865e157ae48f05a310932f",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/280/50/41387.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "7baafa082117473cde4ea6a88a1a4833f1552a60",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15742486 | pes2o/s2orc | v3-fos-license | Establishment of a promoter-based chromatin architecture on recently replicated DNA can accommodate variable inter-nucleosome spacing
Nucleosomes, the fundamental subunits of eukaryotic chromatin, are organized with respect to transcriptional start sites. A major challenge to the persistence of this organization is the disassembly of nucleosomes during DNA replication. Here, we use complementary approaches to map the locations of nucleosomes on recently replicated DNA. We find that nucleosomes are substantially realigned with promoters during the minutes following DNA replication. As a result, the nucleosomal landscape is largely re-established before newly replicated chromosomes are partitioned into daughter cells and can serve as a platform for the re-establishment of gene expression programmes. When the supply of histones is disrupted through mutation of the chaperone Caf1, a promoter-based architecture is generated, but with increased inter-nucleosomal spacing. This indicates that the chromatin remodelling enzymes responsible for spacing nucleosomes are capable of organizing nucleosomes with a range of different linker DNA lengths.
INTRODUCTION
The genomes of eukaryotes exist as chromatin. The fundamental subunit of chromatin, the nucleosome, is not a static structure but can be reconfigured dynamically. For example, variant histones can be incorporated into nucleosomes, and the histone polypeptides themselves are subject to extensive post-translational modification. In combination, such changes have led to the identification of distinct chromatin states (1)(2)(3). Chromatin states are often conserved through cell divisions, and recent studies have shown that different types of histone modification are restored at different rates (4,5). However, the processes that underlie this are poorly understood.
The positioning of nucleosomes is non-random and influences access to underlying regulatory DNA sequences (6,7).
The separation of DNA strands during replication requires dissociation of histones and raises the question of how nucleosomes are reorganized to the positions that are optimal for their functions in gene regulation. Previous studies have indicated that following replication, chromatin exists in a state that is distinct from mature chromatin. For example, pulse-chase radiolabelling has been used to show that chromatin is more sensitive to nuclease digestion 1 min following replication, but matures within about 10 min (8)(9)(10)(11)(12). Rapid reassembly of nucleosomes is supported by electron micrographs showing nucleosomes assembled close to replication origins (13). Subsequently, analysis of the regions protected from psoralen cross-linking showed that nucleosomes are assembled within 250 bp of replication forks (14)(15)(16). As DNA replication proceeds at several kilobases per minute (17), this indicates that nucleosomes are reassembled within seconds. A related approach was then used to show that nucleosomes at the rDNA locus are assembled at positions in nascent chromatin that are similar to those observed in mature chromatin 600 bp from a replication fork (18).
Since these studies were carried out, further progress has been made towards understanding how nucleosomes are organized on a genome scale. In budding yeast it has been observed that nucleosomes are organized with respect to coding genes (19,20). In some locations the underlying structural properties of DNA may contribute to nucleosome organization. However, this effect is likely to be greatest at the nucleosome-depleted regions within the vicinity of promoters (21). Trans-acting factors are implicated in the establishment of the regularly spaced arrays of nucleosomes over coding regions. Amongst these, a subset of chromatin remodelling adenosine triphosphatases (ATPases) with the biochemical capability to generate regularly spaced arrays of nucleosomes are attractive candidates (22,23). Further support for this stems from the observation that deletion of combinations of ISWI and Chd1 enzymes results in the loss of nucleosome organization over coding regions (24)(25)(26).
Although it is clear that adenosine triphosphate (ATP)-dependent chromatin remodelling enzymes act to organize nucleosomes over coding regions, it is less clear when this occurs or how long it takes. The fact that nucleosomes are organized across coding regions suggests that nucleosome organization is coupled to transcription. Supporting this, the key ATPases associated with nucleosome organization are both linked to elongating RNA polymerase: Chd1 through its interaction with the RNA Polymerase II-associated factor (PAF) complex (27) and Isw1b through its interaction with the coding region histone modification H3 K36me3 (28). However, following inhibition of transcription, promoter-based chromatin architecture persists for 20 min, becoming perturbed but not lost after 120 min (29). This indicates that ongoing transcription is not required to maintain nucleosome organization. In addition, it has been observed that yeast extracts that do not support transcription are capable of partially restoring promoter-based chromatin architecture (30). From these observations, it is not clear when nucleosome organization is established over the majority of coding regions, and especially how long it takes for this to occur following the disassembly of nucleosomes coupled to the transit of DNA polymerase.
If replication origins were used with high efficiency and identical timing in all cells within a population, it would be possible to study nascent chromatin by isolating chromatin from synchronized cultures. However, origin use and timing varies (31), possibly explaining why intermediates in chromatin reassembly are not detected in the bulk chromatin of synchronized cultures (32,33). To address this, we have developed approaches to specifically enrich for recently replicated DNA. Using these we show that the majority of nucleosomes are aligned to promoters within the minutes following replication. This supports the existence of a transcription independent pathway capable of organizing nucleosomes over gene bodies. This provides a means of reestablishing nucleosome organization on newly replicated chromosomes prior to their segregation into daughter cells. As a result genome scale nucleosome organization can be propagated through mitotic cell divisions.
Stable isotope labelling
Differential mass labelling was performed by growth in heavy medium (34) containing D-glucose-13C6,1,2,3,4,5,6,6-d7 (Cambridge Isotope Laboratories) and ammonium-15N sulphate (Sigma-Aldrich). Cells were grown in heavy media to an OD660 of 0.66 at 30 °C. The α-factor mating pheromone was added to a final concentration of 50 ng/ml for 1 h 30 min. Cell morphology was checked by light microscopy to ensure cells were in M or G1 phase. Cells were collected and washed on cellulose filter membranes with 800 ml of warm YPAD. Cells were re-suspended in 350 ml of YPAD containing 50 ng/ml α-factor and grown for 60 min at 30 °C. Cell morphology was again checked by light microscopy for shmoo formation representative of G1 arrest. Cells were filter washed with 800 ml of YPAD and released into 350 ml of YPAD (isotopically light) S-phase medium at 23 °C. Approximately 50 ml of cells were collected at defined time points and treated with formaldehyde to allow fixation for subsequent chromatin digestion.
CsCl gradient ultracentrifugation
A solution of CsCl (Sigma) and T10E100 was made to a starting density of 1.4 g/g (CsCl/T10E100). A total of 90 μl (in T10E0.1, pH 7.5) of MNase-digested, differentially mass-labelled DNA was mixed with 9.3564 g of CsCl solution and sealed in a 5.1 ml ultracentrifugation tube (Beckman Coulter). Centrifugation (Vti 65.2 rotor) was performed sequentially at 65 000 rpm for 50 h, 50 000 rpm for 18 h, 28 000 rpm for 3.5 h and brought to rest with the slow brake setting applied.
Ultracentrifugation tubes were fixed to a retort stand and pierced at the base and then top with a small bore needle. Mineral oil was pumped in the top of the ultracentrifugation tube, forcing drop-wise elution from the tube at a rate of ∼400 μl/min. A total of 250 μl of CsCl gradient was collected per fraction, allowing collection of ∼20 fractions per gradient. Gradient fractions were subsequently dialysed against water (50 ml) on a floating dialysis membrane (Millipore) for 60 min. Fractions 9 and 17 were chosen to represent the non-replicated (HH) and replicated (HL) portions of the gradient respectively.
EdU labelling in synchronized cultures
Cultures were grown to an OD660 of 0.66 at 30 °C in YPAD and synchronized with α-factor. Cells were filter washed with YPAD and released into YPAD medium containing 50 μM EdU at 23 °C. Cells were harvested at defined time points and were fixed with formaldehyde for subsequent MNase digestion.
EdU labelling in asynchronous cultures
Cultures were grown to an OD 660 of 0.8 at 23 • C in YPAD. EdU was added to a final concentration of 100 M EdU. Cells were harvested at defined time points and fixed with formaldehyde for subsequent MNase digestion.
Biotinylation and isolation of EdU labelled nascent DNA
Biotin azide was attached to EdU-labelled DNA using the Click-iT Nascent RNA Capture Kit (Invitrogen, C10365). EdU-labelled DNA replaced EU-labelled RNA in the protocol. Isolation of biotinylated DNA was achieved using Dynabeads MyOne Streptavidin T1 (Invitrogen).
Chromatin digestion and deep sequencing
Cells were cross-linked by addition of formaldehyde to a final concentration of 1% v/v for 10 min at room temperature (RT). Crosslinking was quenched with addition of 2.5 M glycine to a final concentration of 0.125 M and cells were further incubated for another 5 min at RT. Crosslinked cells were washed 3× with ice cold Tris-buffered saline (20 mM Tris pH 7.5, 120 mM NaCl). Cells were mechanically lysed according to (35) and digested using micrococcal nuclease (MNase) according to (36). MNase titrations were selected to obtain largely mononucleosomal DNA with larger nucleosomal DNA fragments apparent. Nucleosomal DNA was prepared to create a library for paired end deep sequencing on Illumina platforms. Briefly, DNA was blunt ended, A-tailed and ligated to Illumina genomic adapters, followed by a final polymerase chain reaction with a size-selecting gel purification. Sequencing data is deposited at ENA ref PRJEB13217 (to be released upon acceptance for publication). Supplementary Table S1 provides a summary of the datasets released. Reads were mapped to the genome using bowtie (37). Representation of reads across individual loci was performed using IGB (38). Data was then analysed using custom python scripts included as Supplementary Data. For average plots surrounding multiple reference points, each value was divided by the sum of reads for each dataset as a means of normalization, as illustrated in the python script accompanying the supplemental materials. Where applied, data was smoothed using a 75 bp moving average. For plots of nucleosomal reads across whole chromosomes, data was twice smoothed using a 10 000 bp moving average.
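The normalization and smoothing steps described above are simple to reproduce. The following is a minimal sketch, not the authors' released supplementary scripts, of how per-dataset read-sum normalization and a 75 bp moving average could be applied to a TSS-aligned coverage profile; the array names and the placeholder input data are illustrative assumptions.

```python
import numpy as np

def normalize_and_smooth(profile, window=75):
    """Divide a TSS-aligned read-count profile by its total read count,
    then smooth with a centred moving average (75 bp by default)."""
    profile = np.asarray(profile, dtype=float)
    normalized = profile / profile.sum()      # per-dataset normalization
    kernel = np.ones(window) / window         # 75 bp moving-average kernel
    return np.convolve(normalized, kernel, mode="same")

# Illustrative usage over -500..+1500 bp relative to the TSS;
# the Poisson draw is stand-in data, not values from the study.
positions = np.arange(-500, 1500)
nascent_profile = np.random.poisson(lam=5, size=positions.size)
nascent_smoothed = normalize_and_smooth(nascent_profile)
```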
Imaging of EdU labelled nascent DNA
Cultures were grown to an OD660 of 0.5 at 23 °C in YPAD. EdU was added to a concentration of 100 μM for defined time points. Cells were fixed with 2% formaldehyde for 30 min and washed 3× with phosphate buffered saline (PBS). Cells were incubated with 0.5% Triton X-100 for 25 min. Cells were then washed 2× with 3% bovine serum albumin (BSA) in PBS. Cells were further processed for the Click-iT EdU reaction as described in the protocol C10337 (Invitrogen). Subsequently cells were washed 2× with 0.1% Tween in PBS and finally 2× with 3% BSA in PBS. The images were acquired with widefield microscopy using the OMX Blaze platform.
Affinity purification of EdU containing nucleosomal DNA provides a means of studying chromatin within minutes of replication
The thymidine analogue 5-ethynyl-2′-deoxyuridine (EdU) differs from thymidine only at the 5 position and is incorporated by DNA polymerase in place of thymidine (39). Following incorporation into DNA, EdU can be coupled to biotinylated azide, which provides a means of affinity purification (Figure 1A). To ensure that EdU was available for rapid incorporation we used a strain in which five copies of the herpes simplex thymidine kinase were expressed from GDP1 promoters (40) and the human equilibrative transporter 1 (ENT1) gene was expressed from the ADH1 promoter (41,42). Fluorescent labelling of EdU was used to assess the rate at which it gets incorporated into cells. A progressive increase in the number of cells with fluorescent foci was observed following incubation of an asynchronous culture with EdU between 5 and 60 min (Supplementary Figure S1A). This indicates that the time taken for EdU to enter cells and reach concentrations comparable with the endogenous pool of thymidine is less than 5 min, as foci will only be detected by microscopy once sizable tracts of EdU have been incorporated.
To provide a means of isolating chromatin assembled on recently replicated DNA, cultures were released from G1 arrest into media containing EdU. Chromatin was prepared from cultures at various time points and streptavidin beads used to purify replicated chromatin from the total input chromatin at each time point. When the distribution of nucleosomes on recently replicated DNA was plotted across chromosome XIII, reads were found to be highly enriched (∼20-fold) and tightly distributed surrounding replication origins (43) 27.5 min following release from G1 arrest (Figure 1B and C). At later time points the enrichment at origins reduces and spreads away from origins, consistent with the replication of the majority of the genome between 25 and 60 min following release from G1 arrest (Supplementary Figure S1B).
When nucleosomal reads were aligned with respect to promoters, it was notable that the amplitude of the nucleosomal oscillation was less pronounced than that observed in input chromatin (Figure 2A). Over subsequent time points promoter-based nucleosome organization is restored to the state observed in input material (Figure 2A-D). This indicates that it is possible to monitor the re-establishment of chromatin organization in the minutes following replication. In order to investigate whether the maturation observed when averaging all genes was also observed at individual loci, the distribution of reads was plotted across selected loci. At regions close to origins where read depth at the early time points is high, nucleosomal features were apparent at the earliest time point and are often observed to become better defined at a rate consistent with the average at all genes (Figure 2E). In some cases, rates of maturation differed from the genome average; for example, some features appear to be established at the earliest time point and either decay or remain unchanged (Figure 2F). Nascent chromatin from the early stages of replication was subject to greater amplification than used in conventional MNase-Seq reactions. This may contribute to the sporadic distribution of reads distant from replication origins (Figure 2G). The relatively disordered nature of nascent chromatin complicated the use of nucleosome calling algorithms and clustering to identify cohorts of genes that mature at similar rates.
The kinetics of chromatin organization
Budding yeast have defined origins of replication; by definition, the early stages of replication take place close to origins. The profile of reads surrounding origins allows the mean length of DNA replicated to be estimated within the vicinity of each isolated origin. The total length replicated at the 27.5 min time point typically ranges from 0 to 33 kb. Although the base of the peak flanking many replication origins is ∼33 kb, the majority of the reads flanking each origin are considerably shorter. This arises from the fact that origin firing is stochastic (31) and as a result at later time points additional origins fire in different cells, but these have time to replicate progressively shorter regions. The distance from one side of an origin required to account for 50% of the read depth was calculated as 4500 ± 600 bp. This means that on average DNA polymerase has travelled 4500 bp (4.5 kb) at this time point. As the rate of DNA replication has been measured as 1.6 kb/min (17), this means that on average within the 27.5 min sample we can assume DNA had been replicated for 2.8 min. In addition, we can measure the extent to which chromatin is organized for nucleosomes at different positions within the coding region. This was achieved by measuring the amplitude of the nucleosomal oscillation (Figure 3A) in nascent chromatin as a fraction of that in the input chromatin for different time points. Relative nucleosome organization could then be plotted against the time following replication calculated with reference to the length distribution of fragments surrounding origins (Figure 3B). A fit of the data points to the rate equation for a first order reaction enables the half time for nucleosome organization to be estimated as 2.1 min.
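As a worked illustration of the arithmetic in this section, the sketch below converts the median replicated distance into a time since replication using the 1.6 kb/min fork rate cited above, and fits relative nucleosome organization to a first-order rate equation to recover a half-time. It is an assumption-laden reconstruction, not the authors' analysis code, and the data points fed to the fit are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

FORK_RATE_KB_PER_MIN = 1.6          # fork rate cited in the text (ref. 17)
median_distance_bp = 4500           # distance accounting for 50% of read depth

# Time since replication for the 27.5 min sample: distance / rate (~2.8 min)
time_since_replication = (median_distance_bp / 1000) / FORK_RATE_KB_PER_MIN

def first_order(t, k):
    """Fraction of mature nucleosome organization at time t, first-order kinetics."""
    return 1.0 - np.exp(-k * t)

# Placeholder (time post-replication in min, relative organization) points
t = np.array([2.8, 5.0, 7.5, 12.5, 20.0])
frac_organized = np.array([0.6, 0.8, 0.9, 0.96, 1.0])

(k_fit,), _ = curve_fit(first_order, t, frac_organized, p0=[0.3])
half_time = np.log(2) / k_fit       # reported in the text as ~2.1 min
```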
Nucleosomes are restored at replication origins within minutes of replication
The timing with which chromatin is restored is short, ∼2 min, in comparison to the half-time for transcription of yeast genes, 8 min (44). This raises the question: does the alignment of nucleosomes with promoters require transcription? One way of investigating this further is to study the organization of chromatin at cohorts of genes that are likely or unlikely to be expressed during the period of EdU labelling. To do this, cohorts of genes were selected based on expression during the cell cycle (45). Nascent chromatin for genes expressed in G1 or S-phase was disordered at the 27.5 min time point (Supplementary Figure S2A and C). However, by 35 min from release from G1 arrest nucleosomes had adopted a more similar organization at genes expressed in S-phase in comparison to genes expressed in G1 (Supplementary Figure S2B and D). Little effect was observed if the maturation of chromatin was compared for genes expressed at high and low levels in asynchronous cultures (Supplementary Figure S2E-H). The stronger initial alignment of nucleosomes with genes expressed during S-phase could result from the coupling of ATP-dependent nucleosome spacing with transcriptional elongation. Alternatively, genes expressed in S-phase may have higher occupancy of bound transcription factors capable of acting as a reference point from which nucleosomal arrays can be established. Distinguishing between these explanations could be assisted by studying alignment of nucleosomes to a feature not involved in transcription.
Within the yeast genome it is known that nucleosomes are also aligned to replication origins (46). Alignment of nascent nucleosomes to replication origins shows that nucleosomes are substantially aligned with replication origins at the 27.5 min time point ( Figure 4A). By 32.5 min the +2 and +3 nucleosomes are fully organized which is consistent with the half-time observed for chromatin restoration at promoters. The magnitude of the +1 nucleosome varies during S-phase perhaps reflecting changes to accessibility at origins during S-phase. Replication origins are often located close to promoters, so a subset of replication origins with no promoter located within 500 bp was also studied ( Figure 4 E-H). At these 127 origins, positioning of the +2 and +3 nucleosomes was also re-established by 32.5 min following release from G1 arrest. This provides additional evidence that the realignment and spacing of nucleosomes does not require transcription.
Defects in chromatin assembly result in disruption and delay in the organization of nascent chromatin
It is known that histone chaperones such as Asf1 and Caf1 assist in the delivery and assembly of nucleosomes on newly replicated chromatin (47)(48)(49)(50). The chromatin from asynchronous cultures of strains mutated for these chaperones shows defects in the positioning of promoter-distal nucleosomes (51). We next investigated the effect that mutations in these chaperones had on nascent nucleosome organization. Differences observed include a reduction in the amplitude of the nucleosome oscillation, a reduction in the occupancy of the +1 nucleosome and changes to the positioning of nucleosomes (Figure 5A and C). These changes were less prominent in mature chromatin (Figure 5B and D).
Reduced histone supply results in increased inter-nucleosome spacing in nascent chromatin
The cac1Δ mutant is especially interesting, as it has been shown that fewer nucleosomes are deposited on replicated DNA in strains mutant for components of the CAF1 complex (52). This provides an opportunity to investigate the effect of nucleosome depletion during the course of chromatin organization. We found that the combination of growth in the presence of EdU and the cac1Δ mutation resulted in substantial checkpoint activation. Prolonged exposure to EdU has previously been observed to activate DNA damage checkpoints (53,54) and, in combination with mutation of CAC1, progression through S-phase was severely disrupted, making it impossible to study the maturation of chromatin in this mutant using the EdU approach.
Instead, we used an alternative approach to separate replicated DNA fragments. This involved adaptation of the classical isotope labelling approach (55) for separation of nucleosome-length DNA fragments. This relies on the ability of CsCl gradients to resolve the difference in the mass of DNA fragments labelled on both strands with heavy isotopes of 13C and 15N from replicated DNA in which only one strand includes heavy isotopes (Supplementary Figure S3A). Importantly, this involves no chemical change to DNA that could contribute to replication stress. This approach has previously been used to monitor the progression of replication genome wide (34), but is typically applied to the separation of fragments that are kilobases in length. In order to achieve separation of smaller fragments we increased the mass difference achieved by isotope labelling through growth on D-glucose-13C6,1,2,3,4,5,6,6-d7. This sugar enables heavy labelling of both carbon and non-exchangeable hydrogen atoms. These atoms result in an increase in the mass difference from 13 to 18 Da per base. Using this approach in synchronized cultures, nascent nucleosomes are observed to be enriched flanking replication origins (Supplementary Figure S3B and C). The earliest time point at which we could isolate replicated DNA from wild-type strains using this approach was 33 min following G1 arrest, at which time nucleosomes were observed to be significantly promoter aligned and to become fully aligned over subsequent time points (Supplementary Figure S3D-G). Adoption of this approach with the cac1Δ mutant showed that replication proceeds with similar timing to the CAC1 parental strain (Supplementary Figure S4), as has been observed previously (56). Alignment of nucleosomal reads to the TSS over this time course reveals progressive organization of nucleosomes indicated by an increase in the amplitude of the nucleosomal oscillation (Figure 6A-G). Interestingly, we also observe shifts in the centres of the nucleosomal peaks in nascent HL chromatin compared to unreplicated HH chromatin for the same time points (Figure 6A-G). Quantitation of this defect indicates that it is greatest at the 48 min time point, which corresponds to mid S-phase, and decreases as chromatin matures at later time points (Figure 6H). In addition, the number of base pairs by which each nucleosome is shifted increases in increments of ∼5 bp for progressively more 3′ nucleosomes (Figure 6H). This is consistent with an increase in the spacing between nucleosomes on nascent DNA from 165 to ∼170 bp.

[Figure 4 legend (opening truncated): ... (46). Nascent (blue) and input chromatin (orange) are plotted 27.5 (A), 32.5 (B), 35 (C) and 45 min (D) following release from G1 arrest. The +2 and +3 nucleosomes are significantly ordered at the first time point and this improves over the following minutes. As many replication origins are located adjacent to transcribed genes, the same analysis was performed with 127 replication origins for which no TSS was present within 500 bp of the origin (E-H). Nucleosomes are not as precisely aligned to TSS-free origins in comparison to all origins (compare input chromatin in A-D to that in E-H). In particular the nucleosome-depleted region at origins is poorly defined in early S-phase. Organization of the +2 and +3 nucleosomes at replication origins with no adjacent TSS matures at a similar rate to that observed at all origins.]
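The ∼5 bp per-nucleosome increments described above translate into a repeat length by simple linear regression: if nascent arrays are laid down with a repeat that is Δ bp longer, the +n nucleosome should be displaced by roughly n × Δ bp. The sketch below is a hypothetical illustration of that calculation with placeholder shift values, not the study's measurements.

```python
import numpy as np

# Index of each coding-region nucleosome (+1, +2, ...) and the shift of its
# peak centre in nascent (HL) versus unreplicated (HH) chromatin, in bp.
nucleosome_index = np.array([1, 2, 3, 4, 5])
peak_shift_bp = np.array([4, 10, 14, 21, 25])   # placeholder values

# Least-squares slope through the origin gives the per-nucleosome increment,
# i.e. the increase in repeat length on nascent DNA (~5 bp here).
delta_spacing = np.sum(nucleosome_index * peak_shift_bp) / np.sum(nucleosome_index ** 2)

mature_repeat = 165                              # bp, repeat in mature chromatin
nascent_repeat = mature_repeat + delta_spacing   # ~170 bp for a ~5 bp increment
```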
A similar increase in the length of dinucleosomal fragments was also observed, providing a direct measure of transiently increased inter-nucleosome spacing (Supplementary Figure S6C). Comparing the maturation of chromatin between the nascent chromatin in wild-type and cac1Δ mutant strains shows that the defect in nucleosome positioning is most pronounced at time points in mid S-phase (Figure 7A and B). In late S-phase nucleosome spacing is restored almost to that observed in the wild-type (Figure 7C). As a result it seems plausible that a subpopulation of cells in S-phase contribute to the smaller defect in spacing observed in asynchronous cultures (Figure 7D).
The changes to spacing observed in Figures 6 and 7 indicate that mutation of CAC1 results in the establishment of a promoter-based chromatin architecture with increased nucleosome spacing. This altered chromatin is then converted to a form that is more similar to that observed in the wild-type. One possible explanation for this would be that, as a result of the cac1Δ mutation, nucleosomes are assembled at reduced density. However, over time the normal density of nucleosomes is restored over coding regions. One way in which this could occur is as a result of post-replicative redistribution of nucleosomes via replication-independent histone turnover. It is known that replication-independent histone turnover is higher at some regions, such as promoters, than it is on coding regions (57,58). Thus it is possible that replication-independent turnover could act to redistribute nucleosomes from sites of high turnover to coding regions. As the HIR complex is required for replication-independent histone turnover at many sites (58), we investigated this by studying nucleosome organization in hir1Δ mutants. Mutation of HIR1 alone results in a reduction in the amplitude of the nucleosomal oscillation on coding regions, but little change in nucleosome spacing (Figure 7E). In a hir1Δ cac1Δ double mutant there is increased occupancy of histones within the nucleosome depleted region. This is consistent with Hir1 normally playing a role in removing nucleosomes from the nucleosome depleted region (NDR) in the absence of Cac1. The nucleosomal oscillation is dampened in the hir1Δ cac1Δ strain, indicating that nucleosomes are not spaced as effectively in this mutant. In addition, the residual promoter-based nucleosomes show a defect which is increased in comparison to that observed in the cac1Δ mutant. This is consistent with the idea that replication-independent histone turnover acts to restore nucleosome density and, as a consequence, nucleosome spacing over coding regions.

[Figure 7 legend (opening truncated): ... (A and B). This is restored in late S-phase (C). The spacing defect in asynchronous total chromatin (D) is less than that observed in mid S-phase (A and B). Nucleosomal reads from asynchronous wild-type, cac1Δ, hir1Δ and hir1Δ cac1Δ strains were aligned to the TSS of all genes (n = 5015) (E). The nucleosome depleted region at promoters is partially filled in a hir1Δ cac1Δ strain (green) in comparison to cac1Δ (orange), hir1Δ (blue) and wild-type (grey). The defect in nucleosome spacing is quantified in (F). The defect is increased in the hir1Δ cac1Δ strain, consistent with replication-independent histone turnover acting to restore nucleosome density on coding regions.]
DISCUSSION
The EdU-based affinity purification approach described here provides a means to quantitatively assess the realignment of nucleosomes with promoters genome wide. The system relies on the presence of defined origins of replication in budding yeast and the use of synchronized cultures. A limitation is that the timing with which individual replication origins fire is stochastic, with individual origins initiating over a distribution of times (31). We address this by calculating timing based on the lengths of DNA fragments replicated at time points following release from arrest. This enables us to estimate the half time for reassembly of a promoter-based chromatin architecture as ∼2 min. This time scale is also consistent with the data we obtained using isotope labelling. The enrichment for nascent chromatin was ∼6-fold using the CsCl approach in comparison to 20-fold using EdU, meaning that we could not enrich for chromatin at very early time points. However, promoter-based chromatin was largely (∼90%) re-established at the earliest time point corresponding to 4.9 min post-replication (Supplementary Figure S3D). We have also isolated chromatin from asynchronous cultures following incubation with EdU for times as short as 5 min (Supplementary Figure S5). Using this approach, DNA is labelled at all distances from replication origins so we cannot use DNA fragment lengths to infer timing. The time taken for EdU to outcompete the intracellular pool of thymidine is not known, but is <5 min based upon detection of EdU tracts by microscopy (Supplementary Figure S1A). This means that the observation of ∼80% chromatin organization after 5 min could have occurred <5 min following replication but not more. Using all three approaches we observe that promoter-based chromatin is restored to between 70 and 90% of the level observed within native chromatin 5 min post-replication. Isolation of chromatin from earlier time points requires greater amplification. In our experience, the resulting data was not suited for high resolution nucleosome mapping, even after averaging for many genes.
Rapid chromatin reorganization post-replication is consistent with previous observations of chromatin reassembly behind replication forks within seconds (13)(14)(15)(16). The initial deposition of histones is likely to occur so rapidly that we do not detect a substantially nucleosome-free state. The read length distribution we observe in nascent chromatin shows a strong peak in the 165 bp size range, consistent with the assembly of canonical nucleosomes. However, our fragment amplification was not tuned to identify subnucleosomal species that have been observed in vitro (59,60). Following the initial steps in nucleosome assembly, we find that nucleosomes are aligned to promoters over the following minutes. One previous study reported that three nucleosomes within the rDNA intragenic region are repositioned to the locations observed in asynchronous cells within a few seconds (18). This is considerably faster than we have observed here. It is possible that positioning of nucleosomes on the 5S intragenic regions is unusual in that it is more strongly influenced by the underlying DNA sequence than is the case for most coding region nucleosomes. Supporting this, a DNA sequence that partially overlaps this locus has been observed to position nucleosomes similarly in vivo and in vitro (61). Rapid alignment of chromatin with promoters is also consistent with the alignment of Okazaki fragments with nucleosome dyads (62). Our study provides a more direct measurement of timing, as in that approach Okazaki fragments are harvested typically 2.5 h after depletion of DNA ligase (62) and there is potential for the positions of nicks to change as a result of fragment maturation during this time (63).
Re-establishment of a promoter-based chromatin architecture over 2 min is fast in comparison to the half time for transcription of yeast genes, 8 min (44). This suggests that promoter alignment does not require transcription, which is further supported by the observation that nucleosome alignment occurs over a similar time course at replication origins where no coding transcription is anticipated. An attractive and simple model to explain the positioning of arrays of nucleosomes involves a barrier acting as a reference point from which nucleosomes are statistically positioned (64)(65)(66). A range of different DNA bound factors may be capable of acting in this way. For example, fortuitous binding of TFIIB has been observed to coincide with the establishment of promoter-like nucleosomal arrays (21). In vitro it has been observed that binding of lac repressor can act as a reference point for the phasing of nucleosome arrays (67). In vivo a number of factors including Tbf1, Reb1, Abf1 and Rsc3 are implicated in maintaining chromatin organization at promoters (51). The process of positioning could be facilitated by ATP-dependent nucleosome spacing enzymes such as Chd1 that are capable of redistributing nucleosomes to locations equidistant between neighbours (24,(68)(69). This provides a means by which the rapid alignment of nucleosomes with transcriptional start sites and replication origins could occur as a result of the rapid rebinding of DNA binding proteins, which can then act as a reference point for positioning nucleosome arrays directed by remodelling enzymes. Changes in the distribution of strong DNA binding proteins capable of acting as reference points from which arrays are positioned could result in changes to the organization of nucleosomes at specific regions throughout the cell cycle or in response to environmental changes. This potentially provides an explanation for changes to chromatin at large cohorts of genes during the cell cycle and in response to metabolic changes (33,70), both of which are not necessarily linked to DNA replication. It is also worth mentioning that while our study has focused on chromatin organization post-replication, it is likely that there is a distinct transcription-linked pathway that acts to restore chromatin following transit by RNA polymerases (28,71). If this is as rapid as replication coupled assembly, methods that can isolate chromatin in the minutes or seconds following transcription may be required to characterize this further.
Previous studies have shown that nucleosome spacing appears to be insensitive to a reduction in histone copy number (30,51,(72)(73). As a consequence, alternative models have been proposed in which linker DNA length is directly sensed during the course of nucleosome spacing reactions (30). Following depletion of CAF1 subunits it is known that histones are depleted and that nucleosome density is reduced in the total chromatin of asynchronous cultures (51,52). Given that CAF1 functions in chromatin assembly following replication, it is likely that histone supply is most severely compromised during the assembly following replication. We observe an increase in inter-nucleosome spacing from ∼165 to 170 bp in the nascent chromatin of a cac1Δ mutant yeast strain. This defect in spacing affects coding region nucleosomes and is most pronounced in mid S-phase when histone supply is likely to be most critical. In the minutes following replication, this extended spacing is restored to that observed in asynchronous cultures. Browsing through individual loci, the evidence for a change in nucleosome spacing is not as clear as in the data averaged for all genes (Supplementary Figure S6). A range of effects are observed. In many cases the dominant nucleosome positions are retained, but the pattern becomes less ordered around 48 min following release from G1 arrest (Supplementary Figure S6A), when the average defect is largest (Figure 6). In some cases shifts in positioning consistent with an increase in spacing are observed, but these are quite heterogeneous, with some shifts appearing considerably larger than those observed on average (Supplementary Figure S6B). A major problem with using any single nucleosome positioning dataset to infer nucleosome spacing is that it is difficult to know which adjacent dyad locations observed in a population of cells are normally both occupied in the same cell. This is especially acute when nucleosome locations are not well defined, as is the case for nascent chromatin. To address this, dinucleosomal fragments were sequenced. As dinucleosomal fragments encompass two nucleosomes and the intervening linker, they must be present on the same molecule. Assuming that the DNA protected by mononucleosomes remains constant, the change in dinucleosomal fragments reports directly on changes in linker length. A change in the mean length of dinucleosomal fragments is observed that is similar to the average change in mononucleosome positioning across genes (Supplementary Figure S6C). The size distribution of the HL dinucleosomes from the cac1Δ strain is quite broad at the early time points following release from arrest. This could result from the presence of nucleosomes deposited with variable spacing immediately following replication. In mid S-phase the distribution of dinucleosomal fragment lengths becomes better defined, but the most frequently observed lengths are just over 330 bp, 10 bp longer than observed in the unreplicated chromatin prepared from the same digests (Supplementary Figure S6C). As with the defect in mononucleosome spacing, this difference is reduced 80 min following replication. Minor differences between the data from mononucleosomes and dinucleosomes may reflect differences in the effects at coding regions (TSS-aligned mononucleosomes) in comparison to all nucleosomes (dinucleosome data).
One of the most plausible explanations for the extension in linker length during S-phase is that reduced histone density during S-phase has an impact on statistically based spacing of nascent nucleosomes directed by ATP-dependent chromatin remodelling enzymes (64)(65)(66). Nucleosome spacing enzymes such as ISWI and Chd1 have many of the biophysical properties needed to accelerate a statistically based mechanism for nucleosome spacing. They can accelerate bidirectional nucleosome movement (74) and do so in a way that is sensitive to the length of DNA adjacent to nucleosomes (69,(75)(76). This sensitivity to linker DNA may act as a lower limit below which repositioning of adjacent nucleosomes is less efficient. Consistent with this, different enzymes have been observed to establish arrays of nucleosomes with different periodicities in vitro (68) and changes to linker lengths are observed following changes to ionic conditions or incorporation of linker histones (68,77). Our observations are, however, more difficult to reconcile with more recent reports using in vitro systems indicating that histone density does not affect nucleosome spacing (30,78). It is difficult to formally rule out the possibility that cac1 mutations alter the expression or activity of specific remodelling enzymes. For example, it has recently been proposed that Isw1 acts to generate wider-spaced arrays of nucleosomes than Chd1 (79) and an increase in the contribution of Isw1 relative to Chd1 during S-phase could contribute to the observed effects. Further investigation will be required to resolve this.
A key question arising from the observation of altered spacing in the cac1Δ strain is how nucleosomes are restored to a periodicity more similar to that observed in wild-type strains in mature chromatin. One possible explanation is that histone depletion is unevenly distributed across genomes in post-replicative chromatin. There is evidence to support this, as previous studies have noted reduced nucleosome occupancy following histone depletion at promoters, regions enriched for Htz1 and DNA sequences unfavourable for nucleosome formation (51,(72)(73). It is known that replication-independent histone turnover is more pronounced at specific genomic regions such as promoters and regions enriched for Htz1, while it is reduced at nucleosomes enriched for genic histone modifications (57,72). While replication-independent histone turnover acts to maintain an equilibrium between assembly and disassembly in wild-type cells, this may be perturbed during conditions of histone depletion, with the net effect of reducing histone occupancy at sites of high turnover and increasing it elsewhere. To investigate this further, we characterized nucleosome organization in strains in which the Hir1 component of the HIRA complex has been mutated. This complex is required for replication-independent histone turnover at many sites in a range of species (58,(80)(81)(82). Interestingly, it is required both for turnover at sites such as promoters and for maintaining chromatin integrity over coding regions (83)(84)(85). We observe partial filling in of the NDR at promoters in hir1Δ cac1Δ double mutants, consistent with a role for replication-independent turnover in influencing how a histone deficit is distributed across genes (Figure 7E). In addition, the defect in spacing is increased in asynchronous hir1Δ cac1Δ in comparison to cac1Δ (Figure 7F). This effect may be partially mitigated by the role the HIR complex plays in repressing histone gene expression outside of S-phase (86), as this would be anticipated to reduce rather than increase inter-nucleosome spacing. As a consequence we believe that replication-independent histone turnover mediated by HIRA and other factors has the potential to explain why histone depletion in vivo does not result in systematic changes in the nucleosomal repeat in asynchronous cultures.
The rapid re-establishment of chromatin means that the nucleosomal platform for gene expression is re-established prior to the partition of chromosomes into daughter cells. This potentially acts to maintain gene expression programs through cell divisions. However, it should be noted that while nucleosomes are rapidly reorganized, reestablishment of the distributions of certain histone modifications is rapid while for others it is delayed (4,(87)(88). One of the major consequences of a loss of nucleosome organization is increased intragenic transcription (28,51,84,89). Limiting the time during which chromatin is perturbed reduces the opportunity for potentially disruptive intragenic transcription. However, the disruption of chromatin during replication may also provide an opportunity for the reprogramming of expression. The 2 min half-time we have measured may balance these opposing requirements. | 2017-10-17T16:36:03.110Z | 2016-04-22T00:00:00.000 | {
"year": 2016,
"sha1": "11556a9f6f2fdc421bee692207db747adca95fc2",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/article-pdf/44/15/7189/17437127/gkw331.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "97392085cabeecdb472b06416383e3dfb4bd103d",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
25396626 | pes2o/s2orc | v3-fos-license | Cement Salvage of Instrumentation-Associated Vertebral Fractures
The authors describe the treatment of 22 vertebral compression fractures in 11 patients with metastases and prior spinal instrumentation. Pain improved in all patients, only 1 patient needed additional surgery, and there were no vertebral cement augmentation–related complications. BACKGROUND AND PURPOSE: Spinal instrumentation plays a key role in the treatment of spinal instability in patients with metastatic tumors. Poor bone quality, radiation, and diffuse osseous tumor involvement present significant challenges to spinal stabilization with instrumentation and occasionally result in postinstrumentation compression fractures. Vertebral cement augmentation has been effective in the treatment of painful tumor-related compression fractures. Our objective was to describe cement augmentation options in the treatment of vertebral compression fractures associated with spinal instrumentation in patients with metastatic tumors. MATERIALS AND METHODS: Patients who underwent percutaneous vertebral cement augmentation in the treatment of instrumentation-associated vertebral compression fractures between 2005 and 2011 were included in the analysis. Only fractures that occurred within the construct or at an adjacent level were included. The change in Visual Analog Scale and need for further surgery were analyzed. RESULTS: Eleven patients met the inclusion criteria, with 8 tumors located in the thoracic spine and 3 tumors in the lumbar spine. The median time between instrumented surgery and vertebral augmentation was 5 months (1–48 months) and the median follow-up after cement augmentation was 24 months (4–59 months). A total of 22 vertebrae that were either within or immediately adjacent to the surgical instrumentation underwent vertebral augmentation. All patients reported a decrease in their pain scores (mean decrease: 6 Visual Analog Scale points; P < .003). One patient required reoperation after cement augmentation. None of the patients experienced vertebral cement augmentation–related complications. CONCLUSIONS: Vertebral cement augmentation represents a safe and effective treatment option in patients with recurrent or progressive back pain and instrumentation-associated vertebral compression fractures.
The role of surgery in the treatment of metastatic spinal tumors has been firmly established as an effective and safe method for spinal cord decompression and stabilization of the spine. The goals of surgery for spinal metastases remain palliative and include preservation or restoration of neurologic function and pain control. Tumor control is largely accomplished using radiation and chemotherapy. In patients with metastatic spinal tumors, spinal instrumentation is required in most cases to provide spinal stability after circumferential spinal cord decompression. Spinal fixation in this patient population can be quite challenging because of extensive osteoporosis and lytic tumor destruction. Furthermore, chest wall resection may be required, further destabilizing the spine and increasing the risk of fixation failure. Prior spine radiation results in increased risk of vertebral compression fractures. [1][2][3] Failure of fixation may require interruption or delay of systemic or radiation therapy, increasing the risk of local or systemic tumor progression. Vertebral compression fractures either within or adjacent to the surgical construct often result in either recurrent or progressive back pain.
Percutaneous vertebral cement augmentation (ie, balloon kyphoplasty/vertebroplasty) has been established as a safe and effective method of quickly achieving pain control in osteoporotic and tumor-related compression fractures. 4,5 Cement has also been used to reinforce screws at the time of insertion. 6,7 However, little information exists regarding its use as a salvage technique for instrumented patients who develop recurrent back pain secondary to new vertebral compression fractures within or adjacent to their surgical construct. We report a series of patients in whom percutaneous vertebral cement augmentation was used as an initial treatment of symptomatic instrumentation or junctional fractures in place of open hardware revision.
Patient Population
Patients who underwent kyphoplasty or vertebroplasty and surgery for the treatment of spinal metastatic tumors between 2005 and 2011 were included in the study. A waiver of institutional review board authorization and informed consent was obtained from the institution to collect the existing data regarding these patients. Among the 29 patients who fit these inclusion criteria, 18 patients were excluded because the postcement augmentation follow-up was less than 2 months, they underwent cement augmentation before surgical stabilization, or the cement augmentation levels were more than 1 level outside of the instrumented levels. The charts and imaging studies of the remaining 11 patients were retrospectively reviewed for tumor histology, tumor level, decompression, instrumentation and cement augmentation levels, further revision surgery, and Visual Analog Scale (VAS) scores.
Surgery
All patients underwent separation surgery, an open surgical technique that separates epidural disease to reconstitute the thecal sac and posterior stabilization, followed by postoperative radiation therapy. 8 To provide circumferential thecal sac decompression at the level of epidural extension of the tumor, laminectomy with bilateral or unilateral facetectomy and resection of the ventral epidural tumor with very limited vertebrectomy were performed. Spinal stabilization was provided by posterolateral fixation at least 2 levels above and below the tumor. All patients were treated with adjuvant radiation therapy that consisted of either conventional external-beam radiation or stereotactic radiosurgery (SRS) that was selected based on tumor histology and prior radiation history.
Cement Augmentation
All patients had cross-sectional imaging of the spine before the procedure usually consisting of MR imaging and often a CT scan. This determined which vertebrae to augment and helped in the planning of the trajectory of the introducer needles for subsequent cement augmentation. The procedure was performed under general anesthesia in the interventional radiology suite that has both fluoroscopic and CT (conebeam as well as collimated) capabilities. Both CT and fluoroscopy were used for placement of the introducer needles. The trajectory of the introducer needles was dictated by the hardware and anatomy (Fig 1). During the fluoroscopic portion of the procedure, oblique views of the spine were often required in addition to the more standard anterior-posterior and lateral views to "throw off" the hardware and to allow better visualization of the introducer needles and cement infusion. If there was any question regarding needle or cement location, an intraprocedural conebeam CT scan was obtained. Deciding between balloon kyphoplasty and vertebroplasty was determined during the procedure by needle trajectory and the anatomy. At the levels without intrapedicular screws or with only a unilateral screw, an inflatable bone tamp (Medtronic MIS, Sunnyvale, California) was used before the cement infusion (kyphoplasty). At levels with previously placed bilateral screws, the trajectory of the introducer needle was extrapedicular, often at the superior or inferior extremes of the vertebral body thereby obviating the ability to place a bone tamp. In these cases vertebroplasty was performed, usually through a curved AVAflex needle (Carefusion, Waukegan, Illinois). The use of the curved needle was particularly helpful in directing the cement into different regions of the vertebral body when surgical screws limited the position of the introducer needle.
The cement was hand injected coaxially through the introducer needle under fluoroscopic visualization, with repeat CT imaging performed if there was a question of extravasation into the spinal canal or neural foramina. The cement used was either the standard high-viscosity radiopaque polymethylmethacrylate or Cortoss bone augmentation material (Stryker Neurovascular, Fremont, California), which is a nonresorbable composite. The latter has an advantage that a small amount can be mixed on demand, which is particularly helpful if switching back and forth between CT and fluoroscopy is required.
Data Analysis
Statistical analysis was performed using SPSS 20.0 (IBM, Armonk, New York). A Wilcoxon signed ranks test was used to compare the prekyphoplasty and postkyphoplasty VAS scores.
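The paired comparison described above was run in SPSS; for readers without SPSS, an equivalent Wilcoxon signed-rank comparison of pre- and post-augmentation VAS scores can be sketched in Python with SciPy as below. The score vectors are placeholders, not the study's patient data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired pre- and post-augmentation Visual Analog Scale scores (placeholders).
vas_pre = np.array([10, 8, 9, 7, 10, 8, 9, 4, 10, 8, 9])
vas_post = np.array([1, 2, 0, 1, 2, 0, 5, 0, 2, 1, 2])

stat, p_value = wilcoxon(vas_pre, vas_post)       # paired, non-parametric test
mean_decrease = np.mean(vas_pre - vas_post)
print(f"Wilcoxon W = {stat:.1f}, p = {p_value:.4f}, "
      f"mean VAS decrease = {mean_decrease:.1f}")
```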
RESULTS
Individual patient and treatment information are summarized in the On-line Table. The median age at time of postsurgical cement augmentation was 60 years (range: 38 -71 years). Median follow-up after cement augmentation was 24 months (range: 4 -59 months), and the median time between instrumentation and salvage cement augmentation was 5 months (range: 1-48 months). Eight tumors were located in the thoracic spine and the remaining 3 were located in the lumbar spine. All patients underwent cement augmentation after developing new painful compression fractures. The pain rather than the radiographic finding was the indication for intervention. Eight of the cement augmentation procedures were done at the levels of the top or bottom screws or immediately adjacent to these levels. The remaining 3 patients had cement augmentation in the middle of the construct.
The mean prekyphoplasty VAS score was 8.4 (range 4-10) and postkyphoplasty score was 1.5 (range 0-5). All patients reported a decrease in their pain scores. The mean decrease in the VAS score was 6 points (P < .003).
One patient required surgery after kyphoplasty. The patient initially underwent L1 decompression and T11-L3 stabilization for a renal cell metastasis (Fig 2A). One year after the initial operation, the patient developed severe back pain and was found to have a new L1 compression fracture; however, the patient's hardware appeared intact (Fig 2B). The patient underwent a L1 kyphoplasty with significant pain relief, from 10/10 to 1/10 ( Fig 2C, -D). Five months after the kyphoplasty, the patient developed new back pain and x-rays revealed a unilateral rod fracture. The rod was replaced and the back pain resolved.
None of the patients experienced vertebral cement augmentation-related complications such as neural element compression or cement embolization.
Case Example
A 62-year-old man with metastatic melanoma underwent single-fraction SRS (24 Gy) to L4. Three months after radiation, he developed an L4 burst fracture and mechanical radiculopathy requiring an L3-L5 posterolateral instrumentation and fusion with left L4-L5 facetectomy. One year later, the patient developed an L2 metastasis. Initially he underwent an L2 kyphoplasty; however, because of the progression of radicular pain and posterior element instability, he required extension of instrumentation to T12 and right-sided transpedicular decompression of the epidural tumor. Four years after the initial surgery, the patient developed recurrence of back pain and was noted to have a compression fracture at L4 and endplate infractions at L3 and L5 without evidence of tumor progression (Fig 3A). This patient underwent cement augmentation at L3-L5, resulting in a significant decrease of pain symptoms (VAS 10/10 to 2/10). At L4, kyphoplasty was performed; the needle was advanced into the vertebral body via the left pedicle, and an inflatable bone tamp was used before cement infusion (Fig 3B, -C). At levels L3 and L5, where bilateral screws were present, the trajectory of the introducer needles was extrapedicular, therefore obviating the ability to place a bone tamp; instead, vertebroplasty was performed through a curved AVAflex needle. At L3, a lateral parapedicular approach was undertaken. At L5, the guide needle was inserted via a superior, extrapedicular approach and the augmentation needle was then advanced coaxially, allowing access to both the contralateral and unilateral side (Fig 3B, -C).
DISCUSSION
The treatment of spinal metastases is performed with the palliative goals of preservation or restoration of neurologic function and spinal stability, pain control, and local tumor control. Surgery is indicated for patients with metastatic spinal tumors in the setting of spinal cord compression and spinal instability. 9 Instrumentation restores spinal stability after circumferential decompression and osseous infiltration by tumor. Generally patients with metastatic tumors require systemic therapy, which requires coordination with surgery and radiation. Spinal radiation is generally administered 3-4 weeks after surgery to decrease the risk of wound dehiscence or infection. Systemic therapy is administered after radiation and is of paramount importance in preventing systemic progression of cancer. Postoperative wound complications or hardware failure may require significant delay in chemotherapy administration and may have disastrous implications, with systemic progression leading to the demise of the patient or requiring additional surgery and radiation.
While the beneficial role of surgery has been thoroughly documented in the treatment of patients with spinal metastases, these operations may be associated with a wide range of potential complications. The reported perioperative complication rate range is 19%-50%. [10][11][12] The hardware failure rate has been reported to be 2.2%-16%. Hardware-related complications include dislodgement of titanium cage, screw, hook, rod, or plate back-out, or breakage and adjacent level fractures. Generally, symptomatic hardware malfunction compromises spinal stability and requires patients to return to the operating room to replace the fractured components and often to extend the fixation to adjacent levels. In oncologic patients, multilevel tumor infiltration along with chest wall involvement further complicates the stabilization, as does underlying poor bone quality often secondary to osteoporosis and prior radiation. Avoidance of multiple hardware revisions is crucial in continuation of chemotherapy and radiation and in avoidance of high-risk reoperations.
Percutaneous vertebral cement augmentation has been established as an effective treatment for painful fractures in patients with metastatic spinal tumors. 4 The Spine Oncology Study Group 13 made a strong recommendation for the use of vertebral cement augmentation in patients with symptomatic osteolytic metastases and compression fractures. The group conducted a systematic review of the literature that confirmed that kyphoplasty or vertebroplasty consistently relieves mechanical axial pain and improves functional status. Furthermore, the investigators of the Cancer Fracture Evaluation study 5 randomized 134 patients with painful compression fractures to undergo kyphoplasty or nonsurgical management and found a significant improvement in the pain and function in the treatment group at 1-month follow-up. Complications were very rare, with 1 patient experiencing anesthesia-related non-Q-wave myocardial infarction and 1 patient developing an adjacent-level vertebral body fracture 1 day after the kyphoplasty. Thus, vertebral cement augmentation provides a safe and effective minimally invasive treatment option for cancer-related vertebral fractures that can be performed on an outpatient basis.
While the procedure of postinstrumentation vertebral body cement augmentation is similar to standard percutaneous vertebral cement augmentation procedures performed in nonsurgically stabilized patients, there are some unique technical challenges. The fixation hardware often consists of bilateral intrapedicular screws and posterior stabilization rods, which alters access to the vertebral body. The presence of pedicle screws essentially eliminates the transpedicular approach to the vertebral body. The screws within the vertebral body are typically lateral, making access to the central aspect of the vertebral body challenging. Fluoroscopic imaging is more difficult because the introducer needle tips may be obscured or silhouetted by the surgical hardware. In the lateral plane under fluoroscopy, the presence of intrapedicular screws will also obscure a portion of the spinal canal and ventral epidural space. This is of critical importance during the infusion of the cement, as the posterior extent of the cement may not be readily apparent. If the vertebral body to be augmented is at the level of tumor resection, often the standard fluoroscopic imaging landmarks are absent. The pedicles and posterior elements typically have been resected and/or previously destroyed by tumor. This not only makes fluoroscopic access challenging, but the absence of the posterior elements also removes the bony protection of the thecal sac. Similarly, if the costovertebral junction has been removed, the thorax is more vulnerable to penetration by the introducer needles. Multiple oblique fluoroscopic trajectories may be required to optimize the visualization of the needle trajectory and cement. If the needle trajectory or the posterior extent of cement is not clear with fluoroscopy, an intraprocedural conebeam CT can be obtained.
In addition to the hardware and postsurgical osseous changes, postsurgical soft tissue changes need to be considered. In particular, the presence of a paraspinal fluid collection, seroma, or pseudomeningocele will need to be avoided and require modification of the trajectory of the introducer needle. This reinforces the necessity of having a preprocedural cross-sectional imaging study to help determine the best trajectory of the augmentation needle into the collapsed vertebrae.
The current patient series documents the feasibility of percutaneous vertebral cement augmentation in the treatment of symptomatic vertebral compression fractures within or immediately adjacent to pedicle fixation constructs. The data show that cement reinforcement provides effective pain relief in instances of junctional fractures as well as fractures within the construct. Thus, in place of open hardware revision and extension, patients undergo an outpatient procedure with minimal risk of morbidity. Our report includes the results of a small series of consecutive patients who were treated with this technique, and a larger prospective cohort will be necessary to determine the optimal candidates for this treatment and to provide more generalizable outcome data. Furthermore, in some patients, the position of the instrumentation may prohibit safe cement augmentation. Cement salvage of hardware-related fractures provides a safe and well-tolerated alternative to open surgery that does not require interruption of systemic therapy.
CONCLUSIONS
Surgery for metastatic spinal cancer is a palliative measure. Poor quality of bone and tumor progression can lead to new symptomatic compression fractures. The use of percutaneous vertebral cement augmentation in these situations can be extremely beneficial for the patient by effectively relieving pain with an outpatient procedure that does not require interruption of systemic therapy or radiation. | 2017-09-22T16:54:02.270Z | 2014-11-01T00:00:00.000 | {
"year": 2014,
"sha1": "bd5fdbd81e9cf5390a1820c0b0d35066d5021dff",
"oa_license": "CCBY",
"oa_url": "http://www.ajnr.org/content/ajnr/35/11/2197.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "bd5fdbd81e9cf5390a1820c0b0d35066d5021dff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252170230 | pes2o/s2orc | v3-fos-license | Forensic Applications of Markers Present on the X Chromosome
Microsatellite genetic markers are the gold standard for human genetic identification. Forensic analyses around the world are carried out through protocols using the analysis of STR markers in autosomal chromosomes and in the Y chromosome to solve crimes. However, these analyses do not allow for the resolution of all cases, such as rape situations with suspicion of incest, paternity without a maternal sample for comparison, and biological traces with DNA mixture where the profile sought is female, among other situations. In these complex cases, the study of X-chromosome STR markers significantly increases the probability of identification by complementing the data obtained for autosomal and Y-chromosome markers, due to the unique structure of the X chromosome and its exclusive method of inheritance. However, there are currently no validated Brazilian protocols for this purpose, nor are there any population data necessary for statistical analyses that must be included in the issuance of expert reports. Thus, the aim of this article is to provide a literary review of the applications of X-chromosomal markers in population genetics.
Introduction
The X chromosome is considered one of the most stable nuclear chromosomes, with a length of approximately 155 million base pairs (155 Mb), accounting for nearly 5% of the human genome [1][2][3].
The X chromosome has many characteristics that are not shared by its counterpart, the Y chromosome [2]. In males, the heterogametic sex, there is a single copy of the X chromosome and a single Y chromosome [4], while in females, there are two copies of the X chromosome [1,4,5].
These mammalian sex chromosomes are believed to have evolved from an ordinary pair of autosomes, referred to as the ancestral protosex chromosomes. The proto-X and proto-Y underwent a series of deletion/addition events during evolution and became the modern X and Y. Additionally, it is believed that a mutation in a sex-determining locus (SRY) is responsible for triggering an evolutionary process of stepwise loss of recombination between the ancestral autosome pair, creating an X-specific region and a Y-specific region in the proto-Y (MSY) [1,5].
The X chromosome has an inheritance pattern that differs between the sexes. In males, the X chromosome is (almost entirely) transmitted to daughters as an unchanged block, while in females, the two X chromosomes can recombine during meiosis in the same way as the autosomes, and the reshuffled chromosome is then transmitted to female and male offspring [2,6,7]. However, some recombination between the X and Y chromosomes in males is necessarily retained, ensuring proper segregation in meiosis. This recombination only occurs within two homologous sub-telomeric zones of the chromosomes, known as the pseudo-autosomal regions (PARs) [5,6]. In humans, there are two of these regions, known as PAR1 and PAR2, and it is believed that the two PARs have very different origins and properties [5].
To equalize gene expression between males and females due to type XX/XY chromosomal heteromorphism, one of the two X chromosomes in females is randomly inactivated during early embryonic development. This inactive X chromosome is called the Barr body. However, in the female germline, the inactive X is reactivated before meiosis, which ensures that all oocytes will inherit an active X chromosome [1]. The initiating event to conversion from active X chromosome to inactive X chromosome is the expression of the long noncoding (lnc.) RNA, the X-inactive specific transcript (XIST). The XIST RNA molecules coat the inactive X chromosome and recruit chromatin modifiers that lead to the silencing of most of the genes along its length [8]. However, the inactive X chromosome is not completely silent because many other genes continue to be expressed from this chromosome, in addition to the active X chromosome [4].
These X-chromosome specific properties make it a powerful complementary tool in forensics and population genetics, helping to solve complex cases, such as missing persons, incest, immigration, deficiency, paternity, and other issues, as well as for use in other research areas, such as human evolutionary studies and medical genetics [2,7,9].
Thus, the main objective of the present work is to review the advances and applications of markers present on the X chromosome related to population and forensic genetics.
X-Chromosome Markers
Over the years, the particular structure and singular properties of the X chromosome have highlighted the use of X chromosome markers in forensics and population genetics analysis. The knowledge obtained from these markers can be used in isolation, or in a complementary analysis of the information from autosomal and Y chromosome markers, or even from the mtDNA [2,10]. Therefore, these markers are a useful tool for obtaining an accurate interpretation, for example, when there are DNA mixtures. Here, we describe the main X chromosome markers and the specificity of each one.
X-STR
Short Tandem Repeats (STRs) are DNA sequences with repeat units 2 bp to 7 bp in length, which are widespread throughout the human genome [11,12]. STRs show abundant variability among individuals in a population and have become useful for different purposes, including genetic mapping, disease diagnosis, linkage analysis and, in particular, human identification [12,13]. Gomes et al. (2020) [2] suggest a number of characteristics that make STRs the preferential markers in human identification analysis. First, they are highly polymorphic, resulting in a high capacity for discrimination between individuals; second, they are rapidly and easily analyzed using PCR-based technology and automated fluorescent detection by capillary electrophoresis; third, STRs exhibit a multiplex generation capability, with short amplicon lengths suitable for degraded DNA.
Furthermore, there is a Short Tandem Repeat DNA Internet Database (http://www.cstl.nist.gov/biotech/strbase/, accessed on 1 July 2022) [14] compiled and maintained by the National Institute of Standards and Technology (NIST) since 1997. This database is an important resource that combines information from the literature with commonly used technologies and materials for STR DNA markers [13].
STR markers are not only autosomal, but also occur on the X and Y chromosome [15]. The utilization of X chromosomal STRs (X-STRs) can be valuable, as it may be used as an additional data source in complex cases where the analysis of autosomal markers is not informative. Therefore, X-STR can efficiently generate more information than autosomal STR, particularly in complicated kinship analysis [15][16][17][18].
X-SNPs
Single nucleotide polymorphisms (SNPs) are a single-base sequence variation highly abundant in the human genome. They may be present not only in genes (exons and introns), but also in the noncoding regions of the genome. SNPs are the most common type of genetic variation and may be used to aid in distinguishing individuals from one another. These polymorphisms are being used for linkage studies to track genetic diseases, for human evolutionary history studies, and they have also been considered as potential genetic markers by the forensic community [20,21].
According to Tomas et al. (2010) [22], the main advantages of including SNPs in forensic analysis are their low mutation rates and the fact that they can be typed from small amounts of DNA, making them particularly useful for degraded DNA and difficult samples. However, there are significant disadvantages for SNPs when compared to STR markers. For example, to obtain equivalent match probabilities, it is necessary to analyze 40-60 SNP loci compared to 13-15 STRs. Moreover, when there are sample mixtures, interpretation by SNP typing can be very difficult due to the limited number of alleles compared to multi-allelic STR markers [23]. Therefore, SNP markers are most likely to have a future in forensic applications only for estimating ethnicity and predicting phenotypic characteristics [21,23].
Some efforts have been made to analyze X-chromosomal SNP genotyping (X-SNP) in forensic cases to complement the analysis of autosomal, Y-chromosomal, and mitochondrial markers, especially in deficiency cases [16,22,24]. Although the use of X-SNPs in special relationship testing is promising, the interpretation is very complex and difficult, especially in mixed samples. Moreover, to elevate the combined power of discrimination, an increased number of X-SNPs is required, limiting the application of these markers in forensic cases and explaining the lack of interest in their use [2,22,24].
The FORensic Capture Enrichment SNP (FORCE SNP) panel, developed by Tillmar et al. (2021) [25], is a complete SNP panel applied in forensic cases. In this panel, clinically relevant markers are excluded, avoiding DNA database privacy concerns. It contains all relevant SNP markers for forensic applications, such as identity, ancestry, phenotype, and X- and Y-chromosomal SNPs. In addition, it features a new set of kinship SNPs for inferring distant relationships (up to 4th-degree relationships, with high statistical significance).
The FORCE panel includes features such as a relatively small size and a minimal number of primers/probes per reaction to reduce enrichment costs. The versatility of this panel is confirmed by the possibility of using enrichment methods such as hybridization capture, PEC, and multiplex PCR, allowing for the analysis of degraded samples. The inclusion of X-SNPs in the panel is due to their informative value of kinship for cases of specific X-chromosome inheritance, further enhancing the panel's analytical performance [25].
X-INDELs
Insertion Deletion Polymorphisms (INDELs) are biallelic markers that combine the interesting aspects of both SNPs and STRs. INDELs have low mutation rates, they are widely spread throughout the genome, including along the X and Y chromosome, they have short amplicon size, making them easy and inexpensive to analyze, and they can be representative of differences between geographically distinct populations [26][27][28][29].
INDELs have received less attention than SNPs in forensic studies, but they may also be an important marker to complement STR analysis, increasing the identification success rating in cases of degraded DNA [30]. Two main studies [31,32] call attention to the applicability of INDELs that may be underutilized for genetic studies in forensic science [28].
Over the last few years, there has been a growing trend toward examining X-chromosome INDELs markers, mainly in the field of evolutionary anthropology, to assess the admixture of population and kinship investigations with deficient relationships [29]. Despite exhibiting greater efficiency than the markers on the autosomal chromosome, the X-INDELs are still limited by their lower discriminating power compared to X-STR [28].
Population Genetics
Due to its unique inheritance pattern, the use of X-chromosome markers in population studies has been increasingly explored in recent decades. Features such as a lower recombination rate and a lower mutation rate result in faster genetic drift; consequently, linkage disequilibrium (LD) and the population structure of the X chromosome are stronger. About two-thirds of X-chromosome transmissions occur through women. Thus, understanding the genetic diversity of a population can help in demographic studies involving migration and mating patterns [9].
X-STR markers are preferentially used in population analyses because they are highly polymorphic, technically easy to analyze, and they exhibit the ability to generate multiplex STRs with smaller amplicons [2]. To use them in these cases, specific knowledge about the frequency of alleles and haplotypes, as well as genetic linkage status and LD, is required. Genetic linkage assesses the co-segregation of loci located nearby in a pedigree, while linkage disequilibrium assesses the co-segregation of alleles at the population level [33]. Since population data are fundamental for forensic investigations, it is of great importance that there is more information compiled and organized on the X-STR markers of populations.
For forensic and human identification studies using molecular markers, linkage disequilibrium (LD) data between the tested loci is used to ensure the reliability of the results. To achieve this, LD studies regarding the recombination of data between loci are carried out, such as those of Phillips et al. (2012) [34], who evaluated the genetic distance of centimorgans (cM) to infer recombination rates at the loci of different STRs of the X chromosome, required for kinship tests due to the density and uneven distribution of the markers.
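As a concrete illustration of how map distance relates to recombination, the sketch below converts a genetic distance in centimorgans into an expected recombination fraction using the standard Haldane and Kosambi map functions; the distances used are arbitrary and are not values from Phillips et al. (2012).

```python
# Hedged sketch: recombination fraction from map distance between two X-STR loci.
import math

def haldane_rf(distance_cm: float) -> float:
    """Recombination fraction from map distance (Haldane: assumes no interference)."""
    d_morgans = distance_cm / 100.0
    return 0.5 * (1.0 - math.exp(-2.0 * d_morgans))

def kosambi_rf(distance_cm: float) -> float:
    """Recombination fraction from map distance (Kosambi: allows partial interference)."""
    d_morgans = distance_cm / 100.0
    return 0.5 * math.tanh(2.0 * d_morgans)

for cm in (1, 10, 50):   # illustrative distances in centimorgans
    print(cm, round(haldane_rf(cm), 4), round(kosambi_rf(cm), 4))
```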
To better understand the genetic landscape of a given population, several studies were carried out using X-STR markers, as was the case with Ferragut et al. (2021) [9]. They evaluated the genetic diversity of a Western Mediterranean population using 12 X-chromosome markers included in the Investigator Argus X-12 kit (Qiagen GmbH, Hilden, Germany). Based on the X-STR analysis, it was possible to suggest a gender-biased migration rate, confirming the predominance of patrilocality in this area. In 2019, a similar study was conducted by Garcia et al. [35], with the aim of building a database of X chromosome markers in the Argentine population. The Investigator Argus X-12 kit was also used in this study, and 914 complete haplotypes were obtained for the markers included in the kit. Knowing the uniqueness provided by the X-STR markers, several research groups conducted studies to assess the population distribution of countries such as Brazil, Switzerland, Italy, India, Croatia, and countries on the African continent, among others [9,15,[36][37][38][39][40][41][42][43][44][45][46][47][48][49]. Table 1 lists some population studies that explore X-STR markers, published in the last ten years.
In the cases mentioned above, the amplification kit chosen was the Investigator Argus X-12 kit (Qiagen GmbH, Hilden, Germany), the most widely used amplification system. However, the literature reports that the X-STR loci are found in four linkage groups, so the combined system may effectively provide fewer than 12 independent X-STR loci. In addition, many forensic X-STR typing assays rely on self-designed, non-commercial multiplexes. Because of this, there was a need for the development of a multiplex system including more unlinked X-STRs [6].
Allele frequency differences at each locus vary across different populations. The DXS10146 and DXS10135 markers presented 38 alleles, making them the most polymorphic loci. The most informative locus among the population studies mentioned is DXS10135, reported in 11 different studies [9,15,35,39,40,42,43,45,46,49]. This is due to the high PIC (polymorphism information content) value of this locus in the different populations [51]. X-STR allele sequence variation data were found primarily at the DXS10134 locus, showing two additional [GAAA] repeats in GRCh38, and at the DXS10146 locus, with four sets of nucleotide differences, including an extra T nucleotide in the GRCh37 assembly at X:149584331 and an extra [AAAG] repeat unit in GRCh38 [51].
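For readers unfamiliar with PIC, the following sketch computes expected heterozygosity and PIC (Botstein's formula) from a vector of allele frequencies; the frequencies are invented and do not correspond to any published locus.

```python
# Hedged sketch: expected heterozygosity and PIC for a hypothetical X-STR locus.
from itertools import combinations

def heterozygosity(freqs):
    """Expected heterozygosity: 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    """Polymorphism information content (Botstein et al.):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    hom = sum(p * p for p in freqs)
    double = sum(2 * (p_i ** 2) * (p_j ** 2) for p_i, p_j in combinations(freqs, 2))
    return 1.0 - hom - double

allele_freqs = [0.30, 0.25, 0.20, 0.15, 0.10]  # hypothetical locus with 5 alleles
print(f"He  = {heterozygosity(allele_freqs):.3f}")
print(f"PIC = {pic(allele_freqs):.3f}")
```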
Although X-STR markers are extremely efficient for population analyses, the absence of a database for the collection, storage, and use of frequencies of X STR alleles or haplotypes is a factor that renders their use difficult [2,6]. To date, the main ways to obtain this data are through consultation of publications in the PubMed database, conference proceedings of the International Society for Forensic Genetics (ISFG), and the Forensic ChrX Research website. In some countries, an organized database is available for consultation regarding haplotypes, as is the case of the Brazilian Genetic Bank of the X Chromosome (BGBX) [6,44].
Moreover, several countries do not have data compiled and stored for X-STR. The greatest scarcity is seen in areas such as Sub-Saharan Africa and the Americas (except for the USA, Brazil, and Argentina). On the other hand, China has made significant progress in this regard in recent years. The country holds a large amount of information and studies about the X-STR marker for its population [2].
The use of X-STR markers in population studies is promising, but to advance the issue, further studies on the frequency distributions of haplotypes, mutation rates, and LD are essential. Considering these observations, it will be possible to build an effective human X-STR database that contains comprehensive data from different populations in different parts of the world.
Parenthood Testing
The inference of genetic kinship between two individuals has been a subject of great theoretical and practical interest in the forensic field [52]. With current technological advances, a specific demand for kinship testing is expected to arise where only remote relatives are available for testing [16], and there are a multitude of applications for paternity testing, such as the clarification of bilateral relationships [53], determination of kinship in immigration proceedings, and identification of parental lines [54].
In this way, the paternity test (PT) becomes a very important instrument for the advancement of forensic genetics across a wide spectrum of activities. It is worth noting the main differences between the PT and the maternity test (MT): in the PT, including the mother's genotype increases the power to identify the biological father [53], whereas in the absence of maternal data the test may be inconclusive [55].
Insertion of markers based on STR sequences and mitochondrial DNA sequence variations linked to the analysis of sex chromosomes (X and Y) provide greater PT efficiency, with respect to autosomal markers [16,55]. Owing to the inheritance pattern of Chr-X, in which the daughter receives the unaltered paternal X chromosome, Chr-X markers have a high power of exclusion [17]. The X-STRs exclusion power is due to the difference in the number of alleles when compared to autosomal alleles in male individuals [44].
Because of these unique characteristics, X-STRs can satisfactorily complement cases in which the analyses of autosomal STRs are not sufficiently informative, as in father-daughter duo cases. Therefore, in these cases, the analysis of X chromosomal markers can be more informative than autosomal markers.
X-STRs can also be highly informative in paternity cases involving a daughter in which the two alleged fathers are father and son, as the analysis of autosomal STRs would be inconclusive due to the sharing of alleles. The X-STRs that the two men inherited from their respective mothers, and do not share with each other, are very useful in such cases.
In addition, X-STRs can be used in cases of sisters or half-sisters whose common relative is the father. It is possible to observe a greater resolving power, since both, being daughters of the same father, necessarily share the same alleles.
Autosomal DNA markers can pose difficulties when they are physically close to each other on the same chromosome. For these reasons, it is worth highlighting the importance of software, such as FamLinkX, that implements a new algorithm for probability calculations that account for linkage, linkage disequilibrium, and mutations [56,57].
For this reason, such software has become highly sought after among forensic users as more and more ChrX markers become available [57]. This is justified by its usefulness in calculating case-specific likelihood ratios for two (or more) hypotheses with observed DNA data for a pair of linked DNA markers. It also performs simulations for two or more pedigrees (hypotheses) and analyzes cases that give rise to complex pedigrees. In summary, such compilations of functionalities are now widely available, and are free of charge [34].
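As a toy illustration of the likelihood-ratio logic that such software implements, the sketch below computes a single-locus paternity index for an X-STR father-daughter duo under the simplifying assumptions of no mutation, no linkage/LD correction, and Hardy-Weinberg allele frequencies; real casework calculations (e.g., in FamLinkX) are considerably more involved, and the allele frequencies shown are hypothetical.

```python
# Hedged sketch: single-locus paternity index (PI) for an X-STR father-daughter duo.
# Assumptions: no mutation, independent loci, Hardy-Weinberg frequencies.

def xstr_duo_pi(daughter_genotype, father_allele, allele_freq):
    """PI for 'alleged father is the father' vs 'an unrelated man is the father'.

    daughter_genotype: tuple with the daughter's two alleles at the locus
    father_allele:     the alleged father's single (hemizygous) X allele
    allele_freq:       dict mapping allele -> population frequency
    """
    a, b = daughter_genotype
    if father_allele not in (a, b):
        return 0.0                      # exclusion (barring mutation)
    p = allele_freq[father_allele]
    if a == b:
        return 1.0 / p                  # daughter homozygous for the paternal allele
    return 1.0 / (2.0 * p)              # daughter heterozygous, shares one allele

freqs = {"19": 0.18, "21": 0.07}        # made-up allele frequencies
print(xstr_duo_pi(("19", "21"), "21", freqs))   # ~7.14 with these toy numbers
```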
Moreover, following established practices for the adoption and further characterization of X-STR typing will promote the development of additional tools, such as software providing likelihood calculations for family relationships/pedigrees using X-chromosomal genetic marker data, facilitating implementation in additional laboratories and providing a rich area for the future of forensic research.
Finally, autosomal STR typing is likely to remain the gold standard for the forensic laboratories well into the future, and X-STR markers have proven to be useful complementary tools in the forensic armory.
Incest
Incest is usually defined as mating between first-degree relatives (such as father-daughter, mother-son, or brother-sister), who have 30-50% of their genes in common [58,59]. This definition, however, may be expanded to include sexual activity between uncle-niece or grandfather-grandchild pairs [60,61].
Children of consanguineous parents can inherit two alleles identical by descent (ibd) at any locus, show an increase in homozygous genotypes, and are at greater risk for autosomal recessive diseases [62]. Decreased population heterozygosity over the generations is expected in cultures which encourage consanguineous marriages between specific blood relatives (e.g., uncle-niece) [63].
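The expected rise in homozygosity can be written directly in terms of the inbreeding coefficient F: the probability that a child is homozygous at a locus is F + (1 − F)Σp_i². A minimal sketch, assuming hypothetical allele frequencies and F = 0.25 for the offspring of first-degree relatives, is shown below.

```python
# Hedged sketch: expected homozygosity at a locus for an inbred child.
# P(homozygous) = F + (1 - F) * sum(p_i^2); frequencies below are hypothetical.

def expected_homozygosity(freqs, F):
    random_mating_hom = sum(p * p for p in freqs)
    return F + (1.0 - F) * random_mating_hom

freqs = [0.30, 0.25, 0.20, 0.15, 0.10]
print(expected_homozygosity(freqs, F=0.00))   # outbred expectation
print(expected_homozygosity(freqs, F=0.25))   # offspring of first-degree relatives
```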
In Brazil, as in many other countries around the world, incest in itself is not a crime. However, in cases where violence or serious threats are used, in which the act is performed with children under 14 years of age, with someone who, due to illness or mental disability, does not have the necessary discernment to perform the act, or who, for any other reason, cannot offer resistance, sexual activity can be considered the crime of rape, or rape of a vulnerable person. In these cases, legal interest arises, given the criminal nature of the act, and there is a need to make use of forensic DNA tools.
Vaginal and oral swabs are commonly collected shortly after the events when incestuous criminal activity is suspected, which may allow for the recovery of spermatic material from the suspect and the comparison of genetic profiles (victim-aggressor).
Child sexual abuse is a global public health concern, considered by the World Health Organization (WHO) a silent health emergency [64]. Victims of incest usually do not talk about the situation due to embarrassment, guilt, and fear; thus, incest cases are rarely reported [65]. Moreover, the effort of families to cover up incest cases is a well-known reality [63]. In many rape cases, sperm is not available from vaginal swabs, and the only resulting genetic evidence may be the products of conception [59]. Therefore, in cases where rape leads to pregnancy, it is possible to compare the genetic profiles of the alleged father, mother, and fetus.
In many circumstances, DNA profiling of autosomal STR loci can be reliably used for solving criminal and paternity cases focused on males [66]. Nonetheless, in those cases involving close blood relatives as putative fathers, the exclusion power of autosomal STRs is considerably reduced, and ChrX (Chromosome X) STRs may be most appropriate [16]. For example, if two alleged fathers are father and son, ChrX markers would be more efficient than autosomal STRs, since father and son do not share any X-chromosomal alleles ibd [63]. In some criminal paternity investigations, the high rate of homozygosity displayed by the child may raise the suspicion of an incestuous situation [2].
The analysis of the ChrX STR profile, in the case of a daughter, is quite informative, even in the absence of the genetic profile of the father. When the father of the daughter is also the father of the mother (father-daughter incest), the child will either be homozygous for all ChrX STR markers or will present the same genotype as the mother [2].
Considering that a boy inherits his X chromosome from his mother and only a Y chromosome from his biological father, the analysis of X-STRs is not useful in criminal paternity tests in which the child is a boy. However, it can serve as a supplement in criminal maternity test cases.
The estimated frequency of incest ranges from 0.5 to 2%, [67][68][69] but estimates vary by definition and the method of determining cases [70]. Most victims of incest are minors [65], which ultimately makes the social and psychiatric consequences more serious. Thus, it is imperative that public authorities raise awareness in these cases and adopt multidisciplinary and specialized protocols for monitoring victims, especially younger ones, aiming at treatment and full rehabilitation [65].
Complex Cases Using X-STRs
In the last decades, autosomal STR markers have become the best option for most cases of genetic identification, paternity testing, and other kinship analyses. Despite their reliability and high power of discrimination, in some particular cases autosomal markers provide little information, even with a high number of polymorphisms typed [71]. In these cases, the use of Y-STRs and X-STRs as additional markers [17,72] could provide more strength to the genetic evidence due to their inheritance characteristics and different recombination patterns [2]. Additional genetic information can increase the statistical support for true parental relationships and reduce the chances of false attributions [73].
ChrX markers have been rarely employed in forensic practices, although gonosomal markers are especially efficient for solving deficiency cases [74]. X-STRs are particularly useful in complex kinship cases, where just a few and/or distantly related individuals are available for genetic analysis [71], especially when the mother is absent. They are also used in some missing persons/mass disaster situations to identify victims, when direct reference samples are not available and biological relatives must be used [75]. Complex analysis involving singular materials, such as DNA from exhumed bones or historical samples (small number of low size STRs) [74] could also be aided by X-STRs data.
The scenario in which the presumed father is not available for genetic analysis is the most common type of complex kinship testing using X-STRs, typically in financial inheritance disputes to prove affiliation to the deceased alleged father [71]. The father will convey his ChrX copy to daughters only, and all sisters will share at least one allele per locus for the ChrX. In this context, the investigation of sisters or stepsisters can exclude paternity, even if the DNA of the parents is not available, by genotyping the putative grandmother [16].
X-STRs may also be a better choice in cases where the genetic material from different individuals is mixed. The male hemizygous status for X-STRs makes these markers more advantageous compared to autosomal markers [74]. In cases of abortion involving a female fetus, the DNA of the embryo and the mother are mixed, in which case, it is possible to perform a paternity test on the fetus, as alleles not shared with the mother can be analyzed [74].
Thus, X-STRs markers can be useful for any parent-child relationship that involves at least one female [76]. However, for closely linked markers, it is advisable to consider linkage and LD for the most precise likelihood calculation [17].
In that regard, Bini et al. (2019) [76] evaluated possible mutational and recombination events for X-STR markers using Investigator Argus X12 kit in Italian pedigrees. In order to explore the segregation stability, three-generation families (grandpa-mother-son) and two-generation families (mother-sons, father-daughters), for a total of 269 pedigrees, were analyzed, and calculations to estimate the recombination fractions between pairs of markers and mutation rates were performed [76].
It is important to underline the significance of larger databases to enhance the estimation of haplotype frequencies, more software packages for kinship evaluations of ChrX transmission [2], and new, tested, and optimized X-STRs markers for kinship analysis. To support further development of X-STRs, the studies conducted presented the discovery of novel X-STRs markers and multiplex systems which are highly promising for forensic use [49,66,77,78] and the extension of local haplotype data through studies of the viability and discriminatory power of X-STRs [9,40,[79][80][81]. Despite more than 20 years of usage and X-STR research in forensic genetics, there is still a continuous demand for high-quality genetic data to support new studies and expand the application of these gonosomal markers.
Future Directions and Conclusions
The need to resolve sexual assault crimes and to analyze kinship scenarios in areas such as immigration, paternity, missing persons, and mass disasters will remain an important part of the impact forensic science has on society well into the future, especially as the questions surrounding such situations become more complicated and new technologies, such as next generation sequencing, make laboratory implementation of X-STR marker systems more accessible [6,58].
Regardless of the increasing use of SNPs and INDELs markers in forensic cases, mainly due to the advancement of Next-Generation Sequencing (NGS) technology, standardization must be advanced, due to the large number of markers necessary to acquire a high degree of discrimination between individuals in a population [2,82]. The special features of STRs have made them the most popular multi-allelic markers adopted as reference loci for the Combined DNA Index System (CODIS), facilitating the worldwide implementation of the National DNA Databases (NDNADs) [20]. Therefore, at least for the time being, STRs will remain the essential and preferable markers used in forensic studies.
The use of X-STR markers is promising, but more studies regarding haplotype frequency distributions, mutation rates, and LD are necessary to ensure that the new markers are incorporated correctly into the routine of forensic companies and laboratories that work with DNA identification.
Furthermore, in order to implement X markers, it is necessary to maintain a single database covering different populations.
In order for X-chromosome markers to be used, distribution and frequency studies in different populations must be carried out.
These methodologies may contribute to solving pending complex cases that fit the criteria for the use of X-chromosome markers, increasing the efficiency, quality, and reliability of services offered for identification, paternity testing, and forensics.
Institutional Review Board Statement: Ethical review and approval were waived for this study, as this study is a review article.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable. | 2022-09-10T15:46:18.675Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "58d7253dd7965bdc3f51aba4919cba44203fcf67",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4425/13/9/1597/pdf?version=1662542903",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bbc2a321c8006b5bfbface3aac935023820de70f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247189291 | pes2o/s2orc | v3-fos-license | Study on the value of small dense low‐density lipoprotein in predicting cardiovascular and cerebrovascular events in the high‐risk stroke population
Abstract Background Lipid management in people at high risk of stroke is an important measurement to prevent the occurrence of stroke. The study aims to investigate the association between sdLDL and cardiovascular and cerebrovascular events in high‐risk stroke populations. Methods This was a prospective study. Screened from 15,933 individuals aged >40 years in April 2013 and followed up at 3rd, 6th, 12th, and 24th months, 823 participants met the screening criteria and were investigated for clinical data and biochemical parameters. Results A total of 286 subjects had varying degrees of carotid stenosis, and 18 subjects experienced cardiovascular and cerebrovascular events during the two‐year follow‐up period. There was no positive correlation between sdLDL and carotid stenosis. Carotid stenosis and extent of carotid stenosis involvement did not predict cardiovascular and cerebrovascular events in patients with high‐risk stroke, while sdLDL did. The sdLDL level in the events group was significantly higher than those in the no event group (p = 0.002). In the events group, the risk of events in the fourth quartile of sdLDL was 10.136 times higher than in the first quartile (HR = 10.136, 95% CI: 1.298–79.180, p = 0.027). Conclusions sdLDL was positively correlated with the incidence of cardiovascular and cerebrovascular events, which can predict the occurrence of an event and provide a scientific basis for early prevention.
Low-density lipoprotein (LDL) and total cholesterol (TC) are the most important risk factors for AS, contributing to the formation of atherosclerotic plaques and ultimately leading to cardiovascular and cerebrovascular events. 3 sdLDL is associated with AS, and an increased level of sdLDL has a certain predictive value for cardiovascular and cerebrovascular events. 4,5 Firstly, sdLDL is small and passes easily through the vascular wall, becoming an effective cholesterol source for the formation of atherosclerotic plaque. 6 Secondly, the low sialic acid content of sdLDL readily binds to anionic proteoglycans in the vessel wall, increasing the retention time of sdLDL in the subendothelial space of arterial vessels. 3 Moreover, sdLDL has few polar molecules on its surface and is susceptible to various chemical modifications such as glycosylation, 7 after which it is phagocytosed by macrophages and accelerates the formation of foam cells. 8 In addition, sdLDL contains few antioxidant vitamins 9 and is more likely to become oxidized LDL (OX-LDL) than larger forms of lipoproteins, generating oxidation-specific epitopes, inducing immune responses and inflammation, and attracting chemotactic monocytes to vascular endothelial cells, while also generating antigen-antibody complexes that induce the formation of more foam cells, which accumulate within the arterial wall and form the lipid core portion of atherosclerotic plaques. 10,11 OX-LDL is cytotoxic, damaging the vascular endothelium, disrupting homeostasis, and accelerating the formation of atherosclerotic plaques. 2 An increase of triglyceride levels in plasma can promote the transformation of LDL from lbLDL to sdLDL 12 and increase the risk of AS. 13 Studies have shown that sdLDL is closely related to the risk of CVD, and the National Cholesterol Education Program (NCEPIII) lists sdLDL as one of the risk factors for CVD. 14 To prevent and control cardiovascular and cerebrovascular events in high-risk stroke patients at an early stage, this study aimed to investigate the value of sdLDL levels in high-risk stroke patients for early prediction of cardiovascular and cerebrovascular events, provide a scientific basis for risk stratification management and early prevention in high-risk stroke patients, and strive to lighten the heavy burden caused by CVD.
| Study subjects
This was a prospective study. High-risk stroke patients were selected in April 2013 from the information of 15,933 residents older than 40 years in Chaohui Street, Xiacheng District, Hangzhou, Zhejiang Province. We enrolled 835 subjects; 12 were lost during the two-year follow-up, and 823 finally met the criteria, including 473 female and 350 male subjects. During the two-year follow-up, there were 18 cardiovascular and cerebrovascular events, including 13 hospitalizations and 5 deaths (the research flow chart is shown in Figure 1).
Subjects were required to stop taking lipid-lowering drugs for 4 weeks before enrollment.
Inclusion criteria: 8 risk factors were assessed according to the risk of stroke for people at high risk of stroke, those with 3 or more risk factors were considered high risk. Stroke-risk screening assessment criteria: For people aged 40 years or older, stroke risk screening assessment was performed based on the following 8 risk factors (a) history of hypertension (systolic blood pressure ≥140/90 mmHg) or taking antihypertensive drugs; (b) heart disease such as heart valve disease and/or atrial fibrillation; (c) dyslipidemia; (d) diabetes mellitus; (e) obesity or significant overweight (body mass index ≥26 kg/m 2 ); (f) lack of physical activity; (g) smoking; (h) family history of stroke.
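A minimal sketch of this screening rule is shown below; the field names are hypothetical and are not taken from the CSPPC screening forms.

```python
# Hedged sketch: flag a subject as high risk if 3 or more of the 8 risk factors are present.
RISK_FACTORS = [
    "hypertension", "heart_disease", "dyslipidemia", "diabetes",
    "obesity", "physical_inactivity", "smoking", "family_history_stroke",
]

def is_high_risk(subject: dict) -> bool:
    count = sum(bool(subject.get(factor, False)) for factor in RISK_FACTORS)
    return count >= 3

subject = {"hypertension": True, "smoking": True, "diabetes": True}
print(is_high_risk(subject))  # True
```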
Exclusion criteria: living or working outside the study area for more than six months; having severe liver or kidney disease or malignancy, psychiatric disease, or systemic immune disease; having incomplete information or missing visits during the 2-year follow-up cycle.
| China stroke prevention project committee (CSPPC) stroke program
Stroke has a high incidence, disability, and mortality, and is the leading cause of disease burden in China. To reduce the burden of stroke, the former Chinese Ministry of Health established the CSPPC in April 2011. The CSPPC formulates policies, issues clinical guidelines, and organizes community hospitals to carry out stroke risk factor screening and risk assessment for permanent residents over 40 years old in high-incidence areas, and to provide health education and regular physical examinations for the low-risk population, individualized intervention guidance for the middle-risk population, and further examinations and comprehensive intervention for the high-risk population. During regular follow-up of the middle-risk and high-risk groups, patients identified as having cervical vascular disease or suspected stroke are referred to the hospital for further diagnosis and treatment. 15
Our Stroke Prevention and Control Center is one of the members of the CSPPC, and this project is jointly conducted with our Stroke Prevention and Control Center according to the tasks under the CSPPC.
| Carotid ultrasound and other information
Ultrasound examination of the neck was performed using a diagnostic ultrasound machine (Siemens S2000, Germany). The examined vessels mainly included the bilateral common carotid arteries (CCA), carotid sinus (ICS), internal carotid artery (ICA), subclavian artery (SA), and vertebral artery (VA), to observe the smoothness of the vessel wall, the presence of plaque, and carotid artery stenosis. Carotid artery stenosis was classified as (i) mild stenosis, with a 1%-49% reduction in internal diameter, ultrasound images showing localized plaque, and no significant changes in blood flow; (ii) moderate stenosis, with a 50%-69% reduction in internal diameter, accelerated blood flow at the plaque stenosis, and formation of a pathological vortex distal to the stenosis; (iii) severe stenosis, with a 70%-99% reduction in internal diameter, aggravated plaque, further accelerated blood flow at the plaque stenosis, and a mixed signal of pathological vortex and turbulence distal to the stenosis.
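The grading rule above can be expressed as a simple lookup; the sketch below is illustrative only and mirrors the thresholds stated in the text.

```python
# Hedged sketch: map percent reduction in internal diameter to a stenosis category.
def grade_stenosis(percent_reduction: float) -> str:
    if percent_reduction <= 0:
        return "no stenosis"
    if percent_reduction < 50:
        return "mild"           # 1%-49% reduction
    if percent_reduction < 70:
        return "moderate"       # 50%-69% reduction
    if percent_reduction < 100:
        return "severe"         # 70%-99% reduction
    return "occlusion"

print([grade_stenosis(x) for x in (0, 30, 55, 85)])
```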
Demographic parameters (age, gender, waist circumference, BMI, systolic and diastolic blood pressure), the medical history of all subjects (heart disease, diabetes, hypertension, dyslipidemia, smoking), family history (hypertension, diabetes, coronary heart disease, stroke), stroke data, carotid ultrasound findings, and cardiovascular and cerebrovascular events were obtained from the China Stroke Data Center database.
Follow-up: All enrollees were followed up by standardized trained community physicians, either as an outpatient or by telephone at 3rd, 6th, 12th, and 24th months during the two years from April 2013 to April 2015. The main content of the follow-up visits was whether the enrollees had a cardiovascular and cerebrovascular event. No laboratory tests were performed during the follow-up. Cardiovascular and cerebrovascular events are defined as death due to stroke or heart disease, or admission due to coronary heart disease, nonfatal myocardial infarction, stroke, and transient ischemic attack (TIA).
After verification and confirmation, the data were entered into the database of the China Stroke Data Center.
FIGURE 1 Flow chart of subject selection. There were 835 cases enrolled from 15,933 individuals screened for stroke risk, 12 cases were lost to follow-up, and 823 cases finally met the criteria, including 473 female and 350 male cases. A total of 286 cases had carotid artery stenosis, and 18 cases experienced cardiovascular and cerebrovascular events.
| Determination of sdLDL and other biochemical indexes
The data were the laboratory tests performed once before the follow-up of the included patients. For all subjects, fasting serum samples were collected within 24 h; 3 ml of cubital venous blood was drawn into a separating gel-accelerating vacuum blood collection tube and left for 15 min after the plasma was precipitated.
| Cardiovascular and cerebrovascular event
Cardiovascular and cerebrovascular events were defined as death due to stroke or heart disease, or admission due to coronary heart disease, nonfatal myocardial infarction, stroke, or transient ischemic attack (TIA).
| Statistical analysis
Statistical analysis was performed using SPSS (version 20.0), and graphs were created using GraphPad Prism (version 9). Categorical variables were expressed as numbers (percentages, %), and continuous variables were expressed as mean ± standard deviation. Comparisons between groups were analyzed by the chi-square test. Two groups were compared by the t test, and multiple groups were compared by one-way analysis of variance. Non-normally distributed data were expressed as median (interquartile range), and comparisons between groups were made by the rank-sum test. Correlation analysis was performed using Spearman's method. Survival analysis was performed using the proportional hazards regression model (Cox regression analysis), and p < 0.05 was considered statistically significant.
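For readers who prefer an open-source workflow, the sketch below reproduces the same types of comparisons with SciPy on simulated data; it is illustrative only and does not use the study data.

```python
# Hedged sketch: group comparisons analogous to those described above, on toy data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
stenosis    = rng.normal(1.0, 0.3, 100)    # hypothetical sdLDL, stenosis group
no_stenosis = rng.normal(0.9, 0.3, 150)    # hypothetical sdLDL, no-stenosis group

# Categorical variable (e.g., history of diabetes) by group: chi-square test
table = np.array([[40, 60], [35, 115]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

# Continuous, normally distributed variable: independent t test
t_stat, p_t = stats.ttest_ind(stenosis, no_stenosis)

# Non-normally distributed variable: rank-sum (Mann-Whitney U) test
u_stat, p_u = stats.mannwhitneyu(stenosis, no_stenosis)

# Correlation between two lipid indicators: Spearman
rho, p_rho = stats.spearmanr(stenosis, rng.normal(0.5, 0.1, 100) + 0.2 * stenosis)

print(p_chi2, p_t, p_u, p_rho)
```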
| Relationship between sdLDL and carotid artery stenosis
A total of 823 subjects were enrolled in this study. According to the neck ultrasound findings, there were 537 cases in the no stenosis group and 286 cases in the stenosis group. The age, systolic blood pressure, proportion of males, and proportion of diabetes history were higher in the stenosis group than in the no stenosis group, and the differences between the two groups were statistically significant (p < 0.05) (Table 1). The sdLDL level in the stenosis group was higher than that in the no stenosis group, but the difference was not statistically significant. The Spearman correlation analysis showed that the lipid indicators including TC, TG, LDL, non-HDL, and Lp-PLA2 were positively correlated with sdLDL (r > 0, p < 0.05), and Lp-PLA2 was highly correlated with sdLDL in this high-risk stroke population (r = 0.555, p < 0.001) (Table 2). According to the degree of carotid artery stenosis, the subjects were divided into the no stenosis group (n = 537), the mild stenosis group (n = 254), and the moderate to severe stenosis group (n = 32). sdLDL increased with the severity of carotid artery stenosis, but the differences in the 7 indicators, including sdLDL, among the three groups were not statistically significant (p > 0.05) (Figure 2). According to the number of carotid arteries with stenosis, the subjects were divided into the group with no stenosis, the group with fewer than 3 stenoses (involving 1-2 carotid arteries), and the group with 3 or more stenoses (involving 4-6 carotid arteries); the differences in the 7 indicators, including sdLDL, among the three groups were not statistically significant (p > 0.05) (Figure 3).
This study also analyzed the relationship between the sdLDL quartile groups and the incidence of carotid stenosis. The incidence of carotid stenosis increased with increasing sdLDL levels, but without statistical significance (p > 0.05). In the grouping with or without stenosis and the grouping by stenosis severity, the incidences in the fourth quartile of the mild stenosis group and in the fourth quartile of the moderate to severe stenosis group were the highest among the four quartiles, and the differences were not statistically significant (p > 0.05) (Table 3). In the subsequent analysis, Cox regression with the forward LR method was used, with the occurrence of a cardiovascular or cerebrovascular event as the outcome, the follow-up time as the time variable, the sdLDL quartiles as a categorical covariate, and the first sdLDL quartile as the reference group. The results suggested that sdLDL was a risk factor for the occurrence of cardiovascular and cerebrovascular events (p = 0.037). The incidence of cardiovascular and cerebrovascular events increased with increasing sdLDL quartile (p = 0.015), and the risk of events in the fourth quartile of sdLDL was 10.136-fold higher than in the first quartile (HR = 10.14, 95% CI: 1.30-79.18, p = 0.027) (Table 6) (Figure 6).
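A minimal sketch of this quartile-based Cox model, using pandas and the lifelines package on simulated data rather than SPSS, is shown below; variable names and values are hypothetical.

```python
# Hedged sketch: Cox regression with sdLDL quartiles, Q1 as the reference category.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 823
df = pd.DataFrame({
    "sdLDL": rng.lognormal(mean=0.0, sigma=0.4, size=n),   # hypothetical sdLDL levels
    "time":  rng.uniform(1, 24, size=n),                   # follow-up time (months)
    "event": rng.binomial(1, 0.05, size=n),                # event indicator (toy rate)
})

# Quartile grouping, then dummy coding with Q1 as the reference group
df["quartile"] = pd.qcut(df["sdLDL"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
dummies = pd.get_dummies(df["quartile"], drop_first=True).astype(float)
model_df = pd.concat([df[["time", "event"]], dummies], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios (exp(coef)) for Q2-Q4 relative to Q1
```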
| DISCUSSION
Cardiovascular and cerebrovascular events continue to be one of the leading causes of death and disability in the world. 16 Previous studies have reported that sdLDL has a reliable predictive value for cardiovascular and cerebrovascular events. 21 Therefore, this study investigated the relationship between sdLDL and cardiovascular and cerebrovascular events, and the results showed that the sdLDL level in the cardiovascular event group was significantly higher than that in the no event group. The sdLDL level was positively correlated with the incidence of cardiovascular and cerebrovascular events. Carotid stenosis with a hemodynamic disorder is the key factor of stroke, and the automatic regulation of cerebral blood flow is the main way of brain tissue self-protection. Therefore, the risk of stroke caused by carotid stenosis is also related to the stability of plaque and the automatic regulation of cerebral blood flow. This may help explain why carotid stenosis cannot predict cerebrovascular events in this study.
FIGURE 2 Levels of seven indicators with different degrees of stenosis in the three groups. The differences were not statistically significant (p > 0.05).
FIGURE 3 Levels of seven indicators with different ranges of stenosis involvement in the three groups. The differences were not statistically significant (p > 0.05).
TABLE 4 Relationship between carotid artery stenosis and the incidence of cardiovascular and cerebrovascular events in high-risk stroke patients.
FIGURE 4 Difference of seven indicators between the group without cardiovascular and cerebrovascular events and the group with cardiovascular and cerebrovascular events; **p < 0.01.
Chen et al. 23 found a statistically significant difference in sdLDL in the ischemic stroke group compared with normal controls (p = 0.001). Chen et al. 24 found that sdLDL-C was higher in the cerebral infarction group than in the cerebral hemorrhage group in a study of 652 stroke patients (p < 0.05), and the specificity of sdLDL-C was 90.0%, which can be used as a risk assessment for cerebrovascular disease and has certain diagnostic value. Tu et al. 25 reported that adiponectin was associated with a high risk of major adverse cardiovascular and cerebrovascular events and mortality.
The subjects of this study were slightly different from those in the above-mentioned studies. The results suggest that sdLDL is a risk factor for cardiovascular and cerebrovascular events in patients with high-risk stroke (p = 0.037), and the risk of cardiovascular and cerebrovascular events increases with quartiles. These findings confirm the value of the clinical application of sdLDL levels in predicting the occurrence of cardiovascular and cerebrovascular events in high-risk stroke patients.
The limitations of this study are as follows. First, the number of specimens included was not large, the follow-up period was only 2 years, and the number of cardiovascular and cerebrovascular events that eventually occurred was relatively small. Second, the subjects were high-risk stroke patients among residents aged 40 years or older in Chaohui Street, Xiacheng District, Hangzhou, so the results are representative of only a small population. In future work, we will increase the sample size and continue follow-up to obtain more comprehensive data for further exploration of the clinical value of sdLDL in predicting cardiovascular and cerebrovascular events.
In conclusion, sdLDL levels are positively correlated with the incidence of cardiovascular and cerebrovascular events, which can predict the occurrence of cardiovascular and cerebrovascular events and provide a scientific basis for risk stratification management and early prevention in people with high risk of stroke.
ACKNOWLEDGEMENTS
We thank all participants and their families for their support and participation in the study. We also thank the staff of the People's Hospital of Zhejiang Province.
CONFLICT OF INTEREST
The authors declare that they have no competing interests.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2022-03-03T06:23:53.802Z | 2022-03-02T00:00:00.000 | {
"year": 2022,
"sha1": "57509abb74098e1054193f2625596c5561089903",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Wiley",
"pdf_hash": "44d7df991dc86dd6cd5a9d4badb367b7c4cc9e8d",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235690893 | pes2o/s2orc | v3-fos-license | Incidence, risk factors and outcome of acute kidney injury (AKI) in patients with COVID-19
Background Acute kidney injury (AKI) is a severe complication of coronavirus disease-2019 (COVID-19). This study aims to evaluate incidence, risk factors and case-fatality rate of AKI in patients with COVID-19. Methods We reviewed the health medical records of 307 consecutive patients with COVID-19 hospitalized at the University Hospital of Modena, Italy. Results AKI was diagnosed in 69 out of 307 (22.4%) COVID-19 patients. Stages 1, 2, or 3 AKI accounted for 57.9%, 24.6% and 17.3%, respectively. AKI patients had a mean age of 74.7 ± 9.9 years. These patients showed higher serum levels of the main markers of inflammation and higher rate of severe pneumonia than non-AKI patients. Kidney injury was associated with a higher rate of urinary abnormalities including proteinuria (0.44 ± 0.85 vs 0.18 ± 0.29 mg/mg; P = < 0.0001) and microscopic hematuria (P = 0.032) compared to non-AKI patients. Hemodialysis was performed in 7.2% of the subjects and 33.3% of the survivors did not recover kidney function after AKI. Risk factors for kidney injury were age, male sex, CKD and higher non-renal SOFA score. Patients with AKI had a mortality rate of 56.5%. Adjusted Cox regression analysis revealed that COVID-19-associated AKI was independently associated with in-hospital death (hazard ratio [HR] = 4.82; CI 95%, 1.36–17.08) compared to non-AKI patients. Conclusion AKI was a common and harmful consequence of COVID-19. It manifested with urinary abnormalities (proteinuria, microscopic hematuria) and conferred an increased risk for death. Given the well-known short-term sequelae of AKI, prevention of kidney injury is imperative in this vulnerable cohort of patients. Supplementary Information The online version contains supplementary material available at 10.1007/s10157-021-02092-x.
Introduction
COVID-19 is a complex infectious disease characterized by a broad spectrum of manifestations ranging from asymptomatic to severe illness [1]. The disease is associated with a high rate of morbidity and mortality in patients hospitalized for severe symptoms of SARS-CoV-2 pneumonia [2]. The lung is the main target of the virus, but other organs including the brain, liver and kidneys can be involved in this infection [3]. The pathogenesis of COVID-19 is poorly understood, and the principal etiology of organ dysfunction seems to be due to the direct and indirect effects of proinflammatory cytokine release [4][5][6].
The rate of acute kidney injury (AKI) in COVID-19 is unclear, but recent evidence has established that kidney involvement is proportional to the severity of the underlying lung involvement [7]. Studies conducted in China and the US reported a high prevalence of urinary abnormalities (proteinuria and microscopic hematuria) and a rate of AKI ranging from 0.5% to 36.6% [7][8][9][10][11][12][13]. A report from Bordeaux (France) estimated the incidence of AKI at about 80% in severely ill patients admitted to the ICU [14].
The etiological mechanism leading to kidney injury is still unknown. Direct cytopathologic damage, cytokine storm/sepsis, drug toxicity and dehydration may be potential interlinked mechanisms of kidney injury in COVID-19 patients. A great number of living and post-mortem kidney biopsies showed widespread proximal tubule injury consistent with acute tubular necrosis [15][16][17]. Collapsing glomerulopathy and thrombotic microangiopathy were other common findings on kidney biopsy [16,18]. The use of offending agents including nonsteroidal anti-inflammatory drugs [19] and high-dose vitamin C [20] has been associated with kidney involvement. Lastly, the detection of the virus in the renal parenchyma, and consequently in urine, leads to the hypothesis of a potential cytopathic effect of the virus [21], though the pathogenetic mechanism of SARS-CoV-2-driven kidney injury remains elusive. Based on these data, understanding the impact of SARS-CoV-2 infection on kidney function is necessary to elucidate the epidemiological and clinical characteristics of patients experiencing AKI. The aim of this study was to evaluate the incidence, risk factors and outcome of AKI in COVID-19 patients.
Study design and setting
This retrospective, observational study was conducted in patients with laboratory-confirmed COVID-19 admitted to the University Hospital of Modena. The city of Modena is located in the Emilia Romagna region, which overall accounted for a total of 28,143 documented COVID-19 cases as of June 18, 2020 [22]. Clinical and laboratory data were prospectively recorded in consecutively admitted patients from 23 February to 27 April 2020. This time frame coincided with the observational period of the study.
The study was approved by the regional ethical committee of Emilia Romagna (prot. n. 0013376/20).
Population
This study recruited all consecutive adult patients (≥ 18 years) admitted with SARS-CoV-2 infection. Patients with chronic kidney disease (CKD) in renal replacement therapy were excluded from the analysis. According to the WHO guidelines, the diagnosis of SARS-CoV-2 infection was defined as a positive real-time reverse transcriptase-polymerase chain reaction (RT-PCR) assay of nasopharyngeal swabs or lower respiratory tract specimens. [23].
Standard of care
Delivery of healthcare services for all SARS-CoV-2 infected patients was ensured by a public healthcare system. Care of COVID-19 patients was delivered by an integrated multidisciplinary team including infectious disease specialists, pneumologists, internal medicine physicians, nephrologists, rheumatologists, intensive care and coagulation specialists. Patients were admitted on general and infectious disease ward.
From 18 March 2020, combination therapy with darunavir/cobicistat was stopped owing to emerging evidence of the lack of clinical benefit of protease inhibitors (e.g., lopinavir) in treating COVID-19 [27]. A sub-cohort of patients received tocilizumab in addition to the standard of care when they met the following criteria: SO2 < 92% and PaO2/FiO2 < 200 mmHg in room air, or a decrease in PaO2/FiO2 > 30% in the previous 24 h after hospitalization.
Severely ill patients were evaluated by intensive care consultants for ICU admission and invasive mechanical ventilation eligibility. Medical history, age, comorbidities, vital signs, physical and laboratory examinations were assessed daily.
Criteria and definition
AKI was defined according to the 2012 Kidney Disease: Improving Global Outcomes (KDIGO) criteria [28]. Three AKI stages were classified as follows: (i) stage 1: increase in serum creatinine (sCr) ≥ 0.3 mg/dl within 48 h, or a 1.5-1.9 times increase over baseline sCr measured within 7 days; (ii) stage 2: a 2-2.9 times increase over baseline sCr measured within 7 days; (iii) stage 3: a 3 times or greater increase over baseline sCr measured within 7 days, or sCr ≥ 4 mg/dl within 48 h, or the initiation of renal replacement therapy [28]. The stage of AKI was the highest stage reached during hospitalization. Urine output criteria were not used for the diagnosis of AKI.
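A minimal sketch of the sCr-based staging rule just described (not the study's code); time windows are simplified to days since the baseline measurement, and urine-output criteria are omitted, as in the study.

```python
def kdigo_stage(baseline_scr, scr_series, on_rrt=False):
    """Return the highest serum-creatinine-based KDIGO AKI stage (0 = no AKI).

    baseline_scr: baseline serum creatinine in mg/dl.
    scr_series:   iterable of (days_since_baseline, sCr_mg_dl) pairs.
    on_rrt:       True if renal replacement therapy was initiated.
    """
    stage = 3 if on_rrt else 0
    for day, scr in scr_series:
        ratio = scr / baseline_scr
        if (day <= 7 and ratio >= 3.0) or (day <= 2 and scr >= 4.0):
            stage = max(stage, 3)
        elif day <= 7 and ratio >= 2.0:
            stage = max(stage, 2)
        elif (day <= 7 and ratio >= 1.5) or (day <= 2 and scr - baseline_scr >= 0.3):
            stage = max(stage, 1)
    return stage
```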
Baseline sCr was defined as the last available sCr measurement within 365 days before the onset of COVID-19 symptoms. When not available prior to the diagnosis of COVID-19, sCr measured on admission was used as the 'baseline' value.
The estimated glomerular filtration rate (eGFR) was calculated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation [29].
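For reference, a sketch of the 2009 CKD-EPI creatinine equation cited here; the coefficients are quoted from memory of the published equation and should be checked against reference [29] before any reuse.

```python
def ckd_epi_2009(scr_mg_dl, age_years, female, black=False):
    """Estimated GFR (ml/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```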
Polypharmacy was defined as the use of five or more medications [30].
Non-renal SOFA score was calculated by subtracting the score resulting from the degree of renal dysfunction from total score [31].
Data collection
Data collected from electronic medical records included demographics, comorbidities, medications, laboratory values, vital signs and outcomes. They were prospectively recorded from hospital admission. Comorbidities were identified upon review of the patient's medical records. International Classification of Diseases (ICD) was used to code and classify mortality data from death certificates.
Outcome
The primary outcome measure was the incidence of AKI in hospitalized patients with COVID-19. Additional analyses included the detection of risk factors for AKI and its relationship with mortality.
Statistical analysis
Baseline characteristics were analyzed using descriptive statistics and are reported as proportions, means (standard deviation [SD]) or medians (interquartile range [IQR]), as appropriate. The χ2 test or Fisher's exact test was used to analyze categorical variables. Continuous variables were compared using an unpaired t-test or the Kruskal-Wallis test, as appropriate. ANOVA was used to evaluate differences between AKI stages. A gamma distribution function was used to plot the probability of AKI events during hospitalization.
Mortality and the incidence of AKI were evaluated using Kaplan-Meier (K-M) curves. Univariate and multivariate analyses were performed by Cox regression to identify risk factors for AKI. Cox regression also assessed the association between AKI and in-hospital mortality, after adjusting for sex, age, CKD, cardiovascular disease (CVD), diabetes, non-renal SOFA score and chronic obstructive pulmonary disease (COPD).
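The survival analyses described above were run in SPSS; the sketch below shows an equivalent setup in Python with the lifelines library, purely for illustration. The dataframe and column names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.read_csv("aki_cohort.csv")  # hypothetical extract of the study dataset

# Kaplan-Meier survival curves by AKI status
km = KaplanMeierFitter()
for label, grp in df.groupby("AKI"):
    km.fit(grp["days_in_hospital"], grp["died"], label=f"AKI={label}")
    km.plot_survival_function()

# Cox model for in-hospital death, adjusted as described in the text
covariates = ["AKI", "age", "male", "CKD", "CVD", "diabetes", "non_renal_SOFA", "COPD"]
cph = CoxPHFitter()
cph.fit(df[["days_in_hospital", "died"] + covariates],
        duration_col="days_in_hospital", event_col="died")
print(cph.hazard_ratios_["AKI"])  # adjusted HR for in-hospital death with AKI
```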
A P value of less than 0.05 was considered statistically significant. SPSS 23® was used for statistical analysis.
Clinical characteristics of patients with AKI
A total of 307 patients were included in the study. During the study period, 22.4% (n = 69) of patients developed AKI. The mean age of patients with AKI was 74.7 ± 9.9 years. sCr was measured 734 times in the AKI group (10.6 times per patient) during the average period of hospitalization lasting 16.7 ± 10.6 days.
The rate of mechanical ventilation and ICU admission in AKI group was 26% and 34.7%, respectively. Overall, patients with severe acute respiratory distress syndrome requiring mechanical ventilation had a higher rate of AKI events than non-mechanically ventilated patients (P = 0.045). In particular, the incidence of AKI stage 2 and 3, and unrecovered AKI was higher in patients on mechanical ventilation (Supplementary Table 1).
Stage of AKI
Patients with AKI were stratified according to the 2012 KDIGO guidelines and were distributed as follows: stage 1, 57.9%; stage 2, 24.6%; and stage 3, 17.3%. At the end of follow-up, 33.3% of the survivors had not fully recovered their kidney function to its level prior to admission. No pre-existing differences in terms of morbidities were observed between these two groups of patients. As shown in Fig. 1, the cumulative incidence curves show a steep rise in AKI stage 3 events within the first 10-15 days from admission.
Observation of the frequency histogram (Fig. 2) and the probability distribution plot (Fig. 3) revealed a peak of AKI events at the timing of hospital admission that decreased gradually up to the end of the follow-up period. A substantial clustering of AKI events was noted before patients' exitus ( Fig. 3B).
Risk factors for AKI
To capture probable causes of AKI (e.g., dehydration, hypotension), patients with kidney injury were subdivided into smaller groups, but analysis of the main laboratory examinations, performed at baseline and at the diagnosis of kidney injury, did not reveal any clinically significant differences (Supplementary Table 3).
Outcome
Patients with AKI had an overall mortality rate of 56.5%. A high mortality rate was detected in patients with AKI stage 2 (82.4%) and 3 (83.3%). The primary causes of death were respiratory failure (61.5%), followed by sepsis (15.3%) and septic shock with MOF (9%). Crude mortality was significantly higher in AKI patients (56% vs 6.7%; P≤0.0001) compared to patients with normal kidney function (Table 2).
In a multivariable Cox regression analysis that included age, sex, comorbidities (diabetes mellitus, CVD, CKD, COPD) and non-renal SOFA score, the HR for in-hospital death was 4.82 (95% CI, 1.36-17.08) in patients with AKI and 13.21 (95% CI, 2.92-59.69) in patients with unrecovered kidney function at the end of follow-up, compared to non-AKI patients (Table 4).
Discussion
The results of this study confirm the recently published data reporting AKI as a frequent event in COVID-19. In a cohort of 307 patients hospitalized for severe respiratory symptoms due to SARS-CoV-2 infection, AKI complicated the clinical course of 69 (22.4%) patients. In the majority of them (57.9%) AKI was mild (stage 1), whereas AKI stage 2 and 3 accounted for 24.6% and 17.3% of the cases, respectively. As already noted in previous studies, [7,13] kidney injury was accompanied by a higher burden of urinary abnormalities such as microscopic hematuria and proteinuria compared to patients who did not experience AKI. Renal function was replaced in 7.2% of patients with AKI by continuous renal replacement therapy. The outcome of these patients was poor because all died of refractory septic shock evolving in multiorgan failure. Of note, one-third of survivors did not have complete renal recovery at the end of follow-up.
AKI is a devastating syndrome with a significant impact on morbidity and mortality [32]. Early reports from Chinese cohorts documented a low prevalence of renal involvement [8,33]. Subsequent observational studies conducted in larger cohorts reported an incidence of AKI ranging from 0.5% to 10.4% [8][9][10][11][12][13]. A recent study evaluating 5449 patients in the New York metropolitan area confirmed that AKI was a frequent complication of COVID-19 [7] since it was diagnosed in more than onethird of patients. AKI occurred in patients with a high burden of comorbidities and, mainly in patients with respiratory distress requiring mechanical ventilation. We are unable to explain the wide variability in the prevalence of AKI, but different criteria adopted for the definition of AKI, population selection, sCr measurement frequency and timing of hospital admission are all potential determinants of these heterogeneous estimates. In our study, AKI was predominantly diagnosed in symptomatic older patients (74.7 versus 62.4 years) experiencing a more severe infection compared to non-AKI subjects. Patients who developed AKI presented a significantly more severe systemic disease (SOFA score, 3.8 versus 2.3), a high level of the classical biomarkers of systemic inflammation (IL-6, LDH, D-dimer, albumin, platelet count, hemoglobin, ferritin) and impairment of other organs including lung (PO 2 /FiO 2 ), heart (troponin, BNP) and liver (bilirubin, ALT).
Of interest, an early peak was noted in the timeline of AKI development. Similar to the findings of Hirsch et al. [7], this high number of AKI events, coinciding with admission, imposes a careful management of COVID-19 patients within few hours from admission. Early assessment of basic vital parameters and hemodynamic stabilization of critically ill patients may reduce, as far as possible, the severity of kidney injury. After the first peak, observation of our data showed a substantial clustering of AKI events before death. In this setting, the diagnosis of AKI reflected the severity of COVID-19 that in the most severe cases manifested with multiple organs failure including AKI.
The etiology of COVID-19-associated AKI is not fully understood. Potential triggering factors include hemodynamic disturbance, inflammation and exposure to nephrotoxic agents. A further cause of AKI is the kidney tropism of SARS-CoV-2. Recent studies provided insights into the ability of the virus to target the tubular and glomerular cells of the kidney, especially in critically ill patients [34,35]. In the present study, we have no data to prove direct viral damage of the renal parenchyma. AKI was more frequent among patients with CKD and diabetes mellitus, comorbidities largely known to be associated with an increased vulnerability to kidney injury [36][37][38]. Analysis of risk factors showed that non-renal SOFA score, age, male sex and CKD were statistically significant predictors of AKI. Consistent with our findings, age [39], male sex [40] and CKD [41] are well-known risk factors for AKI in the general population. The SOFA score is a reliable prognostic score for critically ill patients with sepsis [42] as well as kidney injury [43][44][45][46]. Furthermore, the extrarenal SOFA score has been identified as an independent predictor of AKI in a cohort of non-COVID-19 critically ill surgical patients [47]. In the setting of SARS-CoV-2 infection, a study conducted on 5216 US veterans provided evidence that older age, male sex and lower baseline eGFR were independent risk factors for AKI during hospitalization [48]. In parallel to our findings, several studies confirmed that age [7,49,50], male sex [50,51], severe COVID-19 (respiratory distress) [52] and CKD [49,51] were independent risk factors for the development of COVID-19-associated AKI during hospitalization.
The identification of these risk factors may suggest potential strategies for the prevention of kidney injury, as this event is independently associated with in-hospital mortality. The burden of this association is estimated to confer about a five-fold excess risk of mortality in patients with AKI and a 13-fold excess risk in subjects with unrecovered AKI. Detection of vulnerable patients at risk for AKI, together with preventive and supportive strategies in patients prone to AKI, could improve the prognosis of these patients and prevent long-term consequences [53]. In line with national health policies, we suggest implementing home assistance for infected patients to minimize the surge of critically ill patients in already overwhelmed hospitals. Therapeutic strategies providing intravenous hydration in dehydrated patients, avoidance of nephrotoxic agents (NSAIDs) and early withdrawal of offending agents (i.e., diuretics, RAS blockers) may be beneficial if undertaken before hospital arrival.
Several limitations of this study should be mentioned, some of which are intrinsic to its retrospective nature. A certain number of AKI events may have been underdiagnosed because of the unavailability of urinary output and sCr at the time of symptom onset. As a result, the incidence of AKI may be underestimated in our population; however, this limitation also applies to recently published retrospective studies on AKI [13,14].
Although the hazard ratio for death was adjusted for potential demographic and clinical confounding variables, we cannot rule out the effect of other unrecognized confounders. We used the non-renal SOFA score to avoid collinearity between predictor and outcome. We are confident that the adjustment of our model for this strong clinical variable reinforced the relationship between AKI and in-hospital mortality. Lastly, the lack of data on the long-term outcome of kidney injury does not allow us to weigh the real consequences of AKI in terms of morbidity and mortality in a cohort of patients at high risk for CKD.
Conclusion
Acute kidney injury was a frequent complication of COVID-19; in our cohort of hospitalized patients it occurred in one-fifth of the population. AKI was generally diagnosed in symptomatic elderly patients with hypoxemia and a severe systemic inflammatory response to the ongoing infection. Non-renal SOFA (score > 3), age, male sex and CKD were risk factors for AKI in our cohort of patients. Identification of the etiological mechanism of AKI and strategies aimed at prioritizing the prevention and early identification of AKI are urgently required, as AKI is an independent predictor of all-cause mortality in COVID-19. | 2021-07-01T13:42:58.555Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "e43f898e09296b20221bb6d2aac5e9bb1d0c5b63",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10157-021-02092-x.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "e43f898e09296b20221bb6d2aac5e9bb1d0c5b63",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235721367 | pes2o/s2orc | v3-fos-license | Pharmacovigilance
When a new medication is released in the market, information about its adverse effects becomes available, which may result in its withdrawal, restrictions in use and labelling changes. Some adverse effects are a cause of concern among healthcare professionals and the public. Data on drug efficacy and safety are usually based on the experience of thousands of people who participated in controlled clinical trials. 4 Rare adverse events may not always be identified in clinical trials because of the lack of long-term safety data and unforeseen interactions with coexisting clinical conditions or other drug therapies. 2,4 Risks and benefits associated with medications can be better understood only after their use by a wider group of people and monitoring for a longer period. 2 Characterisation of a new drug's complete safety profile relies on clinicians' careful observation of its effects in the 'real world' practice; pharmacovigilance is the observational science that helps in this process. 4 Pharmacovigilance helps identify the safety concerns associated with medications and helps regulatory agencies or manufacturers make decisions regarding withdrawal, restrictions in use or labelling changes for medications.
pharmacovigilance systems and spontaneous reporting

Pharmacovigilance employs various methods to monitor the safety of medications, with spontaneous reporting being the most common one. Spontaneous reporting is done by people who make a connection between a drug and a suspected drug-induced event. These data about suspected ADRs are collected in a central database. 5 Although spontaneous reporting is critical for drug safety monitoring and should be considered a professional responsibility, under-reporting of ADRs is a limitation of current pharmacovigilance systems. 2 Despite the inherent limitations of spontaneous reporting, it provides crucial evidence for generating a hypothesis regarding the association between a drug and an adverse event. 5 Carefully planned post-marketing studies and ongoing systematic evaluation using linked databases can help construct efficient pharmacovigilance systems. 2 Pharmacovigilance serves as an indicator of the clinical care standards that are practised within a country. 6 Every country has its own pharmacovigilance programme due to differences in several factors, including predominant diseases, prescribing practices, the genetic composition of the population, diet, and people's traditions. These factors can influence the pattern, presentation and incidence of ADRs. 7 In response to the thalidomide disaster in 1961, the WHO initiated the Programme for International Drug Monitoring (PIDM) and has an active WHO Collaborating Centre for International Drug Monitoring (Uppsala Monitoring Centre, Sweden), which promotes pharmacovigilance at the country level. The WHO programme is a worldwide collaboration of 140 full- and 30 associate-member countries and contributes towards patient safety worldwide. 8 Safety information received from pharmacovigilance centres helps design drug utilisation practices, essential drugs programmes, standard treatment guidelines and national and institutional formularies. 6 Regulatory authorities maintain databases of adverse event reports and analyse them systematically for new safety signals; one striking case report, an unusual pattern of adverse events or a collection of adverse event reports exceeding the expected level in usual clinical experience might initiate a targeted and comprehensive investigation and analysis. 4

pharmacovigilance in practice

A healthcare system that includes pharmacovigilance promotes the safety of medications by minimising the occurrence of ADRs and provides a warning network of various healthcare providers, regulators, manufacturers and consumers to take remedial actions in a timely and orderly manner. 9 The key stakeholders involved in pharmacovigilance are patients, healthcare professionals, governments and pharmaceutical companies. 2 Among these stakeholders, healthcare professionals play the most significant role.
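As noted above, regulatory authorities screen spontaneous-report databases for collections of reports "exceeding the expected level". One common, simple way such disproportionality is quantified is the proportional reporting ratio (PRR); the sketch below uses made-up counts and is not tied to any specific national database.

```python
def proportional_reporting_ratio(a, b, c, d):
    """PRR from a 2x2 spontaneous-reporting table.

    a: reports of the drug of interest with the event of interest
    b: reports of the drug of interest with any other event
    c: reports of all other drugs with the event of interest
    d: reports of all other drugs with any other event
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical example: 30 of 1,000 reports for drug X mention event E,
# versus 200 of 100,000 reports for all other drugs.
print(proportional_reporting_ratio(30, 970, 200, 99_800))  # -> 15.0
```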
Pharmacovigilance is a multidisciplinary approach that requires the collaboration of multiple disciplines such as clinicians, pharmacists, nurses and dentists. A clinician's role in handling ADRs is essential not only for patients' safety but also for drug safety monitoring at the population level. 7 Pharmacists monitor the ongoing safety of medicines and are the members of the multidisciplinary team most responsible for establishing and maintaining an effective pharmacovigilance programme in a practice setting. Pharmacists provide information related to medication safety after critical evaluation. 9 The exclusive role of nurses in pharmacovigilance is identifying ADRs, which is difficult for other healthcare providers. 10 Dentists may help build a better pharmacovigilance system by adopting pharmacovigilance practices and reporting ADRs that are useful for dentistry as a whole. 11 Pharmacovigilance education and training for healthcare professionals help construct a better pharmacovigilance system in clinical practice. 6 Key pharmacovigilance aspects should be integrated into existing programmes as well as courses for medical, pharmacy, dentistry and nursing education. 12 Although basic knowledge about ADRs can be acquired through undergraduate pharmacology textbooks and curricula, additional educational efforts are needed to inculcate the habit of drug safety and pharmacovigilance among medical students. 2,7 These students should be trained so that they are able to report ADRs in their area. 7 Competence in handling ADRs in clinical practice is also important for drug safety monitoring at the population and individual patient levels. 12 Healthcare professionals should possess the skills required to critically evaluate drug information and decide how a drug's safety profile might apply to a particular patient. 6 Educating and training healthcare professionals and linking the clinical experience of drug safety with research and health policies can enhance effective patient care. 9

pharmacovigilance system in oman

To address the need for an effective system for routine drug safety monitoring and to ensure public health protection in Oman, the Ministry of Health (MOH) joined the WHO PIDM in 1995. 13,14 The background activities were initiated in the International Communication Section under the Drug Control Department, and all healthcare professionals working in both the government and private sectors in Oman were involved in the programme. In 2015, the Department of Pharmacovigilance and Drug Information (DPVDI) was established as the National Pharmacovigilance Centre (NPVC), following a restructuring of departments in the MOH, Oman. Pharmacovigilance activities in the DPVDI are based in the Directorate General of Pharmaceutical Affairs and Drug Control in the MOH [Figure 1]. The DPVDI also collaborates with international stakeholders, such as the WHO and the Uppsala Monitoring Centre (UMC), Sweden, for matters related to the safety of medicines.
There are 34 regional pharmacovigilance centres and 80 sub-regional pharmacovigilance centres functioning under the NPVC in Oman. The ADR reporting algorithm of the Omani NPVC is depicted in Figure 2. 15 The total number of ADRs reported at the NPVC in the initial years of the ADR monitoring programme was limited; however, it increased through constant awareness programmes, workshops and training focused on healthcare professionals at regional and institution levels. The total number of reports submitted to the UMC in 2019, 2018 and 2017 were 2,472, 1,703 and 2,196, respectively. The DPVDI was instrumental in developing guidelines such as the Guideline on Good Pharmacovigilance Practices in Oman, Guide for Reporting Adverse Drug Reactions and Quality Problems, Guide for Direct Healthcare Professional Communications and a Supplement to Chapter 11 that focused on Marketing Authorisation Holders and pharmaceutical manufacturing companies. 15,16 These guidelines will facilitate the activities related to pharmacovigilance within the country.
Pharmacovigilance is an ongoing process during medication use and is an essential component of clinical practice that promotes safe medication use through the prevention, identification, analysis, management and documentation of adverse effects and drug-related problems. Stakeholders involved in pharmacovigilance include patients, healthcare professionals, drug manufacturers and regulatory agencies. A multidisciplinary approach involving healthcare professionals such as pharmacists, clinicians, nurses and dentists is essential for developing an effective pharmacovigilance system. Teaching pharmacovigilance aspects to future healthcare professionals as a part of their curriculum will ensure effective use of these aspects during clinical practice. Although Oman has been a part of the global pharmacovigilance programme for several years and has an active pharmacovigilance system, continuing awareness and | 2021-07-04T05:21:23.193Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "44db878eba863e52d01def06e78d2638ca8a015f",
"oa_license": "CCBYND",
"oa_url": "https://journals.squ.edu.om/index.php/squmj/article/download/4398/3203",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "44db878eba863e52d01def06e78d2638ca8a015f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
225297185 | pes2o/s2orc | v3-fos-license | Analysis of Cavitation Processes in Xylem
Cavitation in plants is caused by the development of air bubbles, which is related to their equilibrium and development. There is a univariate cubic equation for bubble balance. A new root formula for this kind of equation was proposed by Shengjin Fan, which is simpler than Cardano's. Using Shengjin's formulas and taking the water pressure P_l as an independent variable, this paper gives the exact solution of the equation under certain conditions. The stability of the equilibrium of an air bubble in its different radius ranges is obtained in a way different from previous work. This kind of cavitation includes two types: the first type may be caused by the growth of pre-existent air bubbles; the second type is air seeding, here defined as the sucking of air bubbles from already gas-filled conduits. For air seeding, three ways of cavitation have been proposed. For the first type, this paper puts forward that two ways of cavitation can occur, which are the same as the first two ways of air seeding except for the air reservoirs. Moreover, for the first way of the first type, the range of water pressures is the same as that of the first way of air seeding. For the second way of the first type the range of water pressures is much wider; the pressure range equals the pressure sum of the second and third ways of air seeding. Through the specific data, the relationship between the two types is given.
bolism formation via the process of air-seeding etc. [6] [7]. Whether the capillary failure is an appropriate physical model comes to be a question [7]. Then, from the experiments [8] [9] [10] [11], it is obvious that the former hypothesis of air seeding is still effective to the xylem of some trees although cavitation in lipid bilayers has negative pressure stability limit [12].
Where does an air seeding event take place? Considering the potential importance of the rare pit hypothesis, Plavcová et al. [13] suggested that more attention should be paid to the structural irregularities, as those may represent the rare sites ultimately responsible for air-seeding.
Isolated conduit has been seen, which might be caused by another mechanism [9] [11]. The development of nanobubbles snapped off at pit membranes can also cause cavitation events [14]. These all may involve the growth of pre-existent air bubbles in xylem.
Ponomarenko et al. [11] distinguished two types of optical events. The first is the "nucleation" events, starting in a fully wet area, which might be caused by the growth of pre-existent air bubbles. The second is the "air-seeding" events, being defined as the appearance of bubbles near an already gas-filled conduit.
The definition of types of cavitation in this paper follows that defined by Ponomarenko et al. [11].
Three ways of cavitation by air seeding have been proposed [15] (In the article [15] the word "way" was defined as word "type"). The types of cavitation by the growth of pre-existent air bubbles in xylem should be given more attention.
The two types of cavitation are both related to the equilibrium, stability and development of air bubbles in xylem. Analysis of bubble expansion by mechanism and by the equilibrium criterion of the Helmholtz function has been made, based on the equation of bubble balance [16] [17]. This is a univariate cubic equation. Taking the mole number n of air in a bubble as an independent variable, its analytic solution has been given [18]. A new formula for finding the roots of a univariate cubic equation was proposed by Fan [19], which is simpler than Cardano's. Using Shengjin's formulas and taking the absolute water pressure P_l as an independent variable, this paper gives the exact solution of the equation of bubble balance under certain conditions. As gas super-saturation is likely to occur in xylem sap almost daily [14], here the number n is regarded as a constant. The stability of the equilibrium of an air bubble in its different radius ranges is obtained in a way different from our previous article [17]. For the first type, this paper puts forward two ways of cavitation, which are the same as the first two ways of air seeding except for the air reservoirs. Then, the relationship between the two types of cavitation is given.
Equilibrium Equation of Air Bubbles
Suppose there is a bubble of radius r with n moles of air in xylem sap. In order to simplify the problem, several assumptions are made. First, because the water saturation vapor pressure in a bubble is generally less than 0.0023 MPa at 20°C, compared with the atmospheric pressure P_o, it is ignored. We also ignore some other factors, including abundant hydrophobic surfaces and insoluble surfactants in xylem.
According to the ideal gas law P_g = nRT/V, the gas pressure P_g of a bubble of volume V is determined by its radius. When a bubble is in equilibrium, we have Equation (3). The relationship among P_l, the atmospheric pressure P_o and the xylem pressure is given by …; the solutions of Equation (3) are the radii of the bubble in equilibrium.
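The displayed equations in this passage did not survive extraction. Under the paper's stated assumptions (ideal gas, vapour pressure neglected), combining P_g = nRT/V with V = (4/3)πr³ and the Young–Laplace condition P_g = P_l + 2σ/r gives the univariate cubic that the text presumably refers to as Equation (3):

```latex
\frac{3nRT}{4\pi r^{3}} \;=\; P_l + \frac{2\sigma}{r}
\qquad\Longleftrightarrow\qquad
P_l\,r^{3} + 2\sigma\,r^{2} - \frac{3nRT}{4\pi} \;=\; 0 .
```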
The above system consists of three parts: an air bubble, the surrounding water, and the interface between the air and the water. Corresponding to a fluctuation, the changes of its Helmholtz function are given by Expression (4). Integrating Expression (4) gives the Helmholtz function F(r). Once F(r) (Figure 2) reaches an extremum, i.e. F′(r) = 0, the bubble attains its equilibrium. Thus, from Expression (4) we also obtain Equation (3).
Solution of Equation (3)
Letting the left side of Equation (3) be a function f(r) of r: 1) if P_l = 0, the equation reduces to a simple form; 2) if P_l ≠ 0, the analytic solution of Equation (3) can be obtained from the corresponding equation by Shengjin's formulas [19]. ① When P_l > 0, the equilibrium radius satisfies r < r′ = (3nRT/(8πσ))^(1/2). As the xylem pressure P_l is often negative, we do not pay more attention to this case.
② When P_l < 0 there are several situations, as follows.
The values of r_II are all negative and should not be considered. iii) When …. Thus, r_III is r_1 in Figure 3, the values of which lie in the range r_o < r_1 ≤ r*.
Therefore, in the range P*_l ≤ P_l < 0 …. To sum up, these characteristic radii satisfy r_o′ < r_o < r_1 < r* < r_2.
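As a numerical cross-check of the analytic (Shengjin-formula) solution, the equilibrium radii of the reconstructed cubic above can be found directly; the sketch below uses illustrative values for σ, nRT and P_l rather than the paper's Table 1 entries.

```python
import numpy as np

sigma = 0.072    # N/m, air-water surface tension near 20 C
nRT = 1.0e-16    # J, illustrative amount of gas in the bubble
P_l = -0.5e6     # Pa, illustrative (negative) absolute water pressure

# Coefficients of P_l*r^3 + 2*sigma*r^2 + 0*r - 3nRT/(4*pi) = 0, highest power first.
roots = np.roots([P_l, 2.0 * sigma, 0.0, -3.0 * nRT / (4.0 * np.pi)])
equilibria = sorted(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
print(equilibria)  # two positive radii here (stable r_1, unstable r_2); none below the Blake threshold
```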
Stability of Bubble Equilibrium
The stability of an air bubble in equilibrium depends on Formula (6b). Where the Helmholtz function F(r) reaches its minimum, the equilibrium of the bubble is stable; in turn, where F(r) reaches its maximum, the equilibrium of the bubble is unstable.
4) When P_l < P*_l, a gas bubble cannot be at any equilibrium. Every bubble has its own nRT, and hence its own P*_l, called its Blake threshold pressure, and its own r*, or Blake critical radius [20] [21].
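The Blake quantities mentioned here are not displayed in the extracted text; they follow from the reconstructed cubic by minimising P_l(r) = 3nRT/(4πr³) − 2σ/r over r, giving (in the standard form, with vapour pressure neglected as assumed above):

```latex
r^{*} \;=\; \sqrt{\frac{9\,nRT}{8\pi\sigma}},
\qquad
P_l^{*} \;=\; -\,\frac{4\sigma}{3\,r^{*}}
\;=\; -\,\frac{4\sigma}{3}\sqrt{\frac{8\pi\sigma}{9\,nRT}} .
```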
First Type of Cavitation: Growth of Pre-Existent Air Bubbles in Conduits
Suppose that, as P_l decreases, a bubble with n moles of air in a conduit of radius r_c expands stably. Whether its Blake radius satisfies r* > r_c or not, the resulting ways of cavitation are the same as the first two ways of cavitation by air seeding [15], except that isolated embolized conduits are formed without any air reservoir.
Second Type of Cavitation: Air Seeding
When an air seed is sucked into a conduit of radius r_c from the atmosphere through a pore of radius r_p in a pit membrane, its initial radius equals r_p and its initial gas pressure is P = P_o [22]. In the pressure range −2P_o < P_l < P_o its radius should be r_o′, r_o or r_1 [17]. As P_l decreases, it will develop like a pre-existent air bubble growing in a conduit, presenting the first or second way of cavitation but with air reservoirs [15].
If a seed enters a conduit of radius r_c from the atmosphere through a pore of radius r_pc in the conduit wall and will break up at P*_lc, there is a relationship between these quantities [22], and the pressure P_l at which the seed enters the conduit follows from it. However, at that moment the radius of the seed reaches r_c; it should then become a long-shaped bubble, and the exploding event might disappear.
Using formulas (9) and (10), and combining the results of the articles [15] [17] the following conclusions are obtained.
1) In the range P_lc ≤ P_l < P_o and for r_p ≥ r_pc, the first way of cavitation will form.
2) In the range −2P_o < P_l < P_lc and for 0.487 μm < r_p < r_pc, the second way of cavitation will take place.
3) For P_l ≤ −2P_o and r_p ≤ 0.487 μm, soon after an air seed is sucked into a conduit, once its radius reaches r_2 it will explode immediately and the conduit will be filled with the seed air instantly, presenting the third way of cavitation.
The experiments [8] [9] show that, where primary xylem conduits were directly connected to air-filled spaces within the pith, inter-conduit air seeding was the primary mechanism. Thus, P_o in the formulas above should be replaced by the internal air pressure P_a, causing some data to be recalculated.
Relationship of the Two Types of Cavitation
For the development of air seeds, Table 1 gives the values of the radii (in bold) of some seeds that have just been sucked into conduits, and their corresponding pressures P_l (in bold), as well as the corresponding values of nRT, r_o, r* and P*_l for the seeds of radii r_o′, r_o and r_1. For a seed of radius r_1 (or r_2), the corresponding r_2 (or r_1) can be calculated using formula (8). Note that the two states of the bubble, of radius r_1 and r_2, are at the same water pressure P_l.
If a seed of radius r = r_pc at P_l = P_lc enters a conduit of radius r_c = 6.501 μm, then from formulas (9) and (10) we get r_pc = 2.740 μm and P_lc = 0.04672 MPa (Table 1, line 2). In the range P_lc ≤ P_l < P_o the bubbles will expand gradually (Table 1, lines 1 and 2), presenting the first way of cavitation. In the range −2P_o < P_l < P_lc, the bubbles of radius r_o′, r_o and r_1 will expand to their respective r*, presenting the second way of cavitation (Table 1, lines 3 → 5). For the seed of radius r_1, as P_l drops it will break up at P*_l with r = r* before its radius reaches r_2. Thus, in the range −2P_o < P_l < 0 the bubble of radius r_2 in the parentheses does not exist.
From Table 1 we can see that the more the amount nRT of a bubble, the larger its Blake critical radius * r and the higher its Blake threshold pressure.
This means that a bubble with more nRT is prone to burst at higher pressure and only the nanobubbles with a small amount of air can exist steadily in larger ranges of water pressures. For example, in Table 1 we can see that the smaller the σ , the smaller the absolute value of * l P , meaning that at higher water pressure an air bubble will burst and a cavitation event will occur easily. Thus, the values in Table 1 should be recalculated.
Conclusions
For the equation of bubble balance, using Shenjin formula, which is simpler than the Caldan's, this paper gets its analytic solutions. The stability of equilibrium of air bubbles was made by the way different from the previous in the article [17].
Two types of cavitation are analyzed further. For the first type of cavitation two ways can occur, which are the same with the first two ways of air seeding except of air reservoirs. Moreover, for the first way of the two types, the range of water pressures is the same. For the second way of the first type the range of water pressures is much wider, or the pressure range equals the pressure sum of the second and third ways of air seeding.
Through the specific data, the relationship between the two types is given.
Nomenclature:
P*_lc: absolute water pressure at which an air bubble of radius r = r_c will burst
r_pc: radius of the pore through which an air seed enters a conduit of radius r_c and will burst at P*_lc
P_lc: absolute water pressure at which an air seed enters a conduit of radius r_c and will burst at P*_lc
F(r): Helmholtz function
A: gas/water interface | 2020-10-28T18:00:07.218Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "a66df1f12a23a664799463460095312ce73230fd",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=102817",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "44ec3422cccea5e6b3016168a64cb756a5f080f1",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
119316811 | pes2o/s2orc | v3-fos-license | Graded character rings, Mackey functors and Tambara functors
Let $G$ be a finite group and $\mathbb{K}$ a field of characteristic zero. The ring $R_\mathbb{K}(G)$ of virtual characters of $G$ over $\mathbb{K}$ is naturally endowed with a so-called Grothendieck filtration, with associated graded ring $R^*_\mathbb{K}(G)$. Restriction of representations to any $H\leq G$ induces a homomorphism $R^*_\mathbb{K}(G) \to R^*_\mathbb{K}(H)$. We first determine that, unless $G$ is abelian, induction of representations does not generally preserve the filtration, so $R^*_\mathbb{K}(-)$ is not a Mackey functor; we propose a modified filtration which remedies this. We then turn to tensor induction of representations, and show that in the abelian case, $R^*_\mathbb{C}(-)$ is a Tambara functor.
Introduction
Let G be a finite group and K a field of characteristic zero, and let R_K(G) be the ring of virtual characters of G over K. Exterior powers of representations turn R_K(G) into a λ-ring, equipped with a so-called Grothendieck filtration. In [Che18], we undertook to compute examples of the graded character ring R*_K(G) associated with this filtration. The structures appearing as graded character rings of finite groups are remarkably complex; there is no Künneth formula for R*_K(G), and computing graded rings of small groups proves challenging. As an example, consider Z[x, y]/(4x, 4y, 2x^2y + 2xy^2, x^4y^2 − x^2y^4), where |x| = |y| = 1 (see [Che18, Prop. 7.2]). This result was obtained by combining the topological properties of the filtration with the functoriality of R*_K(−). We turn here to a problem of a more abstract nature: graded character rings are functorial, and thus for any subgroup H ≤ G, restriction of representations to H induces a well-defined ring homomorphism Res_H^G : R*_K(G) → R*_K(H) (see [Che18, Lem. 4.1]). Does induction of representations induce a map R*_K(H) → R*_K(G)? Such a transfer map would turn R*_K(−) into a Mackey functor, a particularly widespread type of algebraic structure: group cohomology, algebraic K-theory and character rings are all Mackey functors. Among other results, the stable element method of Cartan and Eilenberg (see [CE99]) generalises to all Mackey functors, and would allow us to relate the graded character ring of a group to those of its Sylow subgroups. A concise account of the general theory of Mackey functors is given in [Web].
As mentioned earlier, if S is any Mackey functor, then the following "stable element" result applies: This is Theorem 3.5 in the text. It is possible to "Mackeyfy" graded character rings by modifying the Grothendieck filtration. We define the saturated filtration {F n (G)} n≥0 as the minimal filtration that is preservd by induction of characters and contains the Grothendieck filtration, that is: where {Γ n (H)} n≥0 is the Grothendieck filtration on R K (H). Fortunately, restriction of representations also preserves this filtration, and thus, the associated graded ring R * K (−) is a Mackey functor (see Theorem 4.2). At a first glance, there is no guarantee that R * K (−) is not trivial in some way or other: a lot of the information contained in the Grothendieck filtration could be lost in the process. Reassuringly, both filtrations contain the same information "at infinity", as we show in Theorem 4.4: Theorem 1.3. The saturated filtration and the Grothendieck filtration induce the same topology on the character ring R K (G). This means, in particular, that induction of representations is continuous with respect to the Grothendieck topology, and can be extended to a map of completed rings Ind G H : R K (H) → R K (G). This, combined with the stable elements result, gives us Theorem 4.11 the following analogue to Artin's theorem: Judging from the definition of the saturated filtration, one could expect R * K (−) to remember some information about the subgroup structure of G. An interesting open problem is whether R * (−) could distinguish groups with the same character table and power maps, something the usual graded character ring cannot do.
One downside of the saturated filtration is how complex it is: direct computations seem, for now, out of reach. An interesting class of examples is that of saturated groups, that is, groups G such that the natural map R * K (G) → R * K (G) is an isomorphism. Those include most of the examples we already computed; the following result combines Proposition 5.1, Proposition 5.4, and Proposition 5.6: Theorem 1.5. Groups of order less than 12, as well as abelian groups, and dihedral groups of order 2p for p prime, are saturated.
For non saturated groups, one can use the stable element method, which we do in Theorem 5.7: Theorem 1.6. Let G = P SL(2, p) be the projective special linear group over F p , where p is an odd prime such that p ≡ 3, 5(mod 8). Write: Then: The last problem we treat in this paper is that of tensor induction, a multiplicative map R K (H) → R K (G). Mackey functors equipped with such a multiplicative map (and satisfying certain axioms) are called Tambara functors. In group cohomology, this role is played by the Evens norm; the (ungraded) character ring R K (G) with tensor induction is also a Tambara functor, as we prove in Section 6. In order to explore the connection between tensor induction and the Grothendieck filtration, one needs to understand the behaviour of the multiplicative norm on virtual characters. This is a remarkably complex problem, as there is no known formula for the norm of the sum of two characters, even when those come from actual representations. We follow Tambara's account and, restricting first to normal subgroups of prime index, then to abelian groups, we obtain such a formula. This is the key to prove Corollary 7.10: Let G be a finite group and K a field of characteristic zero. The ring of virtual characters (or character ring) R K (G) over K is the Grothendieck ring of the category of KG-modules; that is, it is the abelian group generated by irreducible representations of G up to isomorphism, with multiplication given by the tensor product of representations. Since K has characteristic zero, representations up to isomorphism are fully determined by their character, thus we'll use both terms interchangeably. The character ring is an augmented ring, with augmentation ǫ : R K (G) → Z sending a character to its degree. Additionally, exterior powers of representations turn R K (G) into a λ-ring: if χ is a character of G over K afforded by a representation ρ, write λ n (χ) the character of the n-th exterior power Λ n ρ of ρ. The operations {λ n } can be extended to the whole ring R K (G), and satisfy the axioms for a λ-ring outlined in [AT69]. For x ∈ R K (G) and n ∈ N put γ n (x) = λ n (x + n − 1), the n-th gamma operation. Let I = ker ǫ be the augmentation ideal. We define the n-th ideal Γ n in the Grothendieck filtration (or Γ-filtration) as the abelian subgroup generated by monomials of the form Then Γ n is a λ-ideal (that is, an ideal that is preserved by λ-operations) in R K (G). We have Γ 0 = R K (G) and Γ 1 = I. Moreover, Γ n · Γ m ⊆ Γ n+m , therefore we can define the graded character ring of G over K as follows: From now on, we write R(G) and R * (G) when K is clear from the context. Note that although our general results are independent of the field K, we always compute explicit examples over K = C. The ring R * (G) is generated by Chern classes of irreducible representations; for the full definition and properties of Chern classes, we refer the reader to [Che18]. Suffice it to say that the n-th algebraic Chern class c n (ρ) of a character ρ is defined as the image in R * (G) of the element It is an element of degree n in R * (G). The definitions of Mackey and Tambara functors, to be given in later sections, are greatly simplified by looking at character rings from the point of view of Gequivariant K-theory. We view a G-set X as a category with an object for each point, and an arrow between two objects (g, x) : x → y for each g ∈ G such that g · x = y. 
A vector bundle is then defined as a functor V between X and the category of K-vector spaces and linear maps; that is, it associates to each x ∈ X a vector space V x , and to each g ∈ G linear maps V (g,x) : V x → V g·x . For an element e ∈ V x , we write g · e ∈ V g·x for V (g,x) e. A functor V then corresponds to the data of each V x and g · e. Let K G (X) be the semigroup of isomorphism classes of vector bundles over X, under direct sum. In the sequel, we restrict ourselves to finite G-sets.
Lemma 2.1. Let X be a transitive G-set with a distinguished point x ∈ X, and let H = Stab(x). Then there is an isomorphism (depending on x) between K G (X) and the semiring of representations R(H).
Proof. Let W be a representation of H and consider the induced representaiton Define a vector bundle on X as follows: for each y ∈ X, write y = g · x and let (V ) y = g · W = g ⊗ W ⊂ V . This depends only on y and the action of g takes V x to V gx , so this is a vector bundle. Conversely, if V is a vector bundle on X, define W = V x . This is a well-defined H-module (since it is stable by H), so W is a representation. These two constructions are mutually inverse.
Remark.
(i) The isomorphism above depends on x; choosing the point y = g·x as a basepoint instead, one obtains the isomorphic representation of gHg −1 which is given by precomposing the action of H on V x by conjugation with g.
(ii) Since every finite G-set can be written as a disjoint union of transitive Gset, this gives us a way to prove general facts about K G (X) by restricting to representation rings.
This vocabulary allows us to generalize the notions of restriction, transfer and tensor induction of representations. Let f : X → Y be a map of G-sets, given by a functor between the categories X and Y as described above. We define: (i) The restriction f * : K G (Y ) → K G (X), as the composition of f and V . In other words: Note that with the shorthand notation mentioned above, for e ∈ V f (x) , we have (f * V ) (g,x) e = g · e, which corresponds to the same element in f * (V ) as in V , only understood in a different fibre. This is particularly intuitive in the case where f : In shorthand notation we have g · ( x e x ) = x g · e g −1 x .
(iii) The norm (or tensor induction) f : Note that although we use its vocabulary and definitions, the full extent of equivariant K-theory is beyond our scope. Thus we will mostly assume that X, Y are of the form G/K for some subgroup K ≤ G, and more often than not we will have
Remark.
(i) Equivalently, (ii) One can check that applying the norm formula to the case of X = G/H and Y = { * } yields the usual tensor induction, as defined eg. in [CR90,§13A] 3. Graded character rings are not Mackey functors Graded character rings are functorial (see [Che18, Lemma 4.1]); in particular, if H is a subgroup of G, restricting representations from G to H induces a well-defined homomorphism R * (G) → R * (H). Naturally, one wonders whether induction of representations from a subgroup H of G also preserves the Grothendieck filtration, and thus gives rise to a well-defined, additive induction map from R * (H) to R * (G). If so, then R * (−) satisfies the axioms of a cohomological Mackey functor (which we define below). In particular, an analogue to Cartan and Eilenberg's result on stable elements in cohomology ([CE99, Th. XII.10.1]) states that each p-primary component R * (G) p of R * (G) is isomorphic to some subring of the graded character ring of its p-Sylow subgroup. This is not the case, and we produce below an example where this property fails. Thus R * (−) cannot be a Mackey functor.
A thorough treatment of the theory of Mackey functors is given in [Web]; let us start with the definition. Let R be a commutative ring and G a group, and let Gset be the category of finite G-sets. A Mackey functor is a pair (S * , S * ) of functors from Gset to R−mod, where S * is contravariant and S * is covariant, and S * (−) and S * (−) are equal on objects. Additionally, we require the following axioms be satisfied: (ii) For every pair Ω, Ψ of finite G-sets, the morphism S(Ω)⊕S(Ψ) → S(Ω⊔Ψ) obtained by applying S * to Ω → Ω ⊔ Ψ ← Ψ, is an isomorphism.
Remark. Alternatively, a Mackey functor S can be viewed a a function from the subgroups of G to R−mod, with, for any two subgroups H ≤ K and g ∈ G, maps The maps are required to satisfy the usual axioms governing conjugation, induction and restriction of representations, as detailed in [Web,§2]. If, additionally, the induction and restriction satisfy Res This second definition makes it easy to check that the (ungraded) character ring R(−) is a cohomological Mackey functor. Thus, if induction preserves the filtration, then R * (−) is also a cohomological Mackey functor, and the following result applies:
and its image consists of the stable elements in S(H) (p) .
This result is a consequence of [Web, Cor. 3.7 and Prop. 7.2]; a more elementary proof, in the case of cohomology, can be found in [AM69, Th. 6.6]. In the case of the alternating group A 4 of order 12, we use the following corollary: We show that the condition of surjectivity on stable elements fails. Note that the following computation relies heavily on the techniques developed in [Che18], to which we refer the reader for any details. We also use the following result: . Let C 2 be the cyclic group of order 2, and let ρ 1 , ρ 2 be the generating representations for Let A 4 be generated by the permutations (12)(34) and (123). There are 4 irreducible complex representations of A 4 : • Of dimension 1: the trivial representation 1, and the representations ρ (resp.ρ) that send (123) to e 2iπ/3 (resp. e −2iπ/3 ) and (12)(34) to 1.
• Of dimension 3: the standard representation θ, which is the quotient of the representationθ acting on C 4 by permutation of the basis vectors, by the trivial representation. The character of θ sends 3-cycles to 0 and (12)(34) to −1.
There are the following relations between the representations: Additionally $\lambda^2(\theta) = \theta$ (by a direct calculation of the exterior power) and $\det(\theta) = 1$.
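The displayed relations were lost in extraction. For the reader's convenience, the standard tensor-product decompositions for these representations (easily checked on characters; they are standard facts and not necessarily the exact display of the original) are
\[
\rho \otimes \bar\rho = 1, \qquad \rho \otimes \rho = \bar\rho, \qquad
\rho \otimes \theta = \bar\rho \otimes \theta = \theta, \qquad
\theta \otimes \theta = 1 \oplus \rho \oplus \bar\rho \oplus 2\theta.
\]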
Proof. The graded character ring $R^*_{\mathbb{C}}(A_4)$ is generated by all Chern classes of irreducible characters of $A_4$, so we start by ridding ourselves of extra generators. Let $x = c_1(\rho)$ and $y = c_2(\theta)$. Then $x = c_1(\rho) = -c_1(\bar\rho)$ and $3x = 0$. Moreover As for the degree 3 generator $c_3(\theta)$, we have: so there is no additional generator in degree 3 and $R^*(A_4)$ is generated by x, y. We have $3x = 0$ by the above, and $12y = 0$ since the order of $A_4$ kills $R^*(A_4)$ (see [Che18, Prop. 2.6]). We now turn to the relation $4y + x^2 = 0$: applying the total Chern class $c_T$ to both sides of (3.3) yields: On the left-hand side, use the splitting principle ([Che18, Prop. 2.3]): we write the character θ as a sum $\theta_1 + \theta_2 + \theta_3$ of linear characters. Looking only at even terms of degree ≤ 6 and keeping in mind that $c_1(\theta) = c_3(\theta) = 0$, we get: Equating (3.4) and (3.5) yields $4y = -x^2$. In particular this means that the order of y is a multiple of 3. To obtain more information, we can use the restriction to the 2-Sylow subgroup.
We have $\mathrm{Res}_H(y) = t_1^2 + t_1t_2 + t_2^2$, which has order 2. So the order of y is a multiple of 2, that is, it is either 6 or 12. To conclude, we use the continuity method described in [Che18, §6]. Let $X = C_1(\rho) = \rho - 1$ and $Y = C_2(\theta) = 3 - \theta$, and let Then $\Gamma_n$ is an admissible approximation for Γ. The evaluation $\varphi_{(12)(34)}$ sends X to 0 and Y to −4, and thus is continuous with respect to the 2-adic topology on $\mathbb{Z}$.
Proof. Let $G = A_4$, and consider its normal, abelian 2-Sylow subgroup H. On the other hand, G acts on $R^*_{\mathbb{C}}(H)$ by cyclic permutations of the elements $t_1, t_2, t_1 + t_2$. The element $z = t_1^3 + t_2^3 + t_1^2 t_2$ is invariant under this action. But z is not a combination of powers of $t_1^2 + t_1t_2 + t_2^2$ since it has odd degree, and thus does not belong to the image of the restriction map. Therefore, $R^*(-)$ is not a Mackey functor.
4. Saturated rings
Theorem 3.5 tells us that induction of representations is not compatible with the Grothendieck filtration. This prompts us to define a modified filtration, taking into account all images of Chern classes of subgroups of G under the induction map. This new filtration retains much of the information of the Grothendieck filtration: in fact, both induce the same topology on R(G). In the sequel, let H, K denote two arbitrary subgroups of G. On the λ-ring R(G), define the saturated filtration $\{F^n\}_n$ as follows: This means that $F^n(G)$ is generated by elements of the form $x = \mathrm{Ind}_H^G\big(\gamma^{i_1}(\rho_1)\cdots\gamma^{i_m}(\rho_m)\big)$, with $i_1 + \cdots + i_m \ge n$ and each $\rho_\ell$ an irreducible representation of H. Let $w(x) = i_1 + \cdots + i_m$ be the weight of x. By definition, induction of representations preserves the filtration F. (i) Induction and restriction of characters preserve the filtration F.
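The display defining $\{F^n\}_n$ was lost in extraction; from the surrounding description ("all images of Chern classes of subgroups of G under the induction map") it is presumably
\[
F^n(G) \;=\; \sum_{H \le G} \mathrm{Ind}_H^G\big(\Gamma^n(H)\big) \subseteq R(G),
\]
which is consistent with the generators listed just above.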
Proof.
(i) By definition, induction preserves the filtration, so we only need to check that restriction does. Let Then $y^s$ is a representation of $^sH$ and We proceed by induction on the order of G. Suppose H < G is a proper subgroup. By the projection formula ([Ser77, §7.2]): Since restriction preserves the filtration and $\mathrm{Ind}_K^G(y) \in F^j(G)$, we have $\mathrm{Res}_H^G\,\mathrm{Ind}_K^G(y) \in F^j(H)$, so: where the inclusion is true by induction. In conclusion: Lemma 4.1 lets us define the saturated graded ring associated to G as: Note that, as representation rings are of the form $K_G(X)$ for some transitive G-set X, we can extend the definition of this filtration to $K_G(X)$ for a general finite G-set X. Then the above discussion means that for every map of finite G-sets f : X → Y, the maps $f^*$ and $f_*$ defined in Section 2 are compatible with the saturated filtration. Proof. This follows from the above.
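The displayed definition of the saturated graded ring did not survive extraction; it is presumably the associated graded of the saturated filtration, which we may write (the notation here is ours, not necessarily the original's):
\[
\overline{R}^{\,*}(G) \;:=\; \bigoplus_{n \ge 0} F^n(G)/F^{n+1}(G).
\]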
Note that R * is actually a Green functor, that is, a Mackey functor with an Ralgebra structure compatible with restriction and satisfying the projection formula. At a first glance, R * (−) seems too good to be true, and we need to ensure we do not lose too much information by modifying the filtration: after all, we could end up with trivial graded rings. It is not the case however, and in fact both filtrations induce the same topology on R(−). We rely on the following result by Atiyah: Pick k (and thus m) large enough that we also have I(G) k ⊂ Γ N (G). Then
Proof. Let U ⊆ R(G) be open for the F -topology, that is, for any
, which completes the proof.
Since $\Gamma^n \subseteq F^n$ for all n ≥ 0, there is a natural map η of graded rings, induced by the identity. Here is a neat consequence of Theorem 4.4: Proof. If η is surjective, then $R^*(G)$ is generated by Chern classes of elements of R(G). Let $P_w$ denote a polynomial in the $C_l(\rho_k)$ of weight w; then any $x \in F^n(G)$ can be written as: where the $\rho_j$'s are irreducible representations of G and $y_n \in F^{n+1}$. But then Definition: when the induction map $\mathrm{Ind}_H^G$ is compatible with the filtration $(\Gamma^n)$, H is said to be Γ-compatible with G.
Lemma 4.6. If the restriction maps $i^* : R(G) \to R(H)$ are surjective for all subgroups H ≤ G, then G is saturated.
So all virtual characters in $F^n(G)$ (which are induced from subgroups of G) are also in $\Gamma^n(G)$, and thus the saturated graded ring coincides with $R^*(G)$.
Remark.
• In Section 5, we use Lemma 4.6 to show that abelian groups are saturated. So R*(−) is a Mackey functor when restricted to abelian groups.
• We show in Proposition 5.6 that the converse of Lemma 4.6 is not true: the dihedral group $D_p$, for p odd, is saturated, but restriction of representations to $C_p$ is not surjective.
The saturated ring R*(G) is generated by Chern classes of irreducible characters of G, as well as classes of the form $\mathrm{Ind}_H^G(c_i(\rho))$ with ρ a virtual character of H ≤ G.
In a sense, the classes $d_i$ quantify the obstruction to G being saturated.
The following result implies that the saturated graded ring of G is completely determined by that of its Sylow subgroups. It is a consequence of [Web,Cor. 3.7 and Prop. 7.2]; for a more concrete proof, see for example [AM69, Th. 6.6].
is injective, and its image consists of the stable elements in $R^*(H)_{(p)}$. A result similar to that of Swan in cohomology (see [Swa60]) can be obtained as a straightforward application of Theorem 4.7.
Corollary 4.8 (Swan's Lemma). If $H \trianglelefteq G$ is a normal subgroup such that $H \supseteq \mathrm{Syl}_p(G)$, then Proof. If H is normal, the stability condition becomes $c_g(x) = x$, that is, x is invariant by the action of G/H.
is an isomorphism.
Corollary 4.10. Let $H = \mathrm{Syl}_p(G)$ be a p-Sylow subgroup. Then the induction map $\mathrm{Ind}_H^G : R^*(H) \to R^*(G)_{(p)}$ is surjective.
Proof. First note that since $R^*(H)$ is p-torsion, the image of $\mathrm{Ind}_H^G$ is indeed contained in $R^*(G)_{(p)}$. Pick an element $x \in R^*(G)$

Proof. By Corollary 4.10, the characters induced from $H_p$ form a dense subset of $R(G)_{(p)}$ for the F-topology, so if X contains a p-Sylow of G for every p then Ind is surjective.
5. Computing saturated rings
We now apply the results of Section 4 by trying our hand at some computations; a number of the groups mentioned in [Che18] (including all abelian groups) are saturated, as we show below. In general, it is much more difficult to compute saturated rings than usual graded character rings, due to the complexity of the saturated filtration. This is where Corollary 4.9 comes into play, as we show with the example of the projective special linear group PSL(2, q). For convenience, when the groups H ≤ G are clear from the context, we denote the induction $\mathrm{Ind}_H^G$ simply by Ind.

Proposition. Abelian groups are saturated.

Proof. Let G be an abelian group and define $\hat G := \mathrm{Hom}(G, \mathbb{C}^*)$. Then any abelian group homomorphism φ : G → H induces a map $\hat\varphi : \hat H \to \hat G$, which is injective if and only if φ is surjective. Additionally, there is a natural isomorphism between G and its double dual $\hat{\hat G}$, given by associating to g the evaluation at g. Now if H ≤ G, then the injection H → G induces a map $\varphi : \hat G \to \hat H$, and also a map $\hat\varphi : \hat{\hat H} \to \hat{\hat G}$. The latter is injective, which means by the above that φ is surjective. Thus the characters of H all come from restrictions of characters of G, and G is saturated.
We also need a result from [GM14]: Let $C_N$ be the cyclic group of order N and ρ a generating representation for $R(C_N)$. Then where $t = c_1(\rho)$.
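The displayed statement did not survive extraction; the result in question is presumably the standard computation of the graded character ring of a cyclic group,
\[
R^*(C_N) \;\cong\; \mathbb{Z}[t]/(Nt), \qquad t = c_1(\rho), \ \deg t = 1.
\]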
Proposition 5.4. The quaternion group Q 8 is saturated.
Proof. The quaternion group contains one subgroup isomorphic to C 2 , which is generated by −1, and three subgroups isomorphic to C 4 , which all contain −1 and are generated respectively by i, j and k. Since all these groups are saturated, we only need to check that the maximal saturated subgroup H = k ∼ = C 4 is Γ-compatible with Q 8 , which we do by showing that, if ρ is the generating representation of R(C 4 ), then each induced character Ind Q8 C4 (C 1 (ρ) n ) is in Γ n (Q 8 ). Note first that Ind Q8 C4 (C 1 (ρ)) ∈ Γ 1 (Q 8 ) = I Q8 . Moreover, the representation ∆ restricts on C 4 to ρ + ρ −1 , and so therefore C 1 (ρ) 2 = Res(−C 2 (∆)), and so Thus, for any n = 2m + l with l = 0, 1: which is an element of Γ l · Γ 2m . This means that C 4 is Γ-compatible with Q 8 , and therefore Q 8 is saturated.
With a similar technique, we can prove that dihedral groups are saturated.
Proposition 5.6. Let p be an odd prime, then the dihedral group D p of order 2p is saturated.
Proof. Since D p = C p ⋊C 2 and C p , C 2 are abelian, these are the maximal saturated subgroups of D p . The signature ε of D p restricts on C 2 to the representation ρ, which generates R(C 2 ). Thus C 2 is Γ-compatible with D p , and we only need to look at C p . Since Res(Y ) = −C 1 (ρ) 2 the same argument as in the proof of proposition 5.4 applies.
Projective linear groups.
We compute the saturated character ring of G = PSL(2, p), the projective special linear group over $\mathbb{F}_p$, where p is an odd prime such that p ≡ 3, 5 (mod 8). Note that we do not use any information about the character table of G: we only need to know those of its Sylow subgroups, which are all abelian. For each prime l dividing $|G| = \frac{p(p+1)(p-1)}{2}$,
let $H_l = \mathrm{Syl}_l(G)$ and $N_l = N_G(H_l)$. For each l, we determine the l-Sylow of G and the action of its normalizer, then deduce the stable element subring. There are 4 possible cases: (i) l = p. Then $H_p \cong C_p$ is generated by the matrix $\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right)$. The normalizer of $H_p$ is the group of upper triangular matrices; this induces $x \mapsto a^2 x$. The subring generated by $x^{\frac{p-1}{2}}$ is stable by this action, and conversely if a is an element of multiplicative order (p − 1), then a monomial $x^m$ being stable by the action $x \mapsto a^2 x$ implies that m is a multiple of $\frac{p-1}{2}$. Thus (ii) l is an odd prime dividing (p − 1). Then $H_l \cong C_{l^i}$ for some integer i, generated by $\left(\begin{smallmatrix} n & 0 \\ 0 & n^{-1} \end{smallmatrix}\right)$ for some n of order $l^i$ in $\mathbb{F}_p^\times$. A straightforward computation gives that $N_l$ is generated by diagonal matrices (which commute with the elements of $H_l$) together with the matrix $\left(\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right)$, which sends an element $h \in H_l$ to its inverse. The induced action on the representation ring is $\rho \mapsto \rho^{-1}$, which translates as $x \mapsto -x$ in the graded ring. Thus (iii) l = r is an odd prime dividing p + 1. We prove that $H_r$ is cyclic. Note that the r-Sylow of G is isomorphic to that of $G' := PSL(2, p^2)$ since the index of G in G′ is coprime to r. Let $\alpha \in \mathbb{F}_{p^2}^\times$ have multiplicative order $r^i$. The matrix $A' = \left(\begin{smallmatrix} \alpha & 0 \\ 0 & \alpha^{-1} \end{smallmatrix}\right)$ generates a cyclic group $H'_r$ of order $r^i$ in G′, which is thus an r-Sylow subgroup. We have $\alpha \notin \mathbb{F}_p^\times$; however, any matrix of G similar to A′ generates an isomorphic group in G. One can take for example $A = \left(\begin{smallmatrix} 0 & -1 \\ 1 & \alpha + \alpha^{-1} \end{smallmatrix}\right)$, the companion matrix to the minimal polynomial of α.
The normalizer $N'_r$ of $C'_{r^i}$ in G′ is a dihedral group of order $p^2 - 1$, generated by all diagonal matrices together with the matrix $\left(\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right)$, which sends A′ to its inverse. The change of basis sending A to A′ allows us to view $N_r$ as a subgroup of $N'_r$, and thus the elements of $N_r$ act either trivially or by inversion on $H_r$.
It remains to show that there exists a matrix S ∈ G such that $S^{-1}AS = A^{-1}$. Let $a = \alpha + \alpha^{-1}$. By a direct calculation, one shows that any matrix of the form $\left(\begin{smallmatrix} -x & y \\ ax+y & x \end{smallmatrix}\right)$ in GL(2, p) satisfies this property, thus S ∈ PSL(2, p) exists if and only if there is a pair $(x, y) \in \mathbb{F}_p^2$ such that $-x^2 - axy - y^2 = 1$. This equation is equivalent to $X^2 + 1 = bY^2$, with $X = x + \frac{a}{2}y$, $Y = y$ and $b = \frac{a^2}{4} - 1$. There are (p + 1)/2 squares in $\mathbb{F}_p$ (including 0), so there are (p + 1)/2 elements of the form $X^2 + 1$, and, if b ≠ 0, then there are also (p + 1)/2 elements of the form $bY^2$. Thus whenever b ≠ 0, the set of elements of the form $X^2 + 1$ and the set of elements of the form $bY^2$ have nontrivial intersection, and there is a solution to $x^2 + axy + y^2 = -1$. Now, b = 0 if and only if $a^2 = 4$, that is, a ≡ ±2 (mod p). But then α is a solution of $t^2 \pm 2t + 1$, that is, $\alpha = \alpha^{-1}$ has multiplicative order at most 2, in contradiction with our assumption. Thus b is always nonzero, which completes the proof. We have: (iv) l = 2. Since p ≡ 3, 5 (mod 8), the 2-Sylow subgroup of G has order 4. There are two cases: • if p ≡ 5 (mod 8), let a satisfy $a^2 \equiv -1 \pmod p$. Then We show that $N_2 \cong A_4$. First, we have $C_G(h_1) \cap N_G(H_2) = \{\mathrm{Id}\}$, as a direct calculation shows, and similarly for $h_2$ and $h_1h_2 =: h_3$. Therefore, if $N \in N_2$ acts nontrivially on $H_2$, it must permute all 3 nontrivial elements. If $T = \left(\begin{smallmatrix} x & -ax \\ x & ax \end{smallmatrix}\right)$, with $x^2 = \frac{1}{2a}$, then $Th_1T^{-1} = h_2$ and $Th_2T^{-1} = h_3$. Both 2 and a are nonresidues mod p since p ≡ 5 (mod 8), and if a were a residue, then PSL(2, p) would contain an element of order 4, contradicting Thus there is an x satisfying $x^2 = 1/2a$. Moreover T is unique up to multiplication by an element of $C_G(H_2) = H_2$, which shows that Then Again, we have $N_2 \cong A_4$ acting by cyclic permutations, generated by In both cases the normalizer acts as cyclic permutations on the nontrivial elements of $H_2$, and thus: Putting all of this together, we get: Theorem 5.7. Let G = PSL(2, p) be the projective special linear group over $\mathbb{F}_p$, where p is an odd prime such that p ≡ 3, 5 (mod 8). Write: $l_1^{i_1} \cdots l_n^{i_n} \cdot r_1^{j_1} \cdots r_m^{j_m}$, with $l_k \mid (p-1)$, $r_k \mid (p+1)$. Then: Remark. For p = 3, this is the saturated ring $R^*(A_4)$.
6. Tambara functors, the ungraded case
After discussing whether the graded character ring functor is Mackey, it seems natural to turn to the theory of Tambara functors, which was introduced by Tambara in [Tam93]; they can be understood as Mackey functors S(−) that are equipped, for each subgroup H ≤ G, with a multiplicative transfer map S(H) → S(G). In cohomology, this is the Evens norm map (see for example [CTVEZ03,Ch. 6]). In the case of graded character rings, tensor induction of representations is a natural candidate for the role of the multiplicative transfer. We must begin, however, with the ungraded situation: the fact that the multiplicative transfer turns K G (X) into a Tambara functor is mentioned without proof in both [Str12] and [Tam93], and we propose here a proof for the sake of completeness.
To define Tambara functors, we need exponential diagrams. Let Gset/X, Gset/Y be the categories of G-sets over X, Y respectively, and let an equivariant map f : X → Y be given. The pullback functor Gset/Y → Gset/X has a right adjoint Π f : Gset/X → Gset/Y , which we now describe. Let p : A → X be a set over X. We construct q : Π f A → Y as follows: where we write sec p (U, A), given a subset U ⊂ X, for the set of all sections of p over U , that is, maps s : U → A such that p • s(u) = u for all u ∈ U .
Then Π f A is a G-set if we define g s : f −1 (gy) → A, x → g · s(g −1 · x), and of course there is an obvious map Π f A → Y . The adjointness property means that, as is easily established, (ii) To show that (S, f * , f ♯ ) is a Mackey functor, we check both axioms from the definition in Section 3. Let The following result by Tambara shows that, in fact, K G (X) is a Tambara functor. For an abelian monoid M , let γM be the universal abelian group with monoid map k M : M → γM , and generators k M (m) for m ∈ M and relations k M (m + m ′ ) = k M (m) + k M (m ′ ) for m, m ′ ∈ M . If M is a semi-ring, then γM has a unique ring structure such that k M is a semi-ring map.
Theorem 6.2 ([Tam93, Th. 6.1]). Let S be a semi-Tambara functor. Then the function which assigns the set γS(X) to each G-set X has a unique structure of a Tambara functor such that the maps k S(X) form a morphism of semi-Tambara functors.
Corollary 6.3. The functor K G (−) has the structure of a Tambara functor.
7. The addition formula
A formula for the norm of the sum of two characters would enable us to compute the value of the norm map on negative virtual characters, a necessary step in determining whether the norm map preserves the Grothendieck filtration. Strikingly, there is no known general formula for the Evens norm of a sum of cohomological classes, or the tensor induction of a sum of characters. Below, we first establish a formula for the sum of two positive characters after [Tam93, §4]; we then use this formula to determine N G H (−ρ) for ρ ∈ R + (H), in the case of a normal subgroup H of prime index in G, which gives us an explicit expression for the norm of a virtual character in this case. We then prove that in the case of abelian groups, the norm map preserves the Grothendieck filtration, and thus R * (G) is a Tambara functor on abelian groups. 7.1. A general formula for positive representations. The following is an application of [Tam93,§4], where Tambara gives a general addition formula for the norm. Let X, Y be G-sets and let f : X → Y be a G-map. As usual, we assume X = G/H, Y = G/K with H ≤ K ≤ G, and f = π H K . Moreover, we can restrict ourselves to K = G, since the tensor induction of a representation does not depend on the larger group. So Y = G/G = •, the one point set. Let: then for a vector bundle W ∈ K + G (G/H), the vector bundle χ(W ) ∈ K + G (V ) associates to each C = {x 1 , · · · , x n } the vector space W x1 ⊗ · · · ⊗ W xn ∼ = W ⊗n , and to each g ∈ G the linear map given by: Note that, since it involves the map t ♯ , the morphism χ is only defined on K + G (G/H) for now. Throughout this section, we determine how to extend χ to vector bundles with negative coefficients, then to all virtual bundles.
This operation does not involve multiplicative norms (that is, it does not involve f ♯ for some map f ), thus it is well-defined on the whole ring K G (V ), and not just the semi-ring K + G (V ). Each fiber in a vector bundle is a representation of the stabilizer of the point above which it sits; for purposes of intuition, we point out that, as a representation of Stab(C), we have where the direct sum is taken over all orbit representatives under Stab(C) of pairs (C 1 , C 2 ) such that C 1 ⊔C 2 = C. Again, since this operation only involves restrictions and inductions, it is defined for virtual characters. By [Tam93,Prop. 4.4], the map χ is is a morphism from the monoid (K + G (H), +) to (K G (V ), ∨). Moreover, for τ, σ ∈ K + G (G/H), we have: f ♯ (σ + τ ) = χ(σ + τ ) G/H = (χ (σ) ∨ χ (τ )) {G/H} .
We now assume that H is a normal subgroup of G. In terms of representations, the situation is as follows: pick a transversal set T = {t 1 , · · · t n } for G/H and let C ⊂ G/H. For a representation ρ ∈ R + (H), write: where ρ ti is the conjugate of the representation ρ by t i ∈ G, so that ρ ⊗C is a representation of Stab C. Note that ρ ⊗C does not depend on the choice of transversal set T , since a different coset representative t ′ i of t i H would be t h i for some h ∈ H, and ρ is invariant under conjugation by an element of H. Moreover, we have ρ ⊗G/H = N G H (ρ). Then we can reformulate the above as: Let us now restrict to the case K = C. Then the irreducible characters of G are one-dimensional and we can apply the above result. | 2018-11-14T18:28:00.000Z | 2018-11-14T00:00:00.000 | {
"year": 2018,
"sha1": "ac8a9402977bc85f91d002db523a5a0f159168f0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3e6b0616c672cd84f194054d53d3bb62e6fb6cfa",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
134551150 | pes2o/s2orc | v3-fos-license | Plant populations and chemical weed control in soybean cross-sowing
Authors declare no conflict of interest.

In soybean production in Brazil, the ends often justify the means. The high market prices of grains have motivated producers to adopt new techniques for cultivation of the legume in an empirical way or without adequate scientific support. Thus, we hypothesized that soybean cross-sowing, at plant populations equal to or twice those used in conventional sowing in parallel rows, may increase soybean yield and decrease weed infestation in crops. In this study, we aimed to evaluate the spatial arrangements of plants associated with the presence or absence of chemical treatment for weed control in the soybean crop. A randomized block design was used with five treatments (four replicates per treatment): 1) uncrossed with the recommended plant population; 2) crossed with the recommended plant population; 3) crossed with double the plant population; 4) crossed with the same plant population and without herbicide; and 5) crossed with double the plant population and without herbicide. Soybeans cross-sowed with the recommended plant population had the same growth as uncrossed soybeans in terms of height and dry matter of shoot and roots, but had a higher leaf area index. Moreover, root nodulation increased in number with soybeans cross-sowed with the recommended plant population and in mass for soybeans cross-sowed with double the plant population, without differences in indirect measures of chlorophyll. Thus, our findings suggested that cross-sowing with the recommended plant population or double the recommended plant population did not aid in weed control and did not increase the yield of soybean grains.
The Paraná State is the second largest soybean producer in Brazil and uses approximately 88% of the grain production areas in the summer harvest for the sowing of the legume. Indeed, in this region, approximately 17 million tons of soybeans were produced in the 2016/2017 summer season, equivalent to 17% of national production (DERAL 2017). Changes in agricultural practices have been made in order to maximize profits and productivity, as necessitated by high market prices. In this economic context, traditional and recommended practices, such as crop rotation and adequate terrain sizing, have been discarded by some producers, and other practices, such as cross-sowing, have been adopted based on empirical findings.
Tests carried out by researchers focused on soybean productivity have supported the application of the cross-sowing technique (Procópio et al. 2013), although the results reported in the literature are controversial. For example, soybean cross-sowing did not increase grain yields in the northern region of Paraná State (Balbinot Jr. et al. 2015a, 2015b, 2016), although in other Brazilian states, such as Mato Grosso do Sul (Lima et al. 2012), São Paulo (Ormond et al. 2015, Silva et al. 2015a, b), and Goiás (Buso et al. 2016, Souza et al. 2016), grain yields increased with cross-sowing of soybeans compared with soybeans in uncrossed rows. In contrast, reductions in grain yield have also been observed with the cross-sowing technique; a spatial arrangement with inter-row spacing of 0.5 × 0.5 m was found to decrease productivity by 8.2% in relation to uncrossed crops with inter-row spacing of 0.4 m (Holtz et al. 2014). In another study, soybean cross-sowing resulted in greater variability in the final population of plants compared with conventional sowing (Balbinot Jr. et al. 2015a) due to soil mobilization and additional compaction with transverse sowing at rows (Balbinot Jr. et al. 2015b). Moreover, greater soil mobilization with cross-sowing of soybeans increases the percentage of soil exposed, mainly with spacing between rows of 0.6 m, which may affect the emergence of positive photoblastic weeds and the loss of water by evaporation and erosion (Balbinot Jr. et al. 2016).
However, producers tend to increase sowing density (Gaudêncio et al. 1990) or reduce the spacing between rows (Heiffig et al. 2010) of soybean crops with the aim of improving and increasing plant emergence, favoring weed control and reduced use of post-emergent herbicides.In both cross-sowed and parallel rows, grain yields increase linearly as the plant population increases (Buso et al. 2016).Reduced spacing between sowing rows provides greater soybean coverage and may decrease weed incidence, thereby maintaining or increasing grain yields (Bianchi et al. 2010).With row spacing of 0.6 m, soybean cross-sowing increased soil cover by 39.1% compared with that of parallel rows (Balbinot Jr. et al. 2016).Higher grain yields with cross-sowing compared with uncrossed soybeans were attributed to better spatial plant arrangements and reduced weed infestation (Ormond et al. 2015, Silva et al. 2015b).
Thus, we hypothesized that soybean cross-sowing in populations equal to or twice as large as those used in conventional uncrossed parallel rows may increase the yields of soybeans, and that populations of similar or larger size in cross-sowing may decrease weed infestation. Accordingly, in this study, we evaluated the effects of the spatial arrangements of plants on weed control in the soybean crop.
MATERIAL AND METHODS
The experiments were carried out in a clayey Oxisol at the Regional Research Center of Ponta Grossa of the Agronomic Institute of Paraná (IAPAR), Ponta Grossa municipality (25°05'42"S, 50°09'43"W, 969 m a.s.l.), Paraná State, Brazil. The region has a humid subtropical climate, with mild summers (Cfb classification according to the Köppen Climate Classification) and uniformly distributed rainfall, without a dry season; the average temperature of the hottest month does not reach 22 °C, and the amount of precipitation is 1,100-2,000 mm year⁻¹, with frequent frosts and intense rains for an average of 10-25 days year⁻¹. Cumulative rainfall data and average maximum and minimum daily temperatures for the experimental period from December 2015 to April 2016 are shown in Figure 1. The experimental area, with black oats (Avena strigosa Schreb.) as the winter crop, was desiccated 30 days before sowing with 2 L ha⁻¹ of glyphosate. The cultivar used was BMX Força RR, with indeterminate growth habit and a recommended population of 250,000 to 300,000 plants ha⁻¹. The experimental design was a randomized block design with five treatments and four replications, as follows: 1) conventional sowing with 260,000 plants ha⁻¹ in parallel rows; 2) cross-sowing with 260,000 plants ha⁻¹; 3) cross-sowing with 520,000 plants ha⁻¹; 4) cross-sowing with 260,000 plants ha⁻¹ without herbicide; and 5) cross-sowing with 520,000 plants ha⁻¹ without herbicide. The plots had dimensions of 4.5 × 8.0 m, with spacing between rows of 0.45 m; these plots were designed with six central lines in each plot, less 1.5 m from each end. A 5-m section was established between the plots to enable transit of machinery for treatments and agrochemical application, which were carried out according to standard recommendations for the crop. At sowing (06/12/2015), the soybeans were fertilized with the recommended amounts of fertilizer in parallel rows (Silva et al. 2015a), according to soil chemical analyses (Pavan et al. 1992) and fertilization recommendations for soybean crops in Paraná State (Oliveira et al. 2007). Thus, 112 kg ha⁻¹ P₂O₅, 40 kg ha⁻¹ K₂O, and seed inoculation with Bradyrhizobium spp. were applied in both parallel rows and cross-sowing.
At 15 days after the emergence of the plants, thinning was performed in order to establish the populations according to the respective treatments.After 30 days of thinning, three samples per plot of 0.09 m 2 each were collected to evaluate the weed fresh matter yields; after drying in an oven with forced air circulation at about 65 °C, the fresh matter yields were converted to dry matter yields.From the R2 to R3 stages or from full flowering at the end of flowering with pods up to 1.5 cm, the heights of the plants were evaluated from the measurement of five plants in the useful area per plot.Additionally, five plants of the useful area were collected, segmented into aerial parts and roots, and dried in an oven at 60°C until a constant mass was reached to estimate the dry matter of the shoot and roots.After washing, the soybean root nodules were counted and dried in an oven with forced air circulation at 45 °C to obtain dry matter of root nodules.In the third leaves of the middle third of 10 plants per plot, total chlorophyll was measured using a Falker chlorophyllometer.Photosynthetically active radiation measurements were performed at the R4 stage, below and above the canopy of the plants, and the leaf area index was calculated.
At the R8 stage, when the crop showed full ripeness (04/13/2017), the useful area of each plot (13.5 m 2 ) was collected for analysis of subsequent grain yields at 13% moisture.
The normality of the data was evaluated using Shapiro-Wilk tests, and homogeneity of variance was evaluated using Bartlett tests. The data were subjected to analysis of variance by F tests at 5% probability, and the means were compared by Tukey tests at 5% probability.
RESULTS AND DISCUSSION
There were no significant differences (p > 0.05) between plant heights in uncrossed and cross-sowed plots, with different plant populations, and with or without herbicides (Table 1); the observed values were within the limits considered ideal for mechanical harvesting, from 80 to 100 cm (Ormond et al. 2015). The vertical growth of soybean plants can be affected by the sowing density (Lima et al. 2012, Balbinot Jr. et al. 2015a), the cultivar employed (Lima et al. 2012), and the plant arrangement (Lima et al. 2012, Santos et al. 2015, Silva et al. 2015a). However, plant height has little agronomic relevance (Balbinot Jr. et al. 2015a), with no correlation (p > 0.05) with grain yield per pod or grain yield of the soybean crop (Bisinotto et al. 2017). Notably, higher plants also allow mechanized harvesting of the crop, without risk of plant lodging (Heiffig et al. 2010).
Table 1. Plant height, shoot dry matter (SDM), root dry matter (RDM), and leaf area index (LAI) of soybean plants grown with uncrossed and crossed sowing with plant populations of 260,000 and 520,000 (2x) plants ha⁻¹, with and without (-H) application of herbicide.

Soybean cross-sowing with the same plant population as in uncrossed sowing produced 64.5% more shoot dry matter (SDM), although the difference was not significant (p > 0.05). Cross-sowing with twice the recommended plant population increased (p < 0.05) the SDM by 137.1% and 119.0% (8,385 kg ha⁻¹ on average) compared with conventional uncrossed soybeans and herbicide-free soybeans cross-sowed with the recommended plant populations, respectively, which did not differ (p > 0.05) from each other (Table 1). Balbinot Jr. et al. (2016) did not observe differences in SDM between cross-sowed and uncrossed soybeans with a determinate growth habit (BRS 294 RR); however, in plants with an indeterminate growth habit (BRS 359 RR), uncrossed rows resulted in 23.2% more SDM than cross-sowed soybeans. A study by Procópio et al. (2013) showed that BRS 359 RR soybeans (exhibiting indeterminate growth) produced more leaves and branches per plant in a cross-sowed system, although this difference did not occur per unit area due to the reduction in plant density evaluated at harvest, caused by the lower emergence of plants with cross-sowing of soybeans. As in the present work, plant populations were established by means of excessive seeding with subsequent thinning, and there were no effects of the treatments on plant height. Therefore, we concluded that cross-sowing of soybeans produced more SDM, probably via enhanced production of leaves and branches. In contrast, Santos et al. (2015) observed that four indeterminate growth soybean cultivars produced an average of 11.7% more SDM in crossed rows than in uncrossed rows.
Root dry matter (RDM) yields represented an average of 31.5% of total plant dry matter, and there were no differences between treatments, possibly due to the high variability of the results (Table 1).
Leaf area index (LAI) was 8.0% higher (p < 0.05) in cross-sowed than in uncrossed soybeans, and there were no differences (p > 0.05) between the different cross-sowing modalities studied (i.e., different plant populations and weed control treatments; Table 1).These results were contradictory to those obtained by Balbinot Jr. et al. (2016), who observed 25.0% more LAI in uncrossed than in cross-sowed indeterminate growth soybean cultivars.Souza et al. (2016) also did not observe a difference in LAI values between cross-sowed or uncrossed soybean cultivars of determined or indeterminate growth habits, indicating that the changes in the spatial arrangement of the crop did not cause morphological changes in plant architecture.Soybeans produce surplus leaves during seed filling and exhibit increased LAI and water demand when resources could be diverted to fill more seeds, resulting in decreased grain yields (Srinivasan et al 2017).
Soybean cross-sowing increased (p < 0.05) the nodule numbers by 49.1% and 63.2% compared with those of uncrossed rows and cross-sowed rows with double the plant population without herbicide, respectively. In contrast, cross-sowed soybeans with or without herbicide did not show increased root nodule mass (p > 0.05) in comparison to uncrossed crops, although soybeans cross-sowed with double the plant population, with or without herbicide, did show a 117.5% increase in root nodule dry mass per plant (p < 0.05). This result was contradictory to the results of a study demonstrating that decreasing the number of plants per row may increase the availability of root exudates capable of promoting rhizobia growth and increasing the rate of biological nitrogen fixation (Luca and Hungria 2014). These researchers observed that increases in the number of plants per row at spacings between 0.5 and 1.0 m did not result in fluctuations in the number of nodules per plant. However, the mass of nodules per plant was higher with 16 plants per linear meter spaced at 0.5 m compared with that with four plants per linear meter spaced at 0.5 m and 16 plants per linear meter spaced at 1.0 m.
In our study, despite the differences in quantities and masses of root nodules between the treatments, there were no differences (p > 0.05) between values of the Falker chlorophyll index (FCI; Table 2).These results diverge from those obtained by Werner et al. (2016), who observed a linear decrease in SPAD index with increased seeding density of indeterminate growth habit soybean.
Table 2. Number and dry mass of root nodules, Falker chlorophyll index (FCI), and fresh (FMW) and dry matter (DMW) of weed shoots in soybean cultivated with non-crossed sowing lines and crossed with plant populations of 260 and 520 (2x) thousand plants ha⁻¹, with and without (-H) application of herbicide. (Columns: treatment; root nodules per plant; nodule dry mass, mg plant⁻¹; FCI; weed fresh and dry matter, kg ha⁻¹.)

The fresh matter yields of weed shoots were much higher (p < 0.05) in treatments with cross-sowing of soybeans without herbicide (average increase: 4,250%). In contrast, cross-sowing of soybeans with or without an increase in the plant population did not reduce weed shoot fresh matter (p > 0.05). Due to the high coefficient of variation (109.0%), the dry matter yields of weeds overlapped in a similar manner; although there were no significant differences (p > 0.05), there was an increase of 357% in weed dry matter shoots with cross-sowing of soybeans without herbicide compared with cross-sowing of soybeans with herbicide (Table 2). After 50 days of sowing, the canopy closure rate was higher with soybean cross-sowing, but before or after 50 days, there were no differences compared with conventional uncrossed soybeans (Souza et al., 2016). Weed samplings were performed about 45 days after soybean sowing, indicating that soybean cross-sowing at a population equal to or twice the recommended population for conventional sowing was not effective for weed control.
With herbicide application, soybean grain yields did not differ (p > 0.05) among treatments with uncrossed soybeans, cross-sowing with the recommended population, and cross-sowing with twice the recommended population.Moreover, there were no differences (p > 0.05) between grains yields in treatments with cross-sowing and the recommended plant population and treatments with cross-sowing without herbicide with twice the recommended plant population (Figure 2).These results were consistent with those obtained by Balbinot Jr. et al. (2015a,b, 2016), who did not observe an increase in grain yield with soybean cross-sowing with different spacing rows and seeding densities.According to Balbinot et al. (2015a), the reduction in the number of plants at harvest was a reflection of the harmful effects of the second sowing operation, transverse to the first one, resulting in a higher seed expenditure compared with that of uncrossed sowing.As in the present work, thinning of plants was carried out to guarantee that in the studied soybean populations, the absence of significant differences could be attributed to grain distributions in the branches and stems, which increase and decrease, respectively, with soybean cross-sowing (Balbinot et al. 2015b).With the treatments without herbicides, significant reductions (p < 0.05) of 69.6% and 49.9% were observed for grain yields in soybeans cross-sowed with the recommended population and cross-sowed with twice the recommended population, respectively.The average decrease in grain yield with the exclusion of the herbicide was 59.5% or 1,495 kg ha -1 of grains (p < 0.05), but the decrease of 46.8% or 1,141 kg ha -1 of grains between crosssowing with herbicide and cross-sowing with double the plant population and without herbicide was not significant (p > 0.05).The absence of chemical management of weeds significantly decreased (p < 0.05) the yields of soybeans of determined and indeterminate growth habits, whereas reduced row spacing increased grain yields occurred only for the latter cultivars (Bianchi et al. 2010).
In conclusion, soybeans cross-sowed with the recommended plant population had the same growth as uncrossed soybeans in terms of height, SDM, and RDM, but had higher LAIs.Root nodulation increased in soybeans cross-sowed with the recommended plant population and in mass in soybeans cross-sowed with twice the plant population, without differences in indirect measures of chlorophyll.Soybeans cross-sowed with the recommended plant population or with twice the recommended plant population did not affect weed control or show increased soybean grain yields.
Figure 2. Soybean grain yields as a function of uncrossed and crossed sowing lines with plant populations of 260 and 520 (2x) thousand plants ha -1 , with and without (-H) application of herbicide.Different letters indicate difference by Tukey test at 5% probability. | 2019-04-27T13:12:01.437Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "741f36ee80118c63f6d923b3f92178049d18d393",
"oa_license": "CCBYNC",
"oa_url": "https://thecpsjournal.files.wordpress.com/2018/09/cps2018010.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "741f36ee80118c63f6d923b3f92178049d18d393",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
165139104 | pes2o/s2orc | v3-fos-license | Enhancement of Power Generation for PV Systems Using Dynamic Tacking System
The objective of this research is to track the power obtained daily (March-September 2017 in Mansoura state) from a moving solar panel system during 12 hrs., in order to enhance the power gain. The automatic sun tracking system provides better alignment of the solar panel with the sun. The aim of designing and implementing an automatic dual-axis tracking mechanism, which changes the panel tilt-angle with both the seasons and the time of day, is to maximize the power gained from the sun. A subsystem is added to determine the exact time of the cleaning process based on a real-time clock (RTC). The reading of power output from the proposed system gives a power gain of 74% more than the power gain of a fixed solar panel. The proposed solar system design has low power consumption, minimum cost, a reliable structure, and is suitable for residential applications. The practical experiments prove that, at all critical points, the moving solar system performs better than the fixed solar system.
Introduction
Solar energy technology is one of the important expected sources of future energy supplies, because it is not only a nonpolluting supply but also an alternative to the limited reserves of non-renewable fuels.
Also, fossil fuels have many side effects due to combustion products that produce pollution which cause acid rains and global warming. Therefore, solar energy conversion is one of the clean energy sources which would enable the world to improve the life quality all over the earth planet [1].
The generated electric power using a photovoltaic power generation system can be used for many applications, such as water desalination, domestic water heating, and power generation [2]. Solar Tracking System technology enhance the efficiency of the solar cells by tracking the sun [3]. There were many ways for maximizing the rate of useful energy; optimizing the conversion of the absorber level by properly choosing the absorber materials, and increasing the incident radiation rate by using tracking systems, and many other methods were reported [4]. Tracking systems are mechanical systems that incorporate mechanics, electronics, and information technology. These mechanisms were driven by rotary or linear actuators, which controlled to ensure the optimal positioning of the PV modules [5,6]. Solar cells were fixed on different conventional places, such that fixed panels which were commonly placed at equal latitude tilt angle. As a result, solar cells were unable to receive maximum light because the position of sun changed with time. Since the energy conversion was more efficient when the rays fell vertically on the solar panels [7].
Sun radiation position varies with both time and seasons of the day. Solar cells are conventional fixed on different places, such that the commonly fixed panels are fixed at equal latitude tilt angle. As a result, solar cells are unable to receive maximum light because the position of sun changes with time. Energy conversion is more efficient when the rays fall perpendicular on the solar panels [8].
Sun tracking systems have been studied for different applications to improve the efficiency of solar systems by adding tracking equipment to these systems. A tracking mechanism must be reliable, able to follow the sun with a certain degree of accuracy, return the solar panels (or flat-plate collectors) to their original position at the end of the day (or during the night), and also handle tracking during cloud cover [9].
There are two basic types of tracking systems: single-axis and dual-axis. The first system spins around its axis to track the sun (facing east in the morning and west in the afternoon).
The tilt angle of this axis equals the latitude of the installation site facing directly to the sun; in consequence to this type of tracking system a seasonal tilt angle adjustment is necessary. Therefore, the dual-axis tracking systems which have two degrees of freedom that acts as axes of rotation. Also, these trackers are able to precisely follow the sun path along the period of one year. This is why the dual-axis tracking systems are more efficient than the single one, in spite that they are more expensive because of the usage of expensive components in the system [10]. Figure 1 illustrates the tracking systems. Generally solar tracking systems will increase the efficiency of the solar panel by 20-62% higher than that of the fixed systems, depending on where you are in the world. Solar trackers were basically microcontroller or sensor based, (in some cases passive tracking systems were used. In microcontroller based systems, different mathematical calculations were used to get the sun's apparent position according to the programming logic which track sun. In case of sensor based systems the sun was tracked depending upon the signal from the sensors integrated in the system. Typically, the sun sensors were mounted on the controller base or around the panel itself, and were used to feed information regarding the amount of sunlight. This information which (was) is in a form of analogue to digital conversion, ADC, was fed into a closed loop control circuit, where, the amount of sunlight was used to continuously monitor the position of the sun [11].
Previous research related to solar tracking systems investigated the control of a two-axis tracking system using LDRs, relays, and a microcontroller; a stepper motor was used to move the panel, and the resulting output power gain was about 30-45% [12]. Another study used a pilot scheme as a search technique, LDRs as detectors for sunrise and sunset, a PIC18F452 microcontroller to control the solar panel and pilot movement, and DC motors [13].
The dual-axis solar tracker in [14] follows the angular height of the sun in the sky in addition to the sun's east-west movement. The gain of the dual-axis tracking system is about 40% compared with the fixed system, and the gain of single-axis tracker systems is about 28% compared with the fixed system, so a compromise between maximum power collection and system simplicity is obtained [14].
A microcontroller-based solar tracking system using a DC gear motor improved efficiency by 24%. That tracking system tracks the sun in a continuous manner and is more efficient and cost-effective in the long run [15].
Problem Description
The conversion principle of solar light into electricity, (which is called Photovoltaic, PV, conversion), is not new, but the efficiency improvement of the PV conversion still one of top priorities for many academic and/or industrial research groups all over the world. Among the proposed solutions for improving this efficiency is solar tracking [16]. Trackers direct solar panels or modules toward the sun for changing their orientation through the day for maximizing energy capture. The accumulation of dust on the surface of solar panel reduces the efficiency by 30% in high dust areas. A self-cleaning system is also, proposed to maintain the stability of the output power all over the year.
The Proposed System Description
The project, which is based on a microcontroller and light sensors, develops a dual-axis sun tracking system in which the design and study of a seasonal angle (tilt angle) under different conditions is considered.
The main objective of the proposed design is to develop the performance of sun tracking system to improve the efficiency of overall electricity generated from the solar energy. Using a proposed dual-axis tracking at two conditions only of seasonal angle (tilt angle) which changes the daily at noon time and at the beginning of the changing seasons (like summer or winter). The PV solar panel will be tilted around the x-axis through June 21 to December 21 (2017) in one direction and through December 22 to June 20 in the opposite direction [17]. The electromechanical system of this proposed tracking system is very simple and easy used in residential societies. It consists of two drivers, (DC-motors), of 12 V, the first is for adjusting the tilt angle and the second is for the east-west, E-W, tracking.
The main components of the control circuit are: LDRs, potentiometers, a Dual Full-Bridge Driver L298, an RTC, a PIC microcontroller, batteries, a charge regulator, a low-power liquid crystal display (LCD), and DC geared motors. Each of the three LDRs acts as a sensor; one is for judging the weather (cloudy or sunny), and the others are responsible for tracking the sun from east to west. The PIC18F452 program calculates the voltage difference between the east and west sensors and gives a pulse to the L298 for moving both the E-W motor and the north-south (N-S) linear actuator motor.
The effect of dirt and dust on the efficiency of the generated electricity, which determines the maximum output of the solar panel, is considered. The microcontroller software is developed to read data from the RTC in order to determine the cleaning periods (solving the problem of reduced output power due to dust accumulation) and to update the PV panel position through both the seasons and the hours of the day.
The comparison between the proposed dual-axis tracking system and the fixed system reveals that the former increases the efficiency of generating electricity by 65% over that of the fixed system.
System Design
The controller board consists of three parts as shown in Figure 2; the PIC microcontroller, the sensing part, and the motors. The tracking proposed system is a combination of the active and passive tracking systems. It is composed of photoelectric tracking of daily and tilt angles. It has advantages of all the advantages of the passive and active systems in order to make the proposed system more accurate and stable.
The proposed system consists of three LDRs which produce the input signals to the PIC unit; as clear in item 3, moreover day/night are determined and consequently it can deal with the problem of battery usage; whether the main or the spare battery.
The microcontroller gives the output signals required to drive the DC motors (geared and linear actuator) to adjust the panel position according to the schedule of the natural direction of the sun and to adjust the tilt angle. The RTC gives the current date and time to the microcontroller using the inter-integrated circuit (I²C) transmission method. The microcontroller's software is developed to adjust the tilt-angle from the beginning of March 30 and continuing until Sept. 12 (2017).
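A minimal sketch of how the RTC data might be read over I²C follows; the register map is that of the DS1307 (time and date stored in BCD at registers 0x00-0x06, 7-bit address 0x68), while the low-level helper i2c_read_reg() is an assumed hardware-abstraction routine, not a call from a specific library or from the authors' code.

/* Sketch: read the current date/time from a DS1307 RTC over I2C. */
typedef struct {
    unsigned char sec, min, hour;    /* time of day   */
    unsigned char date, month, year; /* calendar date */
} RtcTime;

/* assumed HAL call: read one register of the I2C device at 7-bit address 'dev' */
extern unsigned char i2c_read_reg(unsigned char dev, unsigned char reg);

#define DS1307_ADDR 0x68u            /* 7-bit I2C address of the DS1307 */

static unsigned char bcd_to_bin(unsigned char b)
{
    return (unsigned char)((b >> 4) * 10u + (b & 0x0Fu));
}

void rtc_read(RtcTime *t)
{
    t->sec   = bcd_to_bin(i2c_read_reg(DS1307_ADDR, 0x00u) & 0x7Fu);
    t->min   = bcd_to_bin(i2c_read_reg(DS1307_ADDR, 0x01u));
    t->hour  = bcd_to_bin(i2c_read_reg(DS1307_ADDR, 0x02u) & 0x3Fu); /* 24-hour mode */
    t->date  = bcd_to_bin(i2c_read_reg(DS1307_ADDR, 0x04u));
    t->month = bcd_to_bin(i2c_read_reg(DS1307_ADDR, 0x05u));
    t->year  = bcd_to_bin(i2c_read_reg(DS1307_ADDR, 0x06u));
}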
More detailed schematic diagram of the proposed automatic solar tracking is shown in the following Figure (Figure 3), where its main components are summarized as;
Sensors and Limit-Switches
The LDRs are placed as shown in Figure 4. The E-W LDR sensors are separated by a holder that casts a shadow on one of the LDRs if the solar panel is not perpendicular to the sun rays, resulting in a difference between the resistance values of the two LDRs. See Figure 5.
Some Improvements were added to raise the sensitivity of light by laying the LDR inside a plastic tube. so as to increase the shadow. The ratio between the height of the holder and that of the tube is taken as 1:2.
Each LDR sensor is placed in series with a resistor of 10 kΩ to form a voltage divider, whose output is connected to an analog-to-digital converter (ADC) pin of the PIC. The output analogue voltage of this combination is given by Eq. 1, Vout = VReference × 10 kΩ / (RLDR + 10 kΩ) (1), where VReference is equal to VDD, considered +5 V. These analogue voltages are converted into digital values by the 10-bit ADC according to Eq. 2, Digital value = (Vout / VReference) × 1023 (2).
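A short sketch of how Eq. 1 and Eq. 2 might be evaluated in C follows. The 5 V reference follows the text and the 10-bit full scale corresponds to the PIC18F452 ADC; the divider orientation (fixed resistor on the output leg, so more light gives a higher voltage) and all names are assumptions used for illustration.

/* Sketch: LDR voltage-divider output (Eq. 1) and its 10-bit ADC reading (Eq. 2). */
#define V_REF         5.0f      /* VDD used as ADC reference, per the text */
#define R_FIXED       10000.0f  /* series resistor, ohms                   */
#define ADC_FULLSCALE 1023.0f   /* 10-bit ADC of the PIC18F452             */

/* Eq. 1: divider output as a function of the LDR resistance */
float ldr_output_voltage(float r_ldr_ohms)
{
    return V_REF * R_FIXED / (r_ldr_ohms + R_FIXED);
}

/* Eq. 2: the digital value the PIC reads for a given analogue voltage */
unsigned int adc_counts(float v_out)
{
    return (unsigned int)(v_out * ADC_FULLSCALE / V_REF);
}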
Based on both the E-W LDRs and the weather LDR, the PIC activates the E-W motor in the appropriate direction when the output voltage of the weather LDR is greater than or equal to a threshold value (an integer). If the two E-W LDR sensors give the same output voltage, the motor is stopped; if the output voltage of the weather LDR is less than the threshold value, the day is considered cloudy [18].
If the atmosphere remains cloudy for more than 60 minutes, the E-W motor stops; the system then moves the solar panel to the default position, set at an angle of 45° towards the south (as used in fixed systems at the Egyptian site), and the search process is resumed later.
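The decision rule described in the two paragraphs above could be sketched as follows. The threshold, dead-band, ADC channels and motor helper functions are placeholders chosen for illustration, not the authors' actual code or constants.

/* Sketch of the east-west tracking decision; intended to be called about
   once per minute. adc_read(), motor_east(), motor_west(), motor_stop()
   and goto_default_position() are assumed hardware helpers. */
#define WEATHER_THRESHOLD 300u   /* illustrative ADC counts                 */
#define DEADBAND          10u    /* ignore tiny east/west differences       */
#define CLOUDY_LIMIT_MIN  60u    /* park the panel after 60 cloudy minutes  */

extern unsigned int adc_read(unsigned char channel);
extern void motor_east(void);
extern void motor_west(void);
extern void motor_stop(void);
extern void goto_default_position(void);   /* 45 degrees towards the south */

void track_once(unsigned int *cloudy_minutes)
{
    unsigned int weather = adc_read(0);    /* weather LDR */
    unsigned int east    = adc_read(1);    /* east LDR    */
    unsigned int west    = adc_read(2);    /* west LDR    */

    if (weather >= WEATHER_THRESHOLD) {    /* sunny: follow the shadow */
        *cloudy_minutes = 0;
        if (east > west + DEADBAND)      motor_east();
        else if (west > east + DEADBAND) motor_west();
        else                             motor_stop();
    } else {                               /* cloudy or night */
        motor_stop();
        if (++(*cloudy_minutes) >= CLOUDY_LIMIT_MIN)
            goto_default_position();       /* re-search resumes later */
    }
}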
Main Controller and Motor Drive Circuit
Control board is responsible for giving the required orders processing information coming from the light sensors, and from the other parts, and the motor drive circuit consists of three transistors, L298 dual H-bridge and external bridge of diodes, as shown in Figure 6 to control the direction through operating motors.
The tracking system is controlled by the microcontroller with the necessary interface. Limit switches are used to bring the panel back to the morning position after each day without human interference [19]. The PIC controls the rotation of the platform (bidirectionally) and sends signals to the DC-geared motor and the linear actuator to make the solar panel perpendicular to the Sun. The DC-geared motor is connected to the L298 to control its rotation; i.e., when the terminal voltage of out1 or out2 is positive, the motor turns either clockwise or anticlockwise, while the linear actuator is driven by the signals from terminals out3 and out4 [20]. Figure 7 shows the hardware circuits of the controller and the L298 motor drive. The external bridge of diodes D1 to D4 is made of four fast-recovery elements that must be chosen with a forward voltage, VF, as low as possible at the worst case of the load current. The brake function (fast motor stop) requires that the absolute maximum rating of 2 A must never be exceeded.
The DC-motor windings may cause electrical spikes during the switching process (on and off); this problem may cause rebooting or lock-up of the PIC. Therefore, the external diode bridge for each motor is used as a protection circuit to solve this problem, especially when inductive loads are driven; for the control system block diagram, refer to the previous figure.
Self-Cleaning System Design
Cleaning the solar panels is also a problem. The normal way to clean the solar panels is to wash them, which takes time and requires paying a cleaning agency. As the cleaning should be repeated frequently, this means spending more and more money on the cleaning process. Dust decreases the output PV energy, so in [21] a mathematical relationship model was presented to address the problem of lost electrical energy output from a PV power plant, and a simple self-cleaning process was proposed which gives good results [21]. The proposed cleaning system is shown in the following figure (Figure 8). Its idea is a plastic pipe which sprays water through perforated holes of 1 mm towards the solar panel surface. The microcontroller reads data from the DS1307 RTC and gives a command to open the water valve according to a specific timetable, which allows water to flow from the top to the bottom of the entire surface, completing the cleaning process.
The proposed cleaning interval is based on the observed difference in efficiency, which reaches up to 18% after only 15 days of dust and dirt accumulation on the surface of the panel.
The cleaning process is executed before the end of the night, when the solar panel is wet with dew drops that have penetrated the dust particles accumulated on the surface of the panel. Once the water is sprayed, nearly 90% of the soiling is removed.
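A sketch of the timetable check for the self-cleaning valve follows. The 15-day interval follows the text; the pre-dawn hour, the spray duration and the helper functions are assumptions for illustration only.

/* Sketch: open the solenoid valve for a short wash cycle before dawn,
   roughly every 15 days, based on the RTC reading. days_since_clean is
   assumed to be incremented once per day elsewhere (e.g., at midnight). */
#define CLEAN_INTERVAL_DAYS 15u
#define CLEAN_HOUR           4u   /* illustrative pre-dawn hour   */
#define WASH_MINUTES         2u   /* illustrative spray duration  */

extern void valve_open(void);
extern void valve_close(void);
extern void delay_minutes(unsigned int m);

void cleaning_task(unsigned int *days_since_clean, unsigned char hour_now)
{
    if (*days_since_clean >= CLEAN_INTERVAL_DAYS && hour_now == CLEAN_HOUR) {
        valve_open();              /* water flows from the perforated pipe */
        delay_minutes(WASH_MINUTES);
        valve_close();
        *days_since_clean = 0;
    }
}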
Mechanical Design and Hardware
Using the proposed mechanism, the module rotates according to the movement of the sun so that the sun rays fall exactly perpendicular to the module throughout the day. This increases the power generated by the photovoltaic cells in the module, thereby increasing the efficiency. The time between sunrise and sunset is approximately 12 hours, over which the daily angle is approximately 180°; the mechanism can keep the PV panel perpendicular to the sunlight during that period and automatically return it using the proposed control system. The program controls the mechanism either from the light-sensor signal or from the RTC: the sun completes its half revolution (180°) in 12 hours, so the sun rotation per hour = 180°/12 = 15°/hour. This clock-based method is only used when the microcontroller cannot receive any usable signal from the LDR sensors [22,23]. The operation of the dual-axis tracker is similar to that of the single-axis one, but the former captures more solar energy and does so more effectively because it rotates about the horizontal as well as the vertical axis. The proposed model for the dual-axis tracker is shown in Figure 9.
Figure 9 shows the arrangement of the proposed dual-axis tracker mechanism; the hardware consists of a support structure on a base platform fitted with a DC motor.
The support structure (1), designed with the proposed dimensions, is capable of remaining steadfast even in poor weather conditions and provides two degrees of freedom to track the sun in tilt and azimuth. A circular shaft (2) of 3-inch diameter is placed inside the base tube; it is connected at the bottom to the DC-geared motor and at the top to the platform that supports the solar panel holder. An O-ring (3) facilitates the shaft movement. The DC motor (4) controlling the horizontal (E-W) movement is placed at the lower end of the base. A linear actuator (5) controls the vertical movement; it is supported on the shaft with a holder and coupled to the stand frame. A solenoid valve (6), installed on one leg of the base, controls the water flow at cleaning times. A plastic water pipe (7) with many 1-mm holes directed toward the panel surface sprays the water once the solenoid valve is activated. A wooden light-sensor stand (8) is installed on the panel frame and carries the light-sensor circuit. An 80-W mono-crystalline solar panel (9) is installed on the platform frame. The panel platform frame (10) is made of aluminum, and the panel is fixed to it with solar panel holders (11). The hardware components and mechanical materials are listed in Table 1.
Control Software Program
The proposed control software was developed to determine the optimum position of the panel during daylight hours, i.e., how far it deviates from the position of maximum output power. It was also developed to adjust the tilt angle for the different seasons of the year, to remove accumulated dust from the surface of the solar panel at specific time intervals, and to track the panel to the optimal position on cloudy days [24]. The program for the solar tracker is written in the C language using the Mikro C compiler for PIC. Figure 10 shows the proposed flow chart of the control software.
If the output voltage of the weather LDR is above the threshold value, the day is sunny and there are two possibilities: either the E-LDR is in shadow or the W-LDR is in shadow. The PIC then commands the motor to rotate the panel towards the east or towards the west according to which of the E-W LDRs is shadowed, or stops the motor if no shadow is found. If the output voltage of the weather LDR is below the threshold value, there are again two possibilities: either night has fallen or the weather is cloudy, i.e., both sensors are in shadow.
At night the PIC commands the motor to move the tracker to the reset position to wait for the sun from the east, and the PIC then enters sleep mode to save power.
When the weather is cloudy, the system automatically points the solar panels to the best position for noon, effectively turning the sun-tracking system into a fixed solar system. The best position is computed by the PIC using the light-sensor readings, the limit switches, and the RTC. In this way, the method deals properly with the situation in which the system cannot find the best position on cloudy days [25].
Reference [25] summarizes recent developments and challenges according to the respective functions of the vital cell components (sensitizer, substrate, electrolytes and counter electrode, semiconductor film, etc.) as well as their effects on photoelectric conversion efficiency. When the light intensity cannot be detected by the LDR sensors, the system checks the clock. If the time is between 6 am and 6 pm, the system is set to the cloudy state: the program rotates the panel to the default optimal position angle using both the limit switches and the RTC and turns off the energy-consuming parts. When the weather becomes sunny, the state of the system changes and it searches for sunlight again. When the battery is discharged, the system automatically switches to the spare battery.
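The decision sequence above can be restated as a short logic-level sketch. The Python code below is only an illustration of the flow chart in Figure 10: the real program runs on the PIC in Mikro C, the threshold, dead-band and direction convention are hypothetical, and the actual rotation direction for a shaded LDR depends on the sensor geometry and motor wiring.

```python
# Logic-level sketch of the tracking decision flow (cf. Figure 10); not the PIC firmware.
WEATHER_THRESHOLD = 2.5   # volts; hypothetical sunny/cloudy threshold
DEADBAND = 0.1            # volts; hypothetical tolerance for "no shadow difference"

def track_step(weather_v, east_v, west_v, hour):
    """Return one tracking action given LDR voltages and the RTC hour."""
    if weather_v > WEATHER_THRESHOLD:                 # sunny day
        if abs(east_v - west_v) < DEADBAND:
            return "stop_motor"                       # panel already faces the sun
        # turn toward the brighter side (convention depends on sensor placement)
        return "rotate_east" if east_v > west_v else "rotate_west"
    if 6 <= hour < 18:                                # daytime but low light: cloudy state
        return "move_to_noon_position"                # fixed angle via limit switches and RTC
    return "reset_to_east_and_sleep"                  # night: park facing east, PIC sleeps
```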
Experimental Results
The designed tracking system was placed facing the sun's radiation. Table 2 shows the voltage, current, and output power obtained from both the fixed solar panel and the proposed solar tracking system, under two tilt-angle adjustment conditions, at different times over a day. Figure 11 compares the electric power characteristic curves of the fixed solar panel and the proposed solar tracking system; it shows that the proposed solar tracking system receives more sunlight and consequently generates more power than the fixed solar panel. The results also show that the output performance of the solar array is significantly improved through an optimized layout and that the output power (energy) decreases when thermal effects are taken into account [25].
Another factor that affects the output efficiency is the dust that accumulates on the surface of the solar panel. Readings taken under accumulated dust were compared with the results obtained after cleaning the panel with the proposed cleaning system: the average output power under accumulated dust was 66.63 W, whereas the output power on the next day, after automated cleaning, was 76.50 W, an improvement in power gain of 18.366%.
Conclusion
An efficient solar tracking system with a real-time clock, based on the proposed PIC-microcontroller design, has been developed and described. The proposed system provides a variable indication of the panel's relative angle to the sun by comparison with predefined measured readings.
The tracking mechanism is capable of tracking the sun automatically so that the direction of beam propagation of solar radiation is perpendicular to the PV panel. The mechanical structure is simple and reliable. The controller circuit has been designed with a minimal number of components and is integrated on two boards for simple assembly. Using this proposed system, the solar tracker successfully keeps the panel sufficiently perpendicular to the sun. The average power gain obtained over that of the fixed system was in excess of 74.1545% during the months from March to September, and the proposed self-cleaning system improves the output power of the dual-axis tracking system by 17.83%. The proposed design achieves low power consumption, high accuracy, and low cost. The constructed system can be applied in residential areas for alternative electricity generation, especially for low-power appliances. | 2019-05-26T13:18:26.015Z | 2019-05-08T00:00:00.000 | {
"year": 2019,
"sha1": "34d7f443b721089a0d132b437db8c4337330f81a",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijecec.20190501.11.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a1b86af863eb83cae05128f5cc5a65fc51aad395",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
216048774 | pes2o/s2orc | v3-fos-license | DUB-independent regulation of pVHL by OTUD6B suppresses hepatocellular carcinoma
Oxygen is vital for most living organisms. During the course of evolution, animals have developed a highly conserved and elegant pathway to regulate oxygen sensing that converges on the heterodimeric transcription factor called hypoxia-inducible factor (HIF), which contains HIF-1α, a labile alpha subunit and HIF-1β, a stable beta subunit (Wang et al., 1995;Kaelin & Ratcliffe, 2008). In the presence of oxygen, HIF-1α is hydroxylated on the proline-402 and proline-564 residues by the family of Egg-Laying Defective Nine dioxygenases (EglN), which are also called Prolyl Hydroxylase Domain (PHD) proteins (Bruick & McKnight, 2001;Epstein et al., 2001;Ivan et al., 2002). The proline hydroxylation post-translational modification subsequently recruits the Cullin 2 VHL E3 ubiquitin ligase complex which comprises the von Hippel-Lindau tumor suppressor (pVHL), Elongin B, Elongin C, Rbx1 and Cullin 2 (Zhang et al., 2019). Specific recognition of the proline-hydroxylation modification by Cullin 2 VHL leads to the ubiquitination and subsequent proteasomal degradation of HIF-1α. As such, under low oxygen conditions, deficit in the proline hydroxylation of HIF-1α would lead to its stabilization and activation, thus promoting the transcription of hundreds of target genes, such as vascular endothelial growth factor and erythropoietin (Zhang et al., 2019). These HIF-1α target genes normally serve to promote acute or chronic adaptation to hypoxia, facilitate angiogenesis and thus favor the growth of solid tumor (Wilson & Hay, 2011).
pVHL functions largely as a tumor suppressor and germ line mutations in the VHL gene cause von Hippel-Lindau disease, a hereditary neoplastic disease associated with clear-cell renal-cell carcinomas (ccRCCs) (Gossage et al., 2015). Disruption of VHL, by somatic mutation, hypermethylation of its promoter or chromosomal deletion, is the most recurrent mutation in sporadic ccRCC (Gossage et al., 2015). However, the function and regulation of pVHL in other cancer types such as hepatocellular carcinoma (HCC) remains largely elusive.
Moreover, although pVHL dictates the ubiquitination and degradation of HIF-1α, pVHL itself is also unstable and actively undergoes ubiquitination and degradation. Several ubiquitinating enzymes such as E2-EPF ubiquitin carrier protein (UCP) (Jung et al., 2006) and WD repeat and SOCS box-containing protein 1 (WSB1) (Kim et al., 2015) have been previously reported to regulate pVHL protein stability through promoting its ubiquitination and degradation. Although the VDU1 deubiquitinase (DUB) has been reported to interact with pVHL, it was validated as a pVHL downstream substrate (Li et al., 2002). However, the identity of the physiological DUB that stabilizes pVHL by antagonizing its ubiquitination process remains largely unknown. In a recent remarkable study published in Advanced Science, Lingqiang Zhang group reported the ovarian-tumor domain containing protein 6B (OTUD6B) in regulating pVHL protein stability to impact HCC metastasis (Liu et al., 2020).
Liver cancer in which HCC is the major form is the third leading cause of cancer deaths in the world, and more than 50% of HCC patients are in China (Bray et al., 2018). Using siRNA-based targeted screening, the authors found that OTUD6B, but not other OTU family members, could significantly suppress HCC cells migration and metastasis. To explore the underlying molecular mechanism, through RNAsequencing, they found that HIF-1α-related transcriptional signatures are relatively enriched in OTUD6B knockdown cells. As such, depletion of endogenous OTUD6B leads to stabilization of HIF-1α, while ectopic over-expression of OTUD6B promotes the ubiquitination of HIF-1α. These results coherently suggest that it may serve as a negative regulator of HIF-1α.
Xiaoming Dai and Jing Liu have contributed equally to this work.
Given the fact that pVHL is the well-characterized upstream negative regulator of HIF-1α, the Zhang group went on to explore the potential regulation of pVHL by OTUD6B. Indeed, further biochemical studies showed that OTUD6B binds directly with pVHL rather than HIF-1α. More importantly, OTUD6B protects pVHL from proteasome-dependent degradation by decreasing pVHL Lys48-linked ubiquitination, but this function appears to be largely independent of OTUD6B's enzymatic activity. Moreover, mutant forms of OTUD6B with deletion of the OTU domain or mutation of the putative catalytic active sites could still suppress the ubiquitination of pVHL, which is consistent with a previous report showing that OTUD6B is incapable of cleaving any di-ubiquitin in vitro (Mevissen et al., 2013). Instead of the OTU catalytic domain, the N-terminus of OTUD6B seems to play the major role in binding and protecting pVHL from degradation. Hence, it is possible that OTUD6B functions as a scaffold to couple pVHL with Elongin B/C to form a stable Cullin 2 VHL E3 ligase complex, which protects pVHL from proteasomal degradation. On the other hand, depletion of OTUD6B results in the dissociation of the Cullin 2 VHL complex and the degradation of pVHL, presumably by known upstream E3 ligases such as WSB1 (Kim et al., 2015) or E2-EPF-UCP (Jung et al., 2006). Consistent with this mechanism, over-expression of pVHL could antagonize the effects of OTUD6B depletion on HCC cell migration and metastasis. However, further investigation is warranted to identify the physiological DUB that can remove the polyubiquitin chain from pVHL to antagonize the functions of the E3 ligases towards pVHL. Given that OTUD6B is integrated into the Cullin 2-Elongin B/C complex, it is interesting to speculate whether it can form additional E3 complexes besides pVHL to play a more general role together with Cullin 2-Elongin B/C. In addition, why does pVHL only bind with OTUD6B but not other OTU family members, such as OTUD6A? To this end, further structural study of the differences between the N-terminal domains (NTDs) of OTUD6B and OTUD6A could provide more insight. Furthermore, the in vivo biological function of OTUD6B remains unclear, and Otud6b knockout mice will be very helpful to address it in future studies. Interestingly, in keeping with an important role of OTUD6B in the regulation of pVHL as a critical component of the oxygen-sensing pathway, OTUD6B expression was markedly induced under hypoxic conditions, suggesting that OTUD6B may be a transcriptional target of HIF. Through luciferase reporter assays and chromatin immunoprecipitation assays, the authors found that HIF-1α binds to the promoter of OTUD6B. Hence, these data suggest that, as a transcriptional target gene of HIF, OTUD6B expression is induced under hypoxic conditions to stabilize pVHL and promote the degradation of HIF-1α, thus forming a negative feedback loop that regulates HIF activity and oxygen-sensing homeostasis.
Taken together, this work reveals a new layer of molecular mechanism for the stability regulation of the pVHL tumor suppressor and highlights its potential clinical importance in HCC metastasis and treatment. To this end, OTUD6B was identified through an siRNA screen as a new subunit of the Cullin 2 VHL complex, which functions to promote the binding between pVHL and Elongin B/C, thereby protecting the complex from proteasome-mediated degradation (Fig. 1). This elegant work therefore adds a new layer to the regulation of the oxygen-sensing machinery and sheds light on targeting the hypoxic microenvironment for HCC therapy.
ACKNOWLEDGMENTS
This work was supported by the NIH grants R01CA177910 and R01GM094777 to WW.
COMPLIANCE WITH ETHICS GUIDELINES
Xiaoming Dai, Jing Liu and Wenyi Wei declare that they have no conflict of interest. This article does not contain any studies with human or animal subjects performed by any of the authors.
OPEN ACCESS
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2020-04-22T14:40:22.443Z | 2020-04-22T00:00:00.000 | {
"year": 2020,
"sha1": "dec718bd174111e44118a17e08f1d719439f4c55",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13238-020-00721-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dec718bd174111e44118a17e08f1d719439f4c55",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
264407276 | pes2o/s2orc | v3-fos-license | Craig Driver & Ross Warren (architects) will present examples of an innovative waymaking (wayfinding) concept from a current development in Norwich, UK
Abstract A new interpretation of the normalised "Wayfinding" design task offers the opportunity to become an important element of the larger clinical and architectural project for a new "core" expansion of a large regional psychiatric hospital in the South-East of the UK. We call this new approach Waymaking, as it goes beyond signage, leveraging our deep-set knowledge and understanding of the entire project at all scales. Waymaking at the Rivers Centre for Mental Health (Rivers) begins with the exploration of movement narratives into and around its larger site. It turns a classic design task into a design opportunity on all scales, starting with an urban design and planning perspective, through to the architectural and landscape design decisions outside of the building and into the specific on-ward atmospheres in a manner integrated with the detailed interior design decisions of colour, built-in furniture and others. Rivers has been carefully composed out of existing structures as well as smaller new-build and extension buildings. These are all set within a large, sloping site of noteworthy natural beauty. As such, Rivers can well be understood as a hillside village or campus of health, rather than as a traditional "hospital." As a health village, Rivers provides spatial sequencing: the landscape design directly introduces a series of smaller, more human-scale spaces, both built and natural, all of which together aid in orientation and identity across the site. This will help support the daily use of the buildings by all stakeholders. This strategy has been "baked in" to the architectural design as well: strategically distributed retreat/recovery spaces allow room for de-escalation or relaxation. These can be found in the form of regular niches in the hallways and "porch" entrance spaces, usually with built-in benches and bespoke lighting elements. In addition to creating orientation affordances, these also provide opportunities for neurodivergent persons (i.e., ASD, learning disabilities, etc.) to better understand and master independent movement around the Centre. Disclosure of Interest None Declared
Abstract: Sociopetal design methods can offer interesting means to support therapeutic concepts within ward environments. They can help to forge group identities through offering patients, staff and visitors opportunities to identify with the spaces they inhabit. "Sociopetal space" has been defined as "spaces which help bring people together"; but how does this actually work and what role can these types of spaces play in a hospital ward setting? Some of these elements operate at a detail level and can be rather simple to deploy. Normalising the environment by making "regular" design decisions, such as by using real rather than simulated materials (i.e., actual wood rather than "wood patterned" furniture), or through offering a mix of lighting (i.e., artificial and natural sources in variation), can create more homely spaces for patients and staff alike. Ultimately, design decisions at the detail scale can create phenomenal elements which can play a large role towards generating a favorable atmospheric experience on the ward. It is also possible to explore how specific moments or places within a psychiatric ward might be designed to support patient agency, even on a closed ward. Sociopetal elements such as well-sited sitting spaces can offer moments of safety or retreat, leading to a greater sense of control. This can help patients feel more open to positive interactions with their colleagues and staff because they can safely observe or choose less committed ways of participation in daily or group activities. Zooming out from these details, we will also look at the layout of a psychiatric ward (i.e., accommodations) to help identify where opportunities such as those listed can be found. Simple gestures such as a slight widening of the corridor leading to important shared areas or better access to light or views of nature have been shown to improve outcomes for patients. What other design elements can be placed on or within wards to further this approach? Recent and ongoing projects within our practice will be shared to help workshop participants gather literacy in case they may be involved in future design projects.
Abstract: I will point out the important role of a thorough planning process in which all stakeholders work together starting in early phases of the design process ("phase 0") and engage in a truly interdisciplinary and iterative process throughout the entire planning phase as well as the building phase (where often ad hoc decisions have to be made in order to adjust to unforeseen circumstances). I will examine the terms "Consensus Design" and "Evidence-Based Design" and relate them to lived reality by giving a number of examples from own experience. Here I will contrast different approaches in carrying out the planning process and demonstrate how only a truly interdisciplinary and iterative process can result in individualised and optimised therapeutic environments, strengthen identity and reduce stigmatisation. As a support to future projects which workshop participants may be involved in, I will share some of the basic methods and tools which I have seen or used to help build and maintain this type of collaborative conversations throughout project phases.
W0019
Craig Driver & Ross Warren (architects) will present examples of an innovative waymaking (wayfinding) concept from a current development in Norwich, UK: the larger clinical and architectural project for a new "core" expansion of a large regional psychiatric hospital in the South-East of the UK.
W0020 Mental health of internally and externally displaced persons in war period
The presentation also provided a system of therapy and rehabilitation for internally and externally displaced persons, as well as an evaluation of their effectiveness.
W0021
The involvement of Croatian psychiatrists in helping the displaced persons from Ukraine M. Rojnic Kuzman 1,2 Abstract: After two years of the COVID-19 pandemic, Europe is facing a war, which has already caused numerous deaths and injuries and mass displacement, has aggravated the economic and energy crisis, has left most countries completely unprepared and has created a humanitarian crisis. The COVID-19 pandemic crisis pointed out the unpreparedness of the health (including mental health) sectors for emergency situations. However, we also learnt some of the practices that proved effective, including the fast creation of collaborative networks on a larger scale that also allowed the fast spread of good practices and the practical organisation of help. The European Psychiatric Association as well as individual national psychiatric associations started an informal network of solidarity for Ukraine on February 28th, 2022 to respond to the needs of people in Ukraine as verbalized by the Ukrainian mental health professionals, but also to the needs of the surrounding countries to which people from Ukraine fled. Through this network several actions were undertaken, including financial support, medical donations and education. The Croatian Psychiatric Association took the lead in the organisation of education for first-line helpers and volunteers from Ukraine and the countries surrounding Ukraine where displaced persons fled to, in collaboration with many partners.
W0022
High number of refugees in Germany - how is the mental health care dealing with this major challenge? Abstract: Europe is again confronted with a new dramatic emergency, a war which has already caused civilian victims, mass displacement and even fear about a nuclear war and energy crisis. Again, Europe is facing new waves of war refugees, forcibly displaced people. There is increasing evidence that a large proportion of refugees or forcibly displaced persons suffer from the consequences of traumatic events and exhibit psychological problems or develop mental disorders, including post-traumatic stress disorder, depressive and anxiety disorders, and relapses in psychotic episodes. European countries are trying to face with an extraordinary surge Disclosure of Interest: None Declared
W0018 Applying "Consensus Design" in the Development of Psychiatric Facilities M. Voss, Department of Psychiatry and Psychotherapy, Charité University Medicine & St. Hedwig Hospital, Berlin, Germany. doi: 10.1192/j.eurpsy.2023.173 | 2023-08-10T15:06:46.119Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "9ad89a974e1374d310229c5373844de89e590beb",
"oa_license": "CCBY",
"oa_url": "https://vue.metrocenter.steinhardt.nyu.edu/article/id/19/download/pdf/",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0b0317569ceafe6fca2cbd6fad5f64a703577697",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
9390356 | pes2o/s2orc | v3-fos-license | Chi-square analysis of the reduction of ATP levels in L-02 hepatocytes by hexavalent chromium
This study explored the reduction of adenosine triphosphate (ATP) levels in L-02 hepatocytes by hexavalent chromium (Cr(VI)) using chi-square analysis. Cells were treated with 2, 4, 8, 16, or 32 µM Cr(VI) for 12, 24, or 36 h. Methyl thiazolyl tetrazolium (MTT) experiments and measurements of intracellular ATP levels were performed by spectrophotometry or bioluminescence assays following Cr(VI) treatment. The chi-square test was used to determine the difference between cell survival rate and ATP levels. For the chi-square analysis, the results of the MTT or ATP experiments were transformed into a relative ratio with respect to the control (%). The relative ATP levels increased at 12 h, decreased at 24 h, and increased slightly again at 36 h following 4, 8, 16, 32 µM Cr(VI) treatment, corresponding to a “V-shaped” curve. Furthermore, the results of the chi-square analysis demonstrated a significant difference of the ATP level in the 32-µM Cr(VI) group (P < 0.05). The results suggest that the chi-square test can be applied to analyze the interference effects of Cr(VI) on ATP levels in L-02 hepatocytes. The decreased ATP levels at 24 h indicated disruption of mitochondrial energy metabolism and the slight increase of ATP levels at 36 h indicated partial recovery of mitochondrial function or activated glycolysis in L-02 hepatocytes.
Hexavalent chromium, Cr(VI), is a well-documented human carcinogen and is widely found in human living environments as the result of industrial production or discharge (1). Recently, Cr(VI) pollution has been found in crops, due to uptake of Cr(VI) from the soil, and in rivers in some regions of China (2,3). The toxicity caused by oral Cr(VI) ingestion is thought to be largely due to its effects on the liver, the main organ of biological metabolism, and liver damage (hepatotoxicity) from Cr(VI) exposure has been confirmed in animal experiments and in cultured L-02 hepatocytes, which showed hepatocyte ultrastructure disruption, mitochondrial damage and apoptosis (4)(5)(6).
Mitochondria are the main site of ATP synthesis, which is produced by the tricarboxylic acid (TCA) cycle and oxidative phosphorylation (OXPHOS) in the inner mitochondrial membrane (7). At present, the detailed interference effect of Cr(VI) on cellular ATP levels is not known. Generally, Cr(VI) can induce apoptosis and lead to a decrease in cell survival or cell number, and differences in cell number result in different cellular ATP levels. In this situation, manual adjustment of cell number is commonly applied to balance differences in cell number between groups, and small-sample t-tests or one-way ANOVA are applied to compare the differences between the Cr(VI) treatment groups and the control. However, in our view, adjusting the cell number is not the only way to analyze the toxic effects of Cr(VI) in vitro; the chi-square statistical test (χ2) can also be used to analyze the physiological or toxicological effects of Cr(VI) in vitro.
The χ2 test is a commonly used statistical method and consists of the Pearson chi-square, linear-by-linear chi-square, McNemar and Mantel-Haenszel tests, among others. Currently, χ2 analysis is widely applied to compare the difference of a relative ratio existing between two or more groups. It is frequently used in the fields of clinical and experimental epidemiology to explore etiological factors, to assess risk, and to predict trends of disease development. However, the χ2 test is rarely applied in the field of in vitro cytotoxicity. For this reason, after cultured L-02 hepatocytes were exposed to 0, 2, 4, 8, 16, and 32 μM Cr(VI) for 12, 24, or 36 h, a χ2 test was applied to analyze the interference effect by comparing the difference between cell survival rate and intracellular ATP levels, to establish a novel method of analyzing the cytotoxicity induced by toxic chemicals in vitro.
L-02 hepatocyte culture and Cr(VI) exposure
L-02 hepatocytes were cultured on a six-well plate with RPMI-1640 medium containing 15% newborn calf serum at 37°C in a 5% CO 2 atmosphere. The culture medium was changed every 1-2 days. When the cell density reached 60% confluence, the cells were exposed to Cr(VI) for different periods of time (12, 24, or 36 h) at 37°C. Untreated cultures were used as a control group. Cell survival was analyzed by the MTT method according to manufacturer instructions.
MTT reduction assay
The MTT assay was performed according to manufacturer instructions. The growing cells were collected by 0.25% Trypsin digestion, centrifugation, and supernatant removal. Two milliliters of RPMI-1640 culture medium containing 15% newborn calf serum was added to resuspend the cells as a single cell suspension. The cell suspension was then inoculated in a 96-well culture plate at a density of 1.0 x 10 4 cells/well. The following day, the cells were grown in medium containing 0, 2, 4, 8, 16, or 32 µM Cr(VI) for 12, 24, or 36 h at 37°C. Following Cr(VI) treatment, MTT was added at a volume of 10 μL/well and cultured for 4 h at 37°C, then 100 μL formazan lysate was added, and the cells were cultured for 6 h at 37°C. Finally, the 96-well culture plate was removed from the incubator and continually shaken for 5 min on a micro-oscillator to completely dissolve the formazan. Immediately, cell vitality was analyzed by measuring absorbance at 492 nm with a multifunction microplate reader (Thermo Varioskan Flash 3001, USA).
ATP bioluminescence assay
L-02 hepatocytes were seeded at a density of 2.5 x 10 5 cells/well on three six-well plates. When the cells reached 60% confluence, they were exposed to Cr(VI) for 12, 24, or 36 h. Following Cr(VI) treatment, intracellular ATP levels were determined using a bioluminescent ATP assay kit. The cells were disrupted in 200 μL lysis buffer by mechanical disruption, and centrifuged at 12,000 g to collect the cell supernatant. Meanwhile, an aliquot (100 μL) of an ATP detection working solution was added to each well of a black 96-well culture plate and incubated for 3 min at room temperature. Then, four replicates of 40-μL samples of the cell lysate from each group were added to the wells. After allowing the reaction to take place for a few seconds, the luminescence value was measured. In addition, the 96-well plates also contained serial dilutions of an ATP standard solution to generate a standard curve, and the ATP levels in L-02 hepatocytes were calculated by comparison with the ATP standard curve.
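The conversion from luminescence to ATP concentration via the standard curve can be sketched in a few lines of Python; the standard concentrations and readings below are invented for illustration only, and a simple linear fit is assumed over the working range of the assay.

```python
# Convert sample luminescence to ATP concentration with a linear standard curve.
# All numbers below are illustrative, not data from this study.
import numpy as np

std_conc = np.array([0.01, 0.1, 1.0, 10.0])            # ATP standards (µM), example values
std_lum = np.array([150.0, 1.45e3, 1.43e4, 1.41e5])    # their luminescence readings, example values

slope, intercept = np.polyfit(std_conc, std_lum, 1)    # luminescence ≈ slope * conc + intercept

def atp_from_luminescence(lum):
    """Interpolate ATP concentration (µM) from a luminescence reading."""
    return (lum - intercept) / slope

sample_lum = np.array([5200.0, 3100.0, 900.0])         # example lysate readings
print(atp_from_luminescence(sample_lum))
```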
Data analysis
Data were analyzed statistically with Microsoft Office Excel 2003 and SPSS 13.5. The results of the ATP and MTT assays are reported as means ± SD. The statistical significance of differences between means was determined by an F-test (ANOVA) followed by least significant difference (LSD) post hoc tests. The survival rate of the cultured cells (from the MTT assay) and the relative ATP levels are reported as percent (%) change from control. Statistical significance was determined by Pearson chi-square or linear χ2 tests. For the purpose of χ2 analysis, the compared groups were divided by the same number to keep the values below 100%. A P value < 0.05 (two-sided test) was accepted as statistically significant.
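For illustration, the kind of comparison described here can be reproduced with a Pearson chi-square test on a 2×2 table built from one treated group's relative survival rate and relative ATP level; the percentages below are invented examples, and scipy is used in place of SPSS.

```python
# Pearson chi-square comparison of relative survival rate vs. relative ATP level
# for one Cr(VI)-treated group (illustrative values, not data from this study).
from scipy.stats import chi2_contingency

survival_pct = 80.0   # MTT survival as % of control (example)
atp_pct = 55.0        # ATP level as % of control (example)

# 2x2 table: % of control vs. the remaining % for each endpoint
table = [[survival_pct, 100.0 - survival_pct],
         [atp_pct, 100.0 - atp_pct]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")
# P < 0.05 would indicate that the change in ATP level cannot be explained by the
# change in cell survival alone, i.e., Cr(VI) interferes with ATP synthesis.
```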
ATP level in L-02 hepatocytes
Following 12 h of Cr(VI) treatment, the ATP levels of L-02 hepatocytes were increased. However, after 24 h of treatment, intracellular ATP levels decreased significantly with Cr(VI) exposure, except for a slight increase in the 2 µM Cr(VI) group. Following 36 h of Cr(VI) treatment, the low ATP levels showed a slight up-regulation, while the ATP levels in the 16 and 32 µM Cr(VI) groups remained lower than control. The graphic change of relative ATP levels was described as a "V-shaped" curve (Table 2, Figure 1).
χ 2 analyses comparing cell survival rate and relative ATP levels
Following 12 h of Cr(VI) treatment, the χ 2 test showed a significant difference in ATP levels in the 8, 16, and 32 µM groups (P < 0.05). Following 24 h of Cr(VI) treatment, the χ 2 test showed a significant difference in ATP levels in the 4, 8, 16, and 32 µM groups (P < 0.05). Following 36 h of Cr(VI) treatment, the χ 2 test showed a significant difference in ATP levels in the 8 and 32 µM Cr(VI) groups (P < 0.05) ( Table 3).
Discussion
Cr(VI) is a common environmental pollutant that is widely used in electroplating, metal refining, printing, dyeing, tanning, and other industrial and agricultural processes, and its carcinogenicity has been documented by the International Agency for Research on Cancer (IARC) (1). In China, epidemiological studies suggested that occupational Cr(VI) exposure led to chronic damage of the liver, lung, nasal mucosa, skin and other organs, and an increased risk of cancer incidence (8)(9)(10). Furthermore, studies of Cr(VI) cytotoxicity revealed that Cr(VI) can readily cross the cell membrane through nonspecific anion channels, resulting in excessive generation of reactive oxygen species. Consequently, induced oxidative stress, genetic damage, mitochondrial dysfunction, activation of apoptosis-related caspases, and mitochondria-mediated apoptosis were observed (11)(12)(13), while chromium-induced genotoxicity and apoptosis were closely associated with Cr(VI) carcinogenesis (14).
Mitochondria are the main site of ATP synthesis, which is produced mainly through the TCA cycle and OXPHOS, together known as mitochondrial aerobic respiration (7). Under normal physiological conditions, mitochondrial aerobic respiration is the main means of energy provision, while glycolysis in the cytoplasm is negligible due to the low efficiency of its ATP production (7). Interestingly, glycolytic metabolism is activated as a compensatory means of energy production in many cancer cells (15)(16)(17). At present, it is unclear whether toxic chemicals also cause the activation of glycolysis in the process of toxicity. Several studies have shown that the disorder of energy metabolism induced by toxic chemicals is closely associated with mitochondrial dysfunction. For example, acute ethanol exposure led to suppression of mitochondrial ATP generation and fatty acid oxidation and decreased respiration and accessibility of mitochondrial adenylate kinase in permeabilized hepatocytes (18,19). Exposure to 5 and 10 µM Pb decreased cellular ATP levels in the neuronal cell lines PC-12 and SH-SY5Y, which correlated with voltage-dependent anion channel (VDAC) transcription and expression (20). VDAC is an important protein located in the outer mitochondrial membrane, which controls mitochondrial life and death (21). At present, the effect of Cr(VI) hepatotoxicity on cellular ATP levels remains ambiguous; therefore, it is important to elucidate the interference effect of Cr(VI) on ATP levels in L-02 hepatocytes.
Different doses of Cr(VI) can lead to differences in cell survival rates and cell number from control and consequently alter intracellular ATP levels. Therefore, it was interesting to scientifically evaluate the interference effect of Cr(VI) on ATP level in cells. For the first time, a chi-square test was used to analyze experimental data on the toxicity of Cr(VI), which is a novel method of analysis of the toxicological effects induced by Cr(VI). Chi-square testing was applied to compare differences between cell survival rates and ATP levels. If there were significant differences between the variables, this would indicate that the change in intracellular ATP levels is not related to changes in cellular survival rates, which could indicate that Cr(VI) interferes with ATP synthesis in L-02 hepatocytes.
The experimental results showed that Cr(VI) led to a gradual decrease of the cell survival rate in L-02 hepatocytes at 12, 24, or 36 h of exposure, and the 32 µM Cr(VI) treatment significantly decreased the cell survival rate. Meanwhile, the relative ATP level showed a pattern of Cr(VI) interference described as an increase at 12 h, a decrease at 24 h, and a new slight increase at 36 h, resembling a "V-shaped" curve. Furthermore, the results of the Pearson χ2 test showed that doses of 8, 16, and 32 µM Cr(VI) induced a significant increase of ATP levels at 12 h, while 4, 8, 16, and 32 µM Cr(VI) doses induced a significant decrease of ATP at 24 h. However, after Cr(VI) treatment for 36 h, the ATP levels increased slightly again, but the ATP levels in the 16 and 32 µM Cr(VI) groups were still lower than control. In our view, the increase of ATP level at 12 h indicated activation of mitochondrial aerobic respiration, the decreased ATP levels at 24 h indicated disruption of mitochondrial energy metabolism and, interestingly, the slight increase of ATP levels at 36 h indicated partial recovery of mitochondrial function or activated glycolysis in L-02 hepatocytes.
In summary, the χ2 test enabled us to distinguish the confounding effects of decreased cell survival rate from changes in intracellular ATP content. This study is the first to demonstrate that exposure to 32 µM Cr(VI) leads to a significant increase in cellular ATP at 12 h, a decrease at 24 h, and a slight increase again at 36 h. Furthermore, in future studies, the χ2 statistical test could also be considered as a reference for exploring cytotoxicity or pharmacological mechanisms of other chemicals. It would be interesting to further explore the molecular mechanism of mitochondrial energy metabolism- or glycolysis-related genes by the χ2 method during Cr(VI) toxicity. | 2016-05-04T20:20:58.661Z | 2012-03-23T00:00:00.000 | {
"year": 2012,
"sha1": "fab06ca1da741d430550eb88ac7406d4b80c79bc",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/bjmbr/v45n6/1341.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0049c5a24cf86950233e5b805efffd35df599cc1",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
220253191 | pes2o/s2orc | v3-fos-license | DiMSum: an error model and pipeline for analyzing deep mutational scanning data and diagnosing common experimental pathologies
Deep mutational scanning (DMS) enables multiplexed measurement of the effects of thousands of variants of proteins, RNAs, and regulatory elements. Here, we present a customizable pipeline, DiMSum, that represents an end-to-end solution for obtaining variant fitness and error estimates from raw sequencing data. A key innovation of DiMSum is the use of an interpretable error model that captures the main sources of variability arising in DMS workflows, outperforming previous methods. DiMSum is available as an R/Bioconda package and provides summary reports to help researchers diagnose common DMS pathologies and take remedial steps in their analyses.
Background
Deep mutational scanning (DMS), also known as massively parallel reporter assays (MPRAs) and multiplex assays of variant effect (MAVEs), enables parallel measurement of the effects of thousands of mutations in the same experiment [1,2]. In a basic DMS experiment, a library of sequence variants is constructed and deep sequencing before and after selection for an in vitro or in vivo activity is used to quantify the relative activity ("molecular fitness") of each genotype. Beyond assaying point mutations, the high-throughput nature of DMS facilitates the comprehensive study of combinations of mutations and their genetic interactions (epistasis) where fitness effects of individual mutations depend on the presence of other (background) mutations [3]. The resulting fitness landscapes are informative of protein [4][5][6], RNA [7][8][9], and regulatory element [10][11][12][13][14][15][16][17][18] function and have provided mechanistic insight into biological processes including the regulation of gene expression [10,19], protein-protein interactions [20], alternative splicing [21,22], and molecular evolution [7]. Deep mutational scans have the potential to improve human variant annotation [23,24] and protein and RNA structure determination [25][26][27]. In recognition of the growing number and importance of DMS assays in biomedical research, a dedicated platform for sharing, accessing, and visualizing these datasets has recently been launched [28].
A key feature of a DMS experiment is that it preserves the link between quantitative phenotypic effects and their underlying causal genotypes measured for many variants simultaneously (Fig. 1a). The three main steps can be summarized as follows: (1) construction of a library of DNA variants corresponding to the assayed biomolecule (genotype), (2) selection (or separation) of variants according to a given molecular function (phenotype), and (3) quantification of the variant abundances before and after selection by DNA sequencing (measurement), which is either done by counting sequencing reads of variants directly or unique barcodes previously linked to them [29][30][31][32][33]. A fitness score for each variant is then calculated by comparing its relative abundance (with respect to a reference sequence, e.g., wild-type) before and after selection. Moreover, often multiple independent biological replicates of the experiment are performed to help estimate the error of variant fitness scores, that is, a measure of fitness score reliability.
Several software packages have been developed to simplify and standardize the calculation of fitness scores for each variant from deep sequencing data [34,35], including the estimation of errors for these fitness scores [36]. Unbiased estimation of fitness score reliability is crucial for the interpretation of DMS experiments, for example, when assessing the effects of a variant in a disease gene, and more generally for all kinds of hypothesis testing and when assessing genetic interactions. The large-scale construction and high-throughput readout of thousands to hundreds of thousands of variants at once can, however, complicate basic quality control and identification of potential error sources and artifacts arising in DMS workflows. On the one hand, the many experimental steps of a DMS workflow can contribute errors to the final fitness measurements, especially when "bottlenecks" restrict the variant pool at certain steps in the workflow (Fig. 1a). On the other hand, libraries with a hierarchical variant abundance structure, arising from the combinatorial explosion of variants with multiple mutations (Fig. 1b), lead to distinct sources of error differentially affecting specific subsets of variants (see below). Moreover, the hierarchical variant abundance structure in combination with the typically low complexity of the genotype pool can lead to artifacts introduced by sequencing errors [37,38].
To tackle these issues, we developed DiMSum, a pipeline that allows the end-to-end processing of DMS datasets using an interpretable model for the magnitude and sources of errors in fitness score. The workflow is freely available as an R/Bioconda package (DiMSum) that represents a complete solution for obtaining reliable variant fitness scores and error estimates from raw sequencing files.
Results and discussion
Overview of the DiMSum pipeline
The DiMSum pipeline is implemented as an R/Bioconda package and a command-line tool that can be easily configured to handle a variety of DMS experimental designs (see the "Methods" section). The pipeline is organized in two separate modules (Fig. 1c): WRAP processes raw read (FASTQ) files to produce sample-wise variant counts, and STEAM uses these sample-wise variant counts to estimate variant fitness scores and their measurement errors.
DiMSum WRAP performs the following sequence processing steps: (1) assessment of raw read quality using FastQC [39], (2) error-tolerant removal of constant regions (not subjected to mutagenesis but required for primer binding and isolation/amplification of variables regions) using cutadapt [40], and (3) alignment and filtering of paired-end reads in a base quality-aware manner using VSEARCH [41] if required. DiMSum STEAM accepts a table of counts, (4) isolates substitution variants of interest, and then (5) performs statistical analyses to obtain associated fitness scores and error estimates. Briefly, an error model is fit to a high confidence subset of variants to determine count-based, additive, and multiplicative errors of variant fitness scores for all replicates (see below).
To increase flexibility, WRAP or STEAM can each be run in stand-alone mode if desired, e.g., to obtain fitness scores from a user-generated table of variant counts (Fig. 1c, "option B") or to obtain sample-wise variant counts for a custom downstream analysis (Fig. 1c, "option A"). A detailed R markdown report, viewable with any web browser and including summary statistics, diagnostic plots, and analysis tips, is also generated.
Estimates of variant fitness scores and associated errors
DiMSum calculates variant fitness scores as the natural logarithm of the ratio between sequencing counts in a replicate's output and input samples relative to the wild-type variant. It then uses replicate-specific error estimates to produce a weighted average of fitness scores across replicates for each variant.
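Written out explicitly (with notation introduced here only for clarity), the fitness score of a variant v in a single replicate is

$$ f_v \;=\; \ln\!\left(\frac{c_v^{out}}{c_v^{in}}\right) \;-\; \ln\!\left(\frac{c_{wt}^{out}}{c_{wt}^{in}}\right), $$

where c^in and c^out denote the sequencing counts of the variant (v) or the wild type (wt) in the input and output samples of that replicate.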
DMS experiments are typically replicated to judge the reliability of fitness score estimates due to random variability in the workflow. However, the number of replicates performed is usually low (e.g., 3 to 6), and estimates of measurement errors on a variant-by-variant basis can thus lack statistical power. DiMSum instead estimates measurement errors of fitness scores by sharing information across all assayed variants to increase statistical power (Fig. 2, see the "Methods" section for full detail).
We assume that the error in fitness scores is, to a first approximation, primarily arising due to the finite sequencing counts, and thus, variants with similar counts in input and output samples should have similar measurement errors [42,43]. If the error was purely arising due to sampling of variant frequencies by sequencing, the error could be well approximated by a Poisson distribution, with variance equal to the mean [44]. However, count data have been found to often be over-dispersed compared to this baseline Poisson expectation [45,46]. To account for such over-dispersion, we introduce additive and multiplicative modifier terms of the baseline error, which has been shown to accurately describe variability in transcriptomic count data [47][48][49].
Multiplicative error terms modify the overall error proportional to the error resulting from a variant's sequencing counts and likely describe error sources in workflow steps linked to sequencing (see below for a discussion of potential sources and experimental remedies). Across different DMS datasets, we find such multiplicative error terms to range from one all the way to more than 100, suggesting that over-dispersion can be a grave issue in DMS experiments (Fig. 2, Table 2).
Additive error terms are independent of a variant's sequencing read counts, thus affecting all variants to the same extent, which we attribute to variability arising from differential handling of replicate selection experiments (see below). Additive error terms are typically small (s. d. < 10%) and therefore only become apparent in variants that have small errors from sequencing counts (those with many counts), constituting a lower error limit (Fig. 2, Table 2).
We assume that both multiplicative and additive error terms can differ between replicates but are the same for all variants in each replicate; our error model therefore has 3n parameters (where n is the number of replicates), which are estimated by minimizing the squared difference between the empirical and model-predicted variance of fitness scores across replicates for all variants simultaneously (see the "Methods" section).
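A per-replicate formulation consistent with this description, written with notation introduced here and therefore to be read as a sketch of the model rather than its exact parameterization, is

$$ \sigma_{v,r}^{2} \;\approx\; \frac{m_r^{in}}{c_{v,r}^{in}} \;+\; \frac{m_r^{out}}{c_{v,r}^{out}} \;+\; a_r^{2}, $$

with one multiplicative term for the input sample (m_r^in), one for the output sample (m_r^out) and one additive term (a_r) per replicate r, giving the stated 3n parameters. Setting m_r^in = m_r^out = 1 and a_r = 0 recovers the Poisson baseline, in which the count-based variance of the log-ratio is approximately 1/c^in + 1/c^out.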
Manipulating a DMS dataset to artificially increase either multiplicative error terms or additive error terms in one replicate suggests that the DiMSum error model is capable of accurately estimating the magnitude of the error model terms (Additional file 1: Fig. S7).
Error model benchmarking
To benchmark the error model, we performed leave-one-out cross-validation on published DMS datasets. Here, error model parameters were trained on all but one experimental replicate of a dataset. The resulting error estimates were used to judge whether the fitness scores of variants differ between the training replicates and the held-out test replicate. We find fitness score differences between training and test replicates are normally distributed with the magnitude predicted by the error model (Fig. 3a).
Consequently, when testing for significant differences between the training and test replicates (using a z test), P values are uniformly distributed (Fig. 3b), as would be expected for replicates from the same experiment and indicating that the model correctly controls the type I error rate (rate of false positives).
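Concretely, using the notation of the figure legends below, the cross-validation test compares

$$ z_v \;=\; \frac{f_v^{train} - f_v^{test}}{\sqrt{\sigma_{train,v}^{2} + \sigma_{test,v}^{2}}}, $$

where σ_test,v is computed from error-model parameters fit on the training replicates only; if the error magnitudes are estimated correctly, z_v follows a standard normal distribution and the corresponding two-sided P values are uniform on [0, 1].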
We find that the DiMSum error model accurately estimates errors in fitness scores across twelve published DMS datasets that display various degrees of over-dispersion (Fig. 3, Table 2).
Figure 2 legend: a Deep mutational scan of TDP-43 290-331 [6]. Empirical variance (blue dots show average variance in equally spaced bins, error bars indicate avg. variance × (1 ± 2/# variants per bin)) is over-dispersed compared to the baseline expectation of variance being described by a Poisson distribution (black dashed line). The bimodality of the count-based error distribution results from the relatively low number of single nucleotide mutants which have high counts (thus low count-based error) and the many double nucleotide mutants which have low counts (thus higher count-based error). The DiMSum error model (red line) accurately captures the deviations of the empirical variance from Poisson expectation. Inset: bold cyan and magenta lines indicate multiplicative error term contributions to variance corresponding to input and output samples, respectively (dashed thin lines give input or output sample contributions to variance if multiplicative error terms were 1). The horizontal green line indicates the additive error term contribution. The red line indicates the full DiMSum error model. b The same as a but for a deep mutational scan of FOS [20] that shows more over-dispersion. c-f Multiplicative (c, e) and additive (in s.d. units, d, f) error terms estimated by the error model on the two datasets. Dots give mean parameters, error bars 90% confidence intervals.
Figure 3 legend: In turn, error models are trained on all but one replicate of a dataset, and z-scores of the differences in fitness scores between the training set f_train and the remaining test replicate f_test are calculated (i.e., fitness score differences normalized by the estimated error in the training set σ_train and test replicate σ_test; importantly, σ_test is estimated from error model parameters fit only on the training set replicates). Because fitness scores from replicate experiments should only differ by random chance, if the error models estimate the error magnitude correctly, z-scores should be normally distributed, and corresponding P values from a z test should be uniformly distributed. The tested error models are described in the "Results and discussion" and "Methods" sections. a, c Quantile-quantile plots of z-scores in the TDP-43 290-331 library (a) and FOS library (c) compared to the expected normal distribution. b, d Quantile-quantile plots of P values from the two-sided z test in the TDP-43 290-331 library (b) and FOS library (d) compared to the expected uniform distribution. e Estimated error magnitude relative to the differences observed between replicate fitness scores in twelve DMS datasets in leave-one-out cross-validation (see the "Methods" section). Relative error magnitude = 1 means the estimated magnitude of errors fits the data. Relative error magnitude < 1 means the estimated errors are too small. Boxplots indicate median and 1st and 3rd quartiles (box), and whiskers extend to 1.5× interquartile range.
We compared the DiMSum error model performance to several popular alternative approaches that have previously been used to model error in DMS data (see Table 1). We note that this is not an exhaustive comparison against all statistical models previously used to estimate measurement errors in DMS datasets. The chosen alternative approaches differ in whether they estimate errors for each variant from the observed variability of fitness scores, from the sequencing counts, or from a combination thereof, and in how much information sharing across variants they allow.
On the one hand, several studies have used the empirical variance of fitness scores across replicates to calculate errors for each variant individually [8,9,20,22,31,50]. Such error estimates are under-powered due to the typically low number of replicates in DMS studies, resulting in errors that are too large for some variants but too small for others. The latter results in an inflation of type I errors (Fig. 3b, d, "s.d.-based"). Error estimates improve with an increasing number of replicates, but type I error inflation persists even for a DMS dataset with 6 replicates (Fig. 3e, Domingo et al. [7]).
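The following small simulation (illustrative only, with assumed parameter values) shows why per-variant empirical variances from few replicates inflate the type I error rate: with three replicates, the per-variant standard error is noisy, and a z test that treats it as known rejects the null too often even when no true differences exist.

```python
# Illustrative sketch: with few replicates, per-variant s.d.-based errors are
# anti-conservative for a subset of variants, inflating the type I error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_variants, n_reps = 5000, 3
true_sd = 0.2                                   # same true error for every variant

f = rng.normal(0.0, true_sd, size=(n_variants, n_reps))    # null: no true fitness effects
mean_f = f.mean(axis=1)
sem = f.std(axis=1, ddof=1) / np.sqrt(n_reps)   # per-variant s.d.-based error estimate

z = mean_f / sem
p = 2 * stats.norm.sf(np.abs(z))
print("fraction of P < 0.05 (should be ~0.05):", (p < 0.05).mean())  # typically well above 0.05
```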
Building on this empirical variance approach, Weile et al. [51] used a Bayesian regularization of the empirical variance proposed by Baldi and Long [52], which uses a linear regression estimate of empirical variance across all variants as a prior. We find that this approach improves over only using the empirical variance to calculate errors, but still leads to inflation of type I errors (Fig. 3, "Bayes-reg s.d.").
On the other hand, several studies have assumed that errors can be modeled by a Poisson process based on a variant's sequencing counts [5-7,53]. Not unexpectedly, the performance of the "Poisson-based" approach depends on the over-dispersion of the data. It works well for datasets with little systematic over-dispersion but fails dramatically in those cases where the DiMSum error model estimates high multiplicative or additive errors (Fig. 3 and Additional file 1: Fig. S8, "Count-based"). Enrich2 [36] uses a random-effects model to account for over-dispersion over and above the count-based Poisson expectation on a variant-by-variant basis. In short, variant-specific random-effects terms increase the modeled error towards the empirical variance if it is larger than the count-based Poisson expectation. While this leads to accurate error estimates in datasets with little systematic over-dispersion (small multiplicative error terms, Fig. 3a, b, "Enrich2"), the under-powered estimation of the variant-specific random-effects terms leads to an inflation of type I errors in those datasets with systematic over-dispersion, similar to the other approaches based on variant-specific empirical variances (Fig. 3c, d, e, "Enrich2").
In summary, the DiMSum error model captures the major error sources arising in DMS workflows and improves in accuracy over previous approaches, while needing fewer replicate experiments and having fewer, but interpretable model parameters.
DiMSum provides diagnostic plots similar to Fig. 3a, c to help judge whether errors have been accurately modeled. Failure of the model to accurately estimate the errors points to shortcomings, potentially due to systematic error sources in the DMS workflow that cannot be accounted for by the error model, and calls for further action by the user (see below).
Potential sources of increased error in fitness score estimates
In what follows, we discuss error sources that might be captured by DiMSum's additive and multiplicative error model terms, error sources that cannot be captured by the error model, and how their impact on DMS experiments can be minimized.
Additive error terms are independent of variant read counts and therefore likely result from differential handling of replicate selection experiments. Because these error terms are typically small compared to errors resulting from low sequencing counts, they most often only affect fitness score estimates of very abundant variants, such as single mutants of the wild-type sequence in question. However, if such highly abundant variants are of interest, increasing sequencing coverage will not reduce the measurement errors of their fitness scores. DiMSum performs a simple scale and shift procedure to minimize inter-replicate differences in fitness score distributions prior to estimating error model parameters, thereby minimizing additive error terms that arise from linear differences between replicate selection experiments (see the "Methods" section and Additional file 1: Fig. S4b,c). Additional mitigation strategies to reduce additive error contributions should focus on streamlining the handling of replicate samples through the workflow (e.g., using master mixes, increasing pipetting volumes, reducing time lags in time-sensitive steps) as well as increasing the number of replicate experiments [53], even at similar overall sequencing coverage, as this will reduce errors for variants dominated by sequencing-independent errors through the weighted averaging of fitness scores across replicates.
Multiplicative error terms increase variants' errors by a multiple of their sequencing count-based error estimate. Potential error sources are thus likely linked to the sequencing steps in the DMS workflow, in particular the start of the selection step, DNA extraction from input and output samples, and the subsequent PCR amplification for sequencing library construction.
First, consider a bottleneck at the DNA extraction step, which arises if the number of unique DNA molecules extracted from the input/output samples does not exceed the number of molecules that are subsequently sequenced, i.e., the extracted variant pool is "over-sequenced." This restriction in the number of variant molecules along the workflow introduces additional random variability in variant frequencies that significantly contributes to, or even dominates, the overall count-based error; errors calculated solely from the number of downstream sequenced molecules will thus underestimate the true error.
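A short simulation (illustrative only; the pool size, sequencing depth, and variant frequency are assumed values) makes the point numerically: when the number of unique extracted molecules is much smaller than the sequencing depth, the sampling noise on a variant's frequency is governed by the bottleneck size, not by the read count.

```python
# Illustrative sketch: an extraction bottleneck (B unique molecules) sequenced to
# depth N >> B makes frequency noise scale with 1/B rather than 1/N.
import numpy as np

rng = np.random.default_rng(2)
true_freq, B, N, n_sims = 0.001, 50_000, 5_000_000, 2000

# Step 1: extraction bottleneck; Step 2: sequencing of the bottlenecked pool
extracted = rng.binomial(B, true_freq, size=n_sims)
sequenced = rng.binomial(N, extracted / B)

freq_var = (sequenced / N).var()
print("observed variance of frequency:   ", freq_var)
print("sequencing-only expectation (1/N):", true_freq * (1 - true_freq) / N)  # too optimistic
print("bottleneck expectation (1/B):     ", true_freq * (1 - true_freq) / B)
```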
In addition, Kowalsky et al. [54] found that PCR amplification protocols for sequencing library construction can introduce additional random variability to variant frequencies. Using our DiMSum error model, we find that multiplicative errors differ fivefold between the three PCR protocols tested (see the "Methods" section), thus showing that multiplicative errors can arise during the PCR amplification steps of the DMS workflow.
Lastly, another source of multiplicative errors that can potentially arise in input samples is a bottleneck at the start of the selection experiment. Here, if the number of variant molecules used to start the selection is similar to or smaller than the number of variant molecules extracted and sequenced from the input sample, this will randomly alter true variant frequencies at the start of the selection with error magnitudes on the order of or even larger than the error due to sequencing a finite subset of variant molecules.
For example, we recently performed a deep mutational scan of part of the protein GRB2 (Domingo et al., manuscript in preparation), for which the error model indicated a sixfold multiplicative error in the input replicates. A similar error was not observed in a second, related deep mutational scan for the same protein, suggesting a technical bottleneck specific to the input library preparation in the first experiment.
To minimize multiplicative error sources, thereby reducing measurement errors and ultimately saving sequencing costs, DMS workflows should ensure that an excess of variant molecules (~5-10×) is used in all experimental steps upstream of the sequencing step [55] and that PCR amplification protocols are optimized [54]. Additionally, sources of multiplicative errors due to bottlenecks at the DNA extraction step and other downstream steps, but not during the selection experiment, should be detectable (and correctable) by using unique molecular identifiers (UMIs) ligated to variant molecules during PCR-based sequencing library preparation [31,56,57].
Systematic error sources. Apart from sources of increased measurement error due to random error in DMS workflows, there are potentially also sources of systematic error that the DiMSum error model cannot account for and which might therefore inflate error or bias fitness scores in undetectable ways.
One potential source of systematic errors is (non-linear) differences between the replicate selection experiments. For example, we recently used DMS to quantify the toxicity of variants of TDP-43 when expressed in yeast, mutagenizing two sections of the C-terminal prion-like domain [6]. Variants displayed a range of fitness values relative to the wild-type sequence, both detrimental and beneficial. Importantly, one replicate experiment showed a marginal fitness distribution whose shape differed from those of the three other replicate experiments. In particular, non-toxic mutant variants were limited in how much faster they could grow compared to wild-type TDP-43, which perhaps resulted from nutrient limitation during the selection experiment (Additional file 1: Fig. S4c). Such non-linear effects that only affect a subset of variants (e.g., beneficial variants) cannot be corrected with simple linear normalization schemes (e.g., DiMSum's shift and scale normalization procedure) and will introduce systematic errors that the error model cannot adequately describe, thus potentially leading to biased fitness estimates as well as incorrect estimates of errors (Additional file 1: Fig. S6e,f). Thus, systematic differences between replicate selection experiments indicate the need for better normalization strategies or exclusion of affected replicates, as we decided for the TDP-43 replicate [6].
In summary, the DiMSum error model and diagnostic plots can also serve to judge and improve the experimental workflow and downstream analyses of DMS experiments.
Diagnosing sources of systematic errors in DMS workflows
The particular combination of low genotype complexity and hierarchical abundance structure in DMS experiments (Fig. 1b) can lead to issues arising from sequencing errors.
On the one hand, sequencing errors in reads of highly abundant variants can contribute counts to closely related, but low abundant, variants [37,38]. That is, sequencing errors in wild-type reads will contribute counts to single mutant variants, and sequencing errors in single mutant variants will contribute counts to double mutant variants and so on. DiMSum displays estimates of this sequencing error-induced "variant flow" in diagnostic plots of marginal count distributions to give the user an estimate of what fraction of reads of a set of mutants might be caused by sequencing errors (Fig. 4a, left column). Mitigation strategies to lower the fraction of reads per variant from sequencing errors include using higher minimum base quality (Phred score) thresholds, using paired-end sequencing to decrease the number of base call errors, or circumventing these issues altogether by using highly complex barcode libraries that are linked to variants [38].
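The back-of-the-envelope logic behind such "variant flow" estimates can be sketched as follows (illustrative only; the per-base error rate and read count are assumed values, and the actual DiMSum estimate is derived from base qualities in the data).

```python
# Illustrative sketch: expected read count for one specific single-nucleotide mutant
# arising purely from sequencing errors in wild-type reads.
def expected_error_reads(parent_reads: int, per_base_error: float = 1e-4) -> float:
    # A specific substitution at a specific position occurs with probability
    # per_base_error / 3 (three possible erroneous bases per position).
    return parent_reads * per_base_error / 3

wt_reads = 5_000_000
print(expected_error_reads(wt_reads))   # ~167 reads attributable to sequencing errors
```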
On the other hand, a potential pitfall linked to the combination of low genotype complexity, hierarchical abundance structure, and sequencing errors in DMS experiments is to mistake sequencing reads purely arising from sequencing errors for the presence of a variant in the assayed genotype pool. That is, at deep enough sequencing coverage, reads for any low-order nucleotide mutant variant will appear in the sequencing record, even if the variant was not actually present in the experiment.
Consider two examples from published DMS experiments. The first example is a DMS experiment in which NNS (N = A, T, C, or G; S = C or G) saturation mutagenesis was used to introduce individually mutated codons into the wild-type sequences of FOS and JUN [20]. Variants that have one mutated codon show a bimodal count distribution in the input samples (Fig. 4a, middle column). Variants in the higher peak have similar read counts no matter whether one, two, or three nucleotides were mutated, consistent with NNS mutagenesis operating on the codon level and the number of mismatched base pairs having little impact on mutation efficacy. In contrast, read counts for variants in the lower peaks show a dependency on the number of nucleotides mutated and coincide with DiMSum's estimate for sequencing error-induced variant flow. The second example is from a DMS experiment in which doped oligonucleotide synthesis was used to introduce nucleotide mutations into a tRNA [50]. Variants with one or two mutated nucleotides show a bimodal count distribution in the input samples (Fig. 4a, right column). The read counts for variants in the upper peaks depend on the number of nucleotides mutated, consistent with read counts per variant being strongly affected by the combinatorics of mutational space (Fig. 1b). The read counts for variants in the lower peaks also depend on the number of nucleotides mutated and coincide with DiMSum's estimate for sequencing error-induced variant flow. Are variants in the lower read count peaks of these experiments really not present in the variant libraries before sequencing? And at which steps in the DMS workflow were the variants lost?
Potential bottlenecks (or inefficiencies) might arise during library construction, transfer of the library into the assay cell population, or at subsequent DNA extraction and sequencing library preparation steps (Fig. 1a).
We find that comparing count distributions between sequencing samples can provide additional support for determining whether subsets of variants arose purely from sequencing errors and can help diagnose at which workflow step variants might have been lost. This both informs improvements to future DMS experiments and guides the strategy for avoiding systematic errors in fitness calculations for the present dataset. We exemplify this in Fig. 4b using simulated bottlenecks in a deep mutational scan of TDP-43.
If variants have not been constructed or have been lost at initial library preparation steps and are therefore not present in any replicate experiment, count distributions between replicate input samples should be highly correlated and the same variants should fall into the same peaks of bimodal read count distributions (Fig. 4b, "library bottleneck"), as is also apparent in the FOS-JUN dataset (Additional file 1: Fig. S9). Variants in the lower peak of the distribution should be discarded from all replicates, e.g., using DiMSum's "hard" read count thresholds for variant filtering (Fig. 4c), and downstream analysis should proceed as normal, as in the published analysis of the FOS-JUN dataset [20].
In contrast, if the variant loss was replicate-specific, e.g., if transformations into replicate cell populations were incomplete, read count distributions should display "flaps": subsets of variants that appear at high counts in one replicate (variant was assayed) but at low counts in another (variant counts arise solely from sequencing errors) (Fig. 4b, "replicate bottleneck"). A conservative approach to avoid systematic errors in fitness score calculations is to use "hard" read count thresholds to discard all variants appearing in lower read count peaks in any replicate. Additionally, DiMSum allows the user to choose a "soft" threshold to discard variants only in the replicates where they appear in the low count peaks, therefore allowing their fitness to still be estimated from the replicates in which they are actually present, resulting in an increased number of variants that can be used for downstream analyses (Fig. 4c). Finally, if variants were lost at the DNA extraction steps, this should not only show up as flaps in count distributions between replicate input samples, but also between input and output samples of the same replicate (Fig. 4b, "DNA extraction bottleneck"), as is observed for the tRNA dataset (Additional file 1: Fig. S9).

Fig. 4 legend: Effects of bottlenecks on variant count distributions and fitness scores. a Input sample count distributions of previously published DMS experiments [20,50]. For FOS and FOS-JUN datasets, counts of single AA variants with one, two, or three nucleotide substitutions in the same codon are shown. For the tRNA dataset, all variants with one, two, or three nucleotide substitutions are shown. Wild-type counts are indicated by the black dashed line. Expected count frequencies purely due to sequencing errors are indicated by red and green dashed lines for single and double nucleotide substitution variants, respectively. Black arrows indicate sets of variants that have likely not been assayed but whose sequencing reads arise from sequencing errors. b Simulation of bottlenecks at various steps of the DMS workflow based on a previously published DMS dataset [6]. Scatterplots show input and output sample counts for variants with one or two nucleotide substitutions in the original data or after simulating 3% library, replicate, or DNA extraction bottlenecks (from left to right). Hexagon color indicates the number of nucleotide substitutions and fill the number of variants per 2d bin (see legend). Black arrows indicate sets of double nucleotide variants whose sequencing reads originate solely from sequencing errors. Dotted (or dashed) horizontal/vertical lines indicate soft (or hard) variant count thresholds used in downstream DiMSum analyses (see c). c Comparison of fitness scores from simulated datasets with (y-axis) or without (x-axis) the indicated bottlenecks. Variants are categorized by their robustness to filtering with hard (variants have to appear above the threshold in all replicates) or soft thresholds (variants have to appear above the threshold in at least one replicate) of 10 read counts. For the DNA extraction bottleneck, read count thresholds were also applied to output samples. Pearson correlation coefficients are indicated. The dashed line indicates the relationship y = x. Note that correlation coefficients are lower for soft than hard thresholds, because a subset of variants has fewer replicate measurements.
To avoid biased fitness estimates in this case, all variants that do not appear in the high read count peak of both the input and output samples of the same replicate experiment need to be discarded. Often, fitness differences between variants also result in bimodal output count distributions, meaning that in practice it can be hard or impossible to determine whether variants with low counts in output samples have low fitness or were simply not assayed. As for replicate bottlenecks, "soft" thresholds can be used to obtain fitness estimates for all variants that appear in the input and output samples of at least one replicate, thereby increasing the number of variants that can be used for downstream analyses (Fig. 4c).
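The distinction between "hard" and "soft" read count thresholds can be expressed in a few lines of code. The sketch below is illustrative only (a hypothetical count matrix and threshold; DiMSum's own implementation and data structures differ).

```python
# Illustrative sketch of "hard" vs "soft" input read count filtering.
import numpy as np

def filter_variants(input_counts: np.ndarray, threshold: int = 10, mode: str = "hard"):
    """input_counts: variants x replicates array of input sample read counts.

    hard: keep a variant only if it exceeds the threshold in ALL replicates.
    soft: keep a variant (per replicate) only in replicates where it exceeds the
          threshold; it must pass in at least one replicate to be retained at all.
    """
    passes = input_counts >= threshold
    if mode == "hard":
        keep_variant = passes.all(axis=1)
        keep_per_replicate = np.repeat(keep_variant[:, None], input_counts.shape[1], axis=1)
    elif mode == "soft":
        keep_variant = passes.any(axis=1)
        keep_per_replicate = passes & keep_variant[:, None]
    else:
        raise ValueError("mode must be 'hard' or 'soft'")
    return keep_variant, keep_per_replicate

counts = np.array([[50, 60, 3], [2, 1, 0], [200, 150, 180]])
print(filter_variants(counts, mode="hard")[0])   # [False False  True]
print(filter_variants(counts, mode="soft")[0])   # [ True False  True]
```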
To further illustrate how experimental bottlenecks can adversely affect the conclusions of a study, we evaluated their impact on the central conclusion of a previous publication. We previously showed that the fitness effects of amino acid substitutions in the prion-like domain of TDP-43 are correlated with the increase in a principal component of amino acid properties (PC1) strongly related to the hydrophobicity of the protein [6]. Repeating this analysis after simulating library, replicate, or DNA extraction bottlenecks in the original data results in lower correlations in all cases (Additional file 1: Fig. S10, left column). Imposing hard or soft minimum read count filtering as described above increases the correlation between the measured fitness of amino acid substitutions and their corresponding predicted effects on PC1/hydrophobicity (Additional file 1: Fig. S10, middle column).
Together, this demonstrates that it is crucial to discard variants purely arising from sequencing errors to avoid systematic errors and shows how DiMSum can be used successfully to prioritize variants, minimize biases in downstream analyses, and improve biological conclusions.
Conclusions
We have developed a customizable pipeline-DiMSum-that provides a complete solution for the analysis of DMS data. DiMSum is easy to run, can handle a wide variety of different library designs, provides detailed reporting, and produces fitness and error estimates from raw DNA sequencing data in a matter of hours. Importantly, DiMSum's interpretable error model is able to identify and account for measurement errors in fitness scores resulting from random variability in DMS workflows and additionally provides the user with diagnostics to identify and deal with common causes of systematic errors. We have also shown that the DiMSum error model provides accurate error estimates across many published DMS datasets, outperforming previously used methods, and that diagnostic plots enable simple remedial steps to be taken that have the potential to dramatically improve the reliability of results from downstream analyses.
DiMSum software implementation
DiMSum is implemented as an R/Bioconda package and a command-line tool compatible with Unix-like operating systems (see installation instructions: https://github.com/lehner-lab/DiMSum). The pipeline consists of five stages grouped into two modules that can be run independently: WRAP (DiMSum stages 1-3) processes raw FASTQ files generating a table of variant counts and STEAM (DiMSum stages 4-5) analyses variant counts generating variant fitness and error estimates. WRAP requires common software tools for biological sequence analysis (FastQC [39], cutadapt [40], VSEARCH [41], and starcode [58]) whereas STEAM has no external binary dependencies other than Pandoc. A detailed R markdown report including summary statistics, diagnostic plots, and analysis tips is automatically generated. DiMSum takes advantage of multi-core computing if available. Further details and installation instructions are available on GitHub (https://github.com/lehner-lab/DiMSum).
DiMSum data preprocessing
FastQ files from paired-end sequencing of the TDP-43 290-331 library [6] were processed with DiMSum v1.1.3 using default parameters with minor adjustments. First, 5′ constant regions were trimmed in an error-tolerant manner ("cutadaptErrorRate" = 0.2). Read pairs were aligned, and those that contained base calls with posterior Phred scores (posterior score takes both Phred scores of aligned bases into account) below 30 were discarded ("vsearchMinQual" = 30, "vsearchMaxee" = 0.5). Finally, variants with greater than two amino acid mutations were removed ("maxSubstitutions" = 2). One out of four input replicates (and all associated output samples) was discarded ("retainedReplicates" = 1,3,4) from all results shown in main text figures because the shape of its fitness distribution significantly differed from those of three other replicate experiments (see Additional file 1: Fig. S4b,c). Note that Additional file 1: Fig. S1-6 show DiMSum summary report plots when using all four replicates.
Simulated bottlenecked datasets were similarly processed with DiMSum v1.1.3 using hard, soft, and no filtering. For datasets with library and replicate bottlenecks, filtering was performed on the input samples only ("fitnessMinInputCountAll" = 10 for the hard threshold or "fitnessMinInputCountAny" = 10 for the soft threshold), whereas for datasets with DNA extraction bottlenecks, output samples were additionally filtered ("fitnessMinOutputCountAll" = 10 for the hard or "fitnessMinOutputCountAny" = 10 for the soft threshold).
DMS datasets for leave-one-out cross-validation were processed with DiMSum v1.1.3 except the data for the Protein G B1 domain (GB1 [5]), whose variant counts were obtained from Otwinoski [59]. tRNA datasets [50] obtained from SRA (SRP134087) were analyzed using DiMSum with default parameters except fitnessMinInputCountAll = 2000 and fitnessMinOutputCountAll = 200 to remove flaps likely due to DNA extraction bottlenecks, resulting in an average number of 2400 variants that could be analyzed per selection experiment. The use of soft thresholds would result in an average increase in variant counts of 200% across the four selection experiments. For datasets with only one input sample (GB1 and tRNA), we replicated the input sample to create as many matched input-output samples as necessary for the error model analysis. All experimental design files and bash scripts with command-line options required for running DiMSum on the above datasets are available on GitHub (https://github.com/lehner-lab/dimsumms).
DiMSum fitness estimation and error modeling
DiMSum calculates fitness scores of each variant i in each replicate r as the natural logarithm of the ratio between output read counts N_i^output and input read counts N_i^input, relative to the wild-type variant wt:

$$f_{i,r} = \log\left(\frac{N^{\mathrm{output}}_{i,r}}{N^{\mathrm{input}}_{i,r}}\right) - \log\left(\frac{N^{\mathrm{output}}_{wt,r}}{N^{\mathrm{input}}_{wt,r}}\right)$$

Optionally, DiMSum applies a scale and shift procedure to minimize linear differences in fitness scores between replicates. This is done by fitting a slope and an offset parameter to each replicate's fitness scores in order to minimize the sum of squared deviations between variants' replicate fitness scores and their respective averages. Moreover, it is ensured that wild-type variants have an average fitness score of 0 across replicates.
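A minimal sketch of this fitness calculation and of a simple scale-and-shift normalization follows (illustrative only, not the DiMSum implementation; the alternating-refinement loop is one possible way to approximate the least-squares fit described above).

```python
# Sketch: fitness scores relative to wild-type plus a simple scale-and-shift
# normalization across replicates (illustrative; not DiMSum code).
import numpy as np

def fitness_scores(n_in: np.ndarray, n_out: np.ndarray, wt_index: int) -> np.ndarray:
    """n_in, n_out: variants x replicates arrays of input/output read counts."""
    f = np.log(n_out / n_in)
    return f - f[wt_index]          # express fitness relative to the wild-type variant

def scale_and_shift(f: np.ndarray, wt_index: int) -> np.ndarray:
    """Fit a slope and offset per replicate so that replicate fitness scores agree
    with the across-replicate average (simple alternating least-squares refinement)."""
    f = f.copy()
    for _ in range(10):
        avg = f.mean(axis=1)
        for r in range(f.shape[1]):
            slope, offset = np.polyfit(f[:, r], avg, 1)
            f[:, r] = slope * f[:, r] + offset
    return f - f[wt_index].mean()   # wild-type average fitness set to 0 across replicates

n_in = np.array([[1000., 1200.], [400., 380.], [50., 65.]])
n_out = np.array([[900., 1100.], [100., 90.], [60., 80.]])
f = fitness_scores(n_in, n_out, wt_index=0)
print(scale_and_shift(f, wt_index=0))
```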
The measurement error of fitness scores is modeled based on Poissonian statistics from the sequencing counts of the variant (using the approximation $\sigma^2(\log(N)) = \sigma^2(N)/N^2 = 1/N$ for Poisson-distributed counts $N$), with multiplicative modifier terms ($m^{\mathrm{input}}_r$ for the input sample and $m^{\mathrm{output}}_r$ for the output sample) and an additive modifier term ($a_r$) that are common to all variants but specific to each replicate experiment performed:

$$\sigma^2_{i,r} = \frac{m^{\mathrm{input}}_r}{N^{\mathrm{input}}_{i,r}} + \frac{m^{\mathrm{output}}_r}{N^{\mathrm{output}}_{i,r}} + a_r$$

Note that we omit the inclusion of error terms arising from the wild-type normalization, as these error terms are typically small due to high wild-type counts.
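For reference, this error model is a one-line function of the counts and the three replicate-specific parameters; the sketch below uses placeholder parameter values.

```python
# Sketch of the per-variant, per-replicate error variance under the model above:
# multiplicative terms scale the 1/N Poisson contributions; the additive term is
# count-independent. Parameter values are placeholders.
import numpy as np

def model_variance(n_in, n_out, m_in, m_out, a):
    """n_in, n_out: read counts for one replicate; m_in, m_out, a: replicate-specific
    error model parameters."""
    return m_in / np.asarray(n_in, float) + m_out / np.asarray(n_out, float) + a

print(model_variance(n_in=[1000, 50], n_out=[800, 20], m_in=2.0, m_out=1.2, a=0.01))
```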
The error model is fit to a high confidence set of variants (variants that have enough sequencing reads in the input samples to display the full range of fitness scores and for which at least one sequencing read has been observed in all output samples, see Additional file 1: Fig. S4a).
The error model parameters are estimated by sharing information across all variants, that is, by minimizing the sum over all variants of the squared deviation between the average error model prediction across replicates and the observed variance of fitness scores across replicates:

$$\min_{m,\,a} \; \sum_{R} \sum_{i} \omega_{R,i} \left( \mathrm{Var}_{r \in R}\left(f_{i,r}\right) - \frac{1}{n_R} \sum_{r \in R} \sigma^2_{i,r} \right)^2$$

In order to reliably estimate the replicate-specific additive error terms a_r, the error model fit is performed not only with variances/error model predictions across all replicates of the DMS experiment, but across all possible subsets of replicates R of size at least two simultaneously (e.g., for three replicates, R ∈ ({1, 2, 3}, {1, 2}, {1, 3}, {2, 3}) and n_R = {3, 2, 2, 2}). This is done because estimation of additive error terms depends mostly on high count variants (which have little to no error contribution from sequencing counts) and the error model cannot distinguish how much additive variability was contributed by any one replicate unless further constrained (by subsets of lower-order combinations). However, this means that if only two replicates of the DMS experiment have been performed, the error model tends to split additive error contributions equally between replicates for lack of more information, i.e., additive error terms cannot be used as a diagnostic.
Moreover, squared deviations between the variance and the average error model predictions per variant and replicate subset are weighted (ω_{R,i}) according to three factors: first, the number of replicates in the replicate subset R, to account for the differential uncertainty in empirical variance estimates; second, the inverse of the average count-based error according to Poissonian statistics, to minimize relative, not absolute, deviations between the variance of fitness scores and the respective error model estimates; and third, a term re-weighting all variants with the same number of mutations according to $\sqrt{\max(n_m, \sqrt{n_i})}$, where n_m is the number of variants with that number of mutations and n_i is the overall number of variants in the high-confidence variant pool. This is done to place more weight on the typically fewer lower-order mutant variants (e.g., single mutants) and therefore to improve estimates of additive error terms. The error model is fit 100 times on bootstrapped data. For each bootstrap, at most 10,000 variants are drawn with replacement from the high-confidence variant pool. The average parameters across bootstraps are used to calculate measurement error estimates.
The error estimates are then used to merge fitness scores across replicates by error-weighted averaging:

$$f_i = \frac{\sum_r f_{i,r} / \sigma^2_{i,r}}{\sum_r 1/\sigma^2_{i,r}}$$

The corresponding error of these merged fitness scores is calculated as:

$$\sigma^2_i = \frac{1}{\sum_r 1/\sigma^2_{i,r}}$$

DiMSum reports merged fitness scores and associated errors for all variants that have been observed in at least one experimental replicate (actual merging is performed for variants observed in two or more replicates; for variants only observed in one experimental replicate, merged fitness scores and errors are simply those computed for this one replicate).
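This inverse-variance merging can be sketched directly from the formulas above (illustrative values; the merged error is the reciprocal of the summed weights).

```python
# Sketch of error-weighted (inverse-variance) merging of fitness scores across replicates.
import numpy as np

def merge_replicates(f, sigma2):
    """f, sigma2: variants x replicates arrays of fitness scores and error variances."""
    w = 1.0 / sigma2
    f_merged = (f * w).sum(axis=1) / w.sum(axis=1)
    sigma2_merged = 1.0 / w.sum(axis=1)
    return f_merged, sigma2_merged

f = np.array([[0.1, 0.3], [-1.2, -0.9]])
sigma2 = np.array([[0.01, 0.04], [0.02, 0.02]])
print(merge_replicates(f, sigma2))
```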
DiMSum diagnoses the consistency of the error model with the data by estimating how well it describes fitness score differences between replicates. If all error sources have been accounted for and the model parameters accurately attribute error contributions to the different replicates, the predicted error magnitude should match the randomly arising differences in fitness scores between replicates of the same experiment, which we find to be normally distributed across all DMS datasets investigated. We thus calculate a z-score of the fitness score differences between replicates as

$$z_{j,i} = \frac{f_{j,i} - f_{r \neq j,i}}{\sqrt{\sigma^2_{j,i} + \sigma^2_{r \neq j,i}}}$$

where f_{r≠j,i} and σ²_{r≠j,i} denote the merged fitness score and error over the remaining replicates; z_{j,i} should follow a normal distribution centered on zero and with unit standard deviation. DiMSum outputs quantile-quantile plots of z_{j,i} as well as its corresponding mean and standard deviation and the P value distribution from a two-sided z test (Additional file 1: Fig. S6e,f). Mean values of z_{j,i} different from zero suggest that fitness score estimates are biased, which points to the presence of systematic errors not accounted for by the "scale and shift" normalization procedure and the error model. A standard deviation different from one suggests that the error has been over-estimated (s.d. < 1) or under-estimated (s.d. > 1).
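The calibration check itself is simple to reproduce; the following sketch (illustrative only, using pairwise replicate differences rather than DiMSum's leave-one-replicate-out comparison) shows that z-scores have mean ~0 and standard deviation ~1 when the supplied error variances match the true variability.

```python
# Sketch of the replicate-difference diagnostic: well-calibrated errors give
# z-scores with mean ~0 and standard deviation ~1.
import numpy as np
from itertools import combinations

def replicate_zscores(f, sigma2):
    """f, sigma2: variants x replicates arrays of fitness scores and error variances."""
    z = []
    for j, k in combinations(range(f.shape[1]), 2):
        z.append((f[:, j] - f[:, k]) / np.sqrt(sigma2[:, j] + sigma2[:, k]))
    return np.concatenate(z)

rng = np.random.default_rng(3)
sigma2 = np.full((1000, 3), 0.04)
f = rng.normal(0.0, np.sqrt(sigma2))        # replicate noise drawn from the stated errors
z = replicate_zscores(f, sigma2)
print(round(z.mean(), 3), round(z.std(), 3))   # ~0 and ~1 when errors are well calibrated
```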
Error model validation and benchmarking
Artificial increases in multiplicative and additive error terms (Additional file 1: Fig. S7)

In order to show that the error model can accurately capture multiplicative and additive error sources, we performed a manipulation of the TDP-43 290-331 library (using only replicates 1, 3, and 4) in which we artificially increased multiplicative input terms (by multiplying input read counts by a factor of 3 or 10) or additive error terms (by adding values of 0.3 or 1 to normalized fitness scores immediately before error model fitting) for replicate 1.
Leave-one-out cross-validation
To benchmark the error model and compare it against alternative approaches to quantify measurement error, we performed leave-one-out cross-validation on published DMS datasets. In contrast to the error model benchmarking performed as a diagnostic output from the DiMSum pipeline (see above), we trained the error models on all but one replicate of a dataset in turn. These error models were then used to calculate a z-score of the fitness score differences between the unseen replicate j and the average over the training replicates (r ≠ j) as

$$z_{j,i} = \frac{f_{j,i} - f_{r \neq j,i}}{\sqrt{\langle \sigma^2_{r \neq j,i} \rangle + \sigma^2\!\left(f_{r \neq j,i}\right)}}$$

where $\langle \sigma^2_{r \neq j,i} \rangle$ is the prediction of the error in the test replicate j using the error model parameters of the training replicates. In Fig. 3a
Alternative error models
We compared the DiMSum error model to four alternative error models.

First, a "variance-based" error model, where the error of fitness scores for each variant is calculated from the empirical variance of fitness scores between replicates, i.e., $\sigma^2_i = \mathrm{Var}_r(f_{i,r})$, with the measurement error of fitness scores merged across replicates given by this variance divided by the number of replicates n.

Second, an error model using a Bayesian regularization of the empirical variance, as introduced by Weile et al. [51]. Here, the empirical variance of each variant's fitness scores between replicates is regularized with a prior, which is a regression of the empirical variance on input sequencing counts and fitness scores. The measurement error of fitness scores merged across replicates is then a weighted combination of the prior estimate and the empirical variance, with d as the degrees of freedom of the regression, $\sigma^2_{i,\mathrm{prior}}$ as the prior estimate of the variance for variant i, and n as the number of replicate experiments. For the variance-based error models, the measurement error for the unseen test replicate was estimated as the average error of the individual training replicates. The z-scores for the variance-based error models in the leave-one-out cross-validation were thus calculated as above, but with these variance-based estimates in place of the error model predictions.

Third, a minimal "count-based" error model, where the error of fitness scores is estimated from sequencing read counts in input and output samples under the assumption that sequencing counts follow a Poisson distribution, i.e., as for the DiMSum error model but without multiplicative or additive terms.

Fourth, the Enrich2 error model by Rubin et al. [36], which is based on sequencing counts but modified with variant-specific correction terms ("random-effects model"). Here, error estimates are calculated from input and output sequencing read counts under Poisson assumptions, but with an additional variant-specific random-effects term. This term corrects error estimates if the observed variability of fitness scores across replicates is larger than the estimated count-based error alone. That is, if $\sigma^2_i < \mathrm{var}(f_i)$, the random-effects term $s^2_i$ is estimated to be greater than 0 such that the total error matches the observed variability, i.e., the error estimate becomes equivalent to that of the variance-based error model described above. To calculate z-scores in the leave-one-out cross-validation for the Enrich2 error model, we estimated random-effects terms across the training replicates and then also used them to modify the count-based error estimate of the test replicate.
Multiplicative errors from PCR amplification
Kowalsky et al. [54] previously reported increased variability in sequencing read counts due to PCR amplification protocols (see Table S3 of Kowalsky et al. [54]). The raw sequencing data for the three PCR amplification protocols tested was obtained from the authors. Paired-end reads were merged with USEARCH [60] using the usearch -fastq_mergepairs command with a minimum per base posterior Q-score of 20, and reads for unique variants were counted using the usearch -fastx_uniques command. To allow estimation of multiplicative and additive error terms, we treated the sequencing data from each PCR amplification protocol as replicate experiments. Variant fitness scores were calculated as the natural logarithm of the read count frequency (read counts divided by the total number of reads in each replicate). The error of fitness scores was calculated as the inverse of variant read counts. The DiMSum error model was adjusted to fit only one multiplicative error term and one additive error term per replicate. Additive errors were small compared to the variability observed. Multiplicative error terms were 1.9 ± 0.4 for method A (using one amplification cycle with all primers at once), 1.4 ± 0.1 for method B (two amplification cycles interspersed with an ExoI degradation step), and 6.4 ± 0.6 for method C (two amplification cycles).
Simulated bottlenecks in a previously published DMS dataset
We used a DiMSum processed DMS dataset from Bolognesi and Faure et al. [6] (290-331 library) to simulate the effects of various experimental bottlenecks.
Simulating a library bottleneck
A library bottleneck of size α = 0.03 (meaning that only 3% of molecules pass through the bottleneck) was simulated based on the observed average frequencies of variants in the input samples. A bottleneck factor

$$b_i = \mathrm{Pois}\left(N^{\mathrm{input}}_i \cdot \alpha\right) / N^{\mathrm{input}}_i$$

was calculated to capture the subsequent changes in read count frequencies that would occur during such a bottleneck. For variants with high counts in input samples, the bottleneck factor will be close to α. However, for low count variants, the bottleneck factor will vary considerably. Some variants, especially variants with $N^{\mathrm{input}}_{1,i} < 1/\alpha$, will not pass through the bottleneck, i.e., b_i = 0, while others may pass through the bottleneck even though there was only one molecule of that variant present in the pool, i.e., b_i = 1.
To simulate how read counts in sequencing samples (both input and output sequencing samples) change due to this bottleneck, we sampled N times from a multinomial distribution Mult(1, π_s), where N is the total number of sequencing reads in the "original" sample s, and π_s is a vector of probabilities given by

$$\pi_s = \left(\pi_{s,1}, \pi_{s,2}, \ldots, \pi_{s,k}\right)$$

where k is the total number of different variant sequences and π_{s,i} is the frequency of variant i in sample s after the library bottleneck (e.g., for the replicate 1 output sample):

$$\pi_{s,i} = \frac{b_i \, N_{s,i}}{\sum_{j=1}^{k} b_j \, N_{s,j}}$$

To simulate sequencing errors in the new modified data, we assumed that the probability that a given sequencing read is misidentified is 0.02, based on a length of the mutated sequence of 126 nt and a per base misread frequency of 0.0001 [6], and that all errors involve WT molecules being misclassified as single mutants, single mutant molecules being misidentified as double mutants, double mutant molecules being misidentified as triple mutants, or triple mutant molecules being misidentified as quadruple mutants. The total number of triple mutant molecules that will be misidentified as quadruple mutants is 0.02N, where N is the total number of triple mutant reads. These counts were randomly subtracted from the triple mutant counts and added to the counts of all the quadruple mutants. The total number of double mutant molecules misidentified as triple mutants is 0.02N′, where N′ is the total number of double mutant reads. These counts were subtracted from the double mutant counts and randomly distributed among all the triple mutants. This process was repeated to simulate single mutants being misidentified as double mutants and WT molecules being misidentified as single mutants.
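The bottleneck and sequencing-error steps can be sketched as follows (illustrative only; toy counts, a simplified proportional subtraction of error reads from donor variants, and the reconstructed frequency formula above are assumptions rather than the exact published procedure).

```python
# Illustrative sketch of the library bottleneck simulation: Poisson-thin the variant
# pool, resample sequencing reads from bottlenecked frequencies, then move a fixed
# fraction of reads "down" the mutant orders to mimic sequencing errors.
import numpy as np

rng = np.random.default_rng(4)

def simulate_bottleneck(counts, alpha=0.03):
    """counts: read counts per variant in the original sample."""
    counts = np.asarray(counts, float)
    b = rng.poisson(counts * alpha) / counts          # per-variant bottleneck factor
    freq = (b * counts) / (b * counts).sum()          # frequencies after the bottleneck
    n_reads = int(counts.sum())
    return rng.multinomial(n_reads, freq)             # resampled sequencing counts

def add_sequencing_errors(counts, order, p_misread=0.02):
    """Move p_misread of each mutant order's reads to randomly chosen variants of the
    next-higher order (WT -> singles -> doubles -> ...)."""
    counts = np.asarray(counts, float).copy()
    for o in range(order.max()):
        donors, recipients = order == o, order == o + 1
        moved = rng.binomial(int(counts[donors].sum()), p_misread)
        counts[donors] -= moved * counts[donors] / counts[donors].sum()   # proportional removal
        counts[recipients] += rng.multinomial(moved, np.full(recipients.sum(), 1 / recipients.sum()))
    return counts

counts = np.array([5_000_000, 3000, 2500, 40, 30, 25])   # toy pool: WT, singles, doubles
order = np.array([0, 1, 1, 2, 2, 2])
bottlenecked = simulate_bottleneck(counts)
print(add_sequencing_errors(bottlenecked, order)[:3])
```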
Simulating a replicate bottleneck
The procedure was similar to the library bottleneck procedure described above, but a bottleneck factor was calculated on each replicate input sample independently, allowing for different variants to be present in each replicate.
"year": 2020,
"sha1": "9a6bbc4431beb895b73c0337625ee7878d9453da",
"oa_license": "CCBY",
"oa_url": "https://genomebiology.biomedcentral.com/track/pdf/10.1186/s13059-020-02091-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8d5bacde69c450fb6d28734659c634ca7fc8129",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science",
"Biology"
]
} |
Developing shared understandings of recovery and care: a qualitative study of women with eating disorders who resist therapeutic care
Background: This paper explores the differing perspectives of recovery and care of people with disordered eating. We consider the views of those who have not sought help for their disordered eating, or who have been given a diagnosis but have not engaged with health care services. Our aim is to demonstrate the importance of the cultural context of care and how this might shape people's perspectives of recovery and openness to receiving professional care.

Method: This study utilised a mixed methods approach of ethnographic fieldwork and psychological evaluation with 28 women from Adelaide, South Australia. Semi-structured interviews, observations, field notes and the Eating Disorder Examination were the primary forms of data collection. Data was analysed using thematic analysis.

Results & Discussion: Participants in our study described how their disordered eating afforded them safety and was consistent with cultural values concerning healthy eating and gendered bodies. Disordered eating was viewed as a form of self-care, in which people protect and 'take care' of themselves. These subjectively experienced understandings of care underlie eating disorder behaviours and provide an obstacle to seeking any form of treatment that might lead to recovery.

Conclusion: A shared understanding between patients and health professionals about the function of the eating disorder may avoid conflict and provide a pathway to treatment. These results suggest the construction of care by patients should not be taken for granted in therapeutic guidelines. A discussion considering how disordered eating practices are embedded in a matrix of care, health, eating and body practices may enhance the therapeutic relationship.
Background
It is well recognised that recovery is a contested term in the eating disorder literature and that 'there is no single definition or description of [this concept]' ([1] p4). A number of studies point to inconsistencies in the way criteria for recovery are used and defined in clinical trials, making it difficult to compare research and reach consensus [2,3]. Current clinical definitions of recovery incorporate the presence of minimal eating disorder psychopathology (i.e., within one standard deviation of the range of healthy populations), the absence of disordered eating behaviours, and achievement of a healthy body mass index [3].
There has been a movement towards recovery-oriented practice and service delivery [4]. Those with lived experience of a mental illness and support organisations have emphasised the recovery model primarily within a social justice movement aimed at restoring the human rights and full community inclusion of people with mental health issues [1]. Australia's National Framework for Recovery-Oriented Mental Health Services reflects this momentum, recognising the value of lived experience, the diffuse lines of recovery, and respecting clients' knowledge and choice alongside that of health professionals [1].
The recovery model is a central theme in the Royal Australian and New Zealand College of Psychiatrists Clinical Practice Guidelines for the Treatment of Eating Disorders [5]. It is intended to provide current evidence-based guidance on the assessment and clinical treatment of people with eating disorders in the Australian and New Zealand context [5]. The guidelines state 'care for people with eating disorders should be provided within a framework that supports the values of recovery-oriented care' ([5] p983). This document for the clinical management of eating disorders has been well received and represents the work of a collaboration of health care academics and professionals, and wide consultation with key stakeholders and the community. In their systematic review, Hay et al. point out that 'most people make a sustained recovery with treatment', including 'people with anorexia nervosa, where up to 40 % of adults (and a higher percentage of adolescents) will make a good five-year recovery, a further 40 % a partial recovery and those with persistent illness may yet benefit from supportive therapies' ([5] p979). Research indicates that 50 % of those with bulimia nervosa fully recover, and outcomes with treatment for binge eating disorder are similar [5].
However, Ben-Tovim et al.'s highly cited study on eating disorder outcomes in South Australia [6] concludes that 'many patients make a good recovery without accessing specialised treatments of any kind', including treatments such as lengthy admissions for weight gain or long-term outpatient care, pointing to the need to explore other contributing factors in people's lives ([6] p1257). The course of natural recovery may differ depending on the eating disorder, with one study finding that the 5-year prognosis for bulimia nervosa was poor, while the majority of those people with binge eating disorder were recovered [7]. Other studies have found that it is common for presentation for treatment to occur many years after onset of an eating disorder, and into late middle-age [5,8,9], highlighting that a large population of people with eating disorders are not engaged with treatment. These findings point to the diversity of recovery experiences, and to the importance of exploring qualitative experiences of disordered eating and recovery to understand what impedes and encourages recovery.
There are a growing number of qualitative studies on recovery from eating disorders [10-15] that focus on patient perspectives. Such studies also identify obstacles to recovery. For example, qualitative studies show that the pursuit of low weight addresses a sense of ineffectiveness, makes the person feel safe, helps communicate distress related to possible rejection and abandonment, and moderates the experience of negative emotions [11,13,16]. Bjork and Ahlstrom argue that qualitative approaches allow for different dimensions to be explored that would risk being lost in quantitative research. In their qualitative study of patients' experiences of recovery from chronic anorexia nervosa, Dawson et al. note that the accounts of the women they interviewed should be understood within their gendered and cultural context, even though the women 'did not greatly examine the sociocultural context from which their AN developed and recovery took place' [14].

The national framework on recovery acknowledges the subjective experiences of recovery beyond medical and psychiatric classification, with a focus on collaboration between people with disordered eating, carers and health professionals. However, the recovery model does not currently engage with people's cultural perceptions and experiences of eating and care, despite the aim of recovery-oriented treatment being to encourage people to seek professional health care and practice self-care. The national framework includes sections on 'understanding cultural idioms' and 'keeping diversity in mind' which focus on people from culturally and linguistically diverse backgrounds; Aboriginal and Torres Strait Islanders; refugees and asylum seekers; LGBTI people; and other minority groups [1]. In the eating disorder therapeutic guidelines, an exploration of culture is limited to the inclusion of the section on Indigenous care, a dimensional and culturally informed approach to diagnosis and treatment ([5] p983). Culture is not an external attribute or independent variable (such as one's ethnicity), but involves the myriad of taken-for-granted and embodied practices that give meaning to our everyday worlds. Anthropologists have long pointed out that culture is practiced through 'the shared … (implicit and explicit) values, ideas, concepts, and rules of behaviour that allow a social group to function and perpetuate itself' ([17] p345). All groups and societies (including researchers and health care professionals) have a number of co-existing, overlapping and competing subcultures [17]. Leading cultural psychiatrists (e.g., Kirmayer and Minas 2002) and the recent Lancet Commission on culture and health [18] support the view that culture is fundamental both to the causes and course of psychopathology and also to the effectiveness of systems of healing and health care. Population health literature also suggests that social factors, rather than medical interventions, are the main determinants of recovery from mental ill-health [19-21] (see also [22] for the concept of 'recovery capital').
Therefore, while the recovery-oriented framework for treatment promotes inclusive service delivery, it lacks an interrogation of the cultural contexts of recovery and care. The main aim of this paper is to explore the cultural contexts in which a person experiences an eating disorder and how these are critical to how they approach recovery. Healthy eating and lifestyle discourses act as ubiquitous cultural signposts for people wishing to maintain eating disorder practices ('watch what you eat', 'you are what you eat') and often compete with medical and psychiatric advice. Dutch anthropologist Annemarie Mol has written widely about eating, bodies and care practices in health care settings [23]. Her 'logic of care' is a useful framework to discuss how understandings of care and recovery might differ between people with eating disorders and practitioners, and why people might not seek therapeutic care in the initial phases of disordered eating or in the case of severe and enduring eating disorders [24]. In the RANZCP 'Clinical Practice Guidelines for the Treatment of Eating Disorders', 'meaningful engagement in therapy' is singled out as being 'a crucial component in all treatments for anorexia nervosa' ([5] p988). Expanding on what 'meaningful engagement' looks like in practice would be beneficial, and we argue a framework of care may be valuable for thinking through the different understandings of care held by patients and practitioners. Furthermore, the national framework on recovery offers insights which could be expanded to include a discussion on perspectives of care. These include the framework urging health professionals to be aware of 'a person's explanatory models of illness, distress and wellness' and 'the impact of the practitioner's own language, cultural beliefs and values on the therapeutic relationship and barriers to service' ([1] p14). A therapist's capacity to understand how a person with disordered eating may perceive their practices as a form of self-care and health [25,26] is an example of recognising an individual's explanatory model and personal agency.
A recently commissioned report found that of the one million Australians who suffer from an eating disorder, less than 30 % engage with treatment [27]. Research to date has mainly focused on people who engage with treatment services [28], but we know very little about the significant number of people who do not seek help, or delay seeking help for many years. This paper thus offers new insights into why people might not even consider accessing recovery pathways, or take many years to do so. The results reported in this study are part of a larger project that aimed to identify why people with eating disorders deny they have a problem, or delay and resist professional care. In working with a group who are significantly under-researched, we aimed to demonstrate how behaviours were rationalised as part of a cultural milieu in which care of one's self, demonstrated through careful eating and physical exercise, was culturally legitimated and widely sanctioned. In attending to how people understand their behaviours (as normal and 'not sick'), we hypothesized that this would illuminate important cultural contexts that underpin and potentially obfuscate a need to attend to recovery.
Participants and recruitment
Data collection occurred over 15 months (January 2013 to March 2014) in Adelaide, South Australia and involved 28 women, ranging in age from 19 to 52. The criteria for recruitment included women who were over 16 years of age and had not seen a health professional for disordered eating, had not been given an eating disorder diagnosis, or had been diagnosed but had delayed seeking treatment or did not wish to pursue treatment.
Participants were recruited through snowball sampling methods, with posters being placed around two metropolitan university campuses. The majority of posters were placed on the backs of toilet doors and posed questions such as 'Are you continually thinking about your food and your weight?' and 'Do you enjoy the feeling of not eating or excessive exercising?'. Privacy was crucial to the locations of the recruitment information due to the social stigma associated with eating disorders and the nature of this study seeking participants who have not previously disclosed their eating issues. This allowed the potential participant to seek out information on the study privately by emailing or phoning Author 1. As this was a difficult sample to recruit, participants were also recruited through mental health networks and advertising on social media websites such as the Facebook groups South Australian Body Esteem Activists and Supporting Eating Disorders for South Australia. Most of the recruited women were under 30 years of age, university students and of Anglo-Australian backgrounds.
Design
Through a mixed methods approach including ethnographic fieldwork and psychological evaluation, this study focused on examining the cultural contexts of women, food and disordered eating, with the aim of developing strategies for early intervention. The research team was multidisciplinary, and included a social scientist, a medical anthropologist skilled in gender analysis, and a psychiatrist and psychologist (both of whom specialised in eating disorders). In taking a multidisciplinary approach, the authors attempted to re-examine the experience of eating disorders not from a clinical or tertiary point of view, but from a mixed method approach framed by a sociocultural perspective. This approach led to a questioning of taken-for-granted concepts such as health, illness, eating and recovery, not only providing a platform for exploring how these categories are culturally constituted, but also providing a framework for questioning the categories that underpin therapeutic understandings of recovery and care.
Data collection
Data collection began with a pilot phase that included three women who partook in at least two semi-structured interviews, the Eating Disorder Examination (EDE) and a diary writing phase. The pilot interviews gave Authors 1 and 2 the chance to collaboratively reflect on the interview schedule and seek participants' feedback, adapting the study design where possible.
From the pilot phase the research team deduced that the most appropriate order for conducting the interviews was to begin with a semi-structured interview in the first meeting (allowing for rapport to be built with the participant). In the second meeting the EDE was administered in order to ascertain if participants might fit the diagnostic criteria of an eating disorder. The inclusion of the EDE was important to examine how participants responded to such evaluations, and to provide them with information on resources and services. EDE results were sent to a researcher trained in the use of the EDE (who analysed the data using SPSS and reported back to the team). The third meeting began with a debriefing session about the EDE, and then continued with the semi-structured interview. The interviews were guided by an interview schedule, which asked questions that explored what type of practices participants engaged in on a daily basis (i.e. how they ate, exercised, engaged in activities); if they considered their activities 'a problem'; what cultural 'norms' helped to support their eating and exercise activities; and if they had ever considered seeking help. Due to the exploratory nature of qualitative research, the interview schedule was flexible and follow-up interviews with each participant provided opportunities to explore their everyday lives in more detail. In total, sixty-eight semi-structured interviews took place in people's homes, in interview rooms at one of the universities, in cafes and in public places. In addition, recruitment for this study could be slow and some participants were non-responsive. Four of the women who partook in one or two interviews stopped responding to Author 1's efforts to schedule more interviews. In attempting to locate a population that does not identify as having 'a problem', faces social stigma, and is reluctant to come forward and engage with services, the recruitment and data collection processes highlight issues of accessibility and privacy with such a hard to reach group.
Semi-structured interviews and observation are key methods of data collection in ethnographic and qualitative approaches to research. Field notes taken during and after interviews were critical to data collection as they captured observations made during the research encounters, such as non-verbal cues, emotional reactions performed through bodily dispositions, appearances and the research setting, as well as reflexive notes on how the researcher may react to the participant's narrative (which adds to research rigour by accounting for researcher bias and how the researcher may impact the research process) [29].
As disordered eating is associated with secrecy and shame, participants were also given the opportunity to engage in a diary writing phase for 8 weeks, in which they wrote about the everyday moments, activities or events that supported their disordered eating behaviours, and their fears, pleasures and desires around food and their body. Collecting diaries from participants also gave Author 1 another opportunity to discuss the research experience with the participant.
Analysis
Grounded theory principles guided the research methods, coupled with thematic techniques of data collection and analysis [30,31]. Grounded theory is a qualitative approach which prioritises deriving analytic categories and themes directly from the data, not from pre-conceived concepts or hypotheses [30]. All interviews (including semi-structured and EDE interviews) were professionally transcribed, and field notes were written up following each interview. To become closer to the data, Author 1 transcribed the pilot interviews and open coded them within the same week. During the pilot phase of the study a list of codes was developed around certain themes, for example, 'help seeking', 'food', 'protection', 'ambivalence', to then form the basis of the thematic analysis of the interview and diary data. Following the established coding process of open, axial and selective coding, the interview transcripts and field note data were first open coded on the computer in a Word document, and then through the software programme NVivo by Author 1. Open coding involved reading the transcripts and diaries line by line to identify and develop any ideas, themes or issues from the data [29]. In the collaborative meetings that followed between Authors 1 and 2, axial (or secondary) codes were developed. This stage of data analysis involved making comparisons across the data, so that the final stage of selective coding could occur. Selective coding involved taking core themes and positioning these as key theoretical frameworks for analysis, and critically examining their concordance (or not) with the wider literature.
Participant descriptive
Of the 21 participants who consented to undertake the EDE (N = 21), the mean global EDE score was 3.48 (SD = 1.06), with a range from 0.92 to 5.57. The majority (90 %) met criteria for an eating disorder. Most (81 %) fell into the Eating Disorders Not Otherwise Specified (EDNOS) category, and 2 met the diagnostic criteria of anorexia nervosa (See Fig. 1). Of the total sample who participated in the semi-structured interviews (25), six had a previous eating disorder diagnosis (anorexia nervosa) from a health care professional, and had had varying, but limited contact with health providers, and no desire to recover (in clinical terms). The other nineteen participants had not previously sought professional help and had never received a diagnosis.
As shown in Table 1, participants self-reported when they believed their disordered eating began, and for most participants, issues had begun in childhood and adolescence. While experiences differed greatly, we report on two key findings (disordered eating as producing safety, and culturally dominant ideals of health) that are both understood as practices of care, thereby negating the need for therapeutic care.
Disordered eating is perceived as 'safe'
People's experiences of disordered eating were often described as 'safe'. Maintaining safe spaces, doing safe things (like having the same plate to eat from day after day), maintaining routines and eating 'safe foods' were common themes. Forty-five-year-old Morgan (who has experienced 30 years of eating disorders) said: 'It's safest not to have too much variety: more variety seems to make you hungrier or something. It's weird'. Another participant aged in her 50s who had lived with eating disorders for 30 years (and had enduring anorexia) described the safety and comfort that her practices afforded her: the ritualistic side of it where you feel safe if you're sticking to your normal, you know, that's why you do it … you feel safe if you know what to expect if you stay on this sort of a routine and a diet.
Twenty-year-old Lucy, who had developed disordered eating at age 12 and never sought help (and whose EDE revealed EDNOS), similarly described her experiences as 'kind of safe', yet recognised the contradictory nature of safety and suffering that she endures: There is kind of two sides to it I guess, it's like comforting but it's also exhausting at the same time. Michelle (aged 27), who had swung between a diagnosis of anorexia and EDNOS for more than 10 years, stated that the only time she feels 'okay' about herself is when she is 'sticking to [her] routines'. Her routines involve only eating safe foods ('lettuce and stuff like that') in order to create safety: it is very, very much a safe space and almost like a, I guess being invincible almost, like nothing can touch me while I'm here, like I'm managing to do this and I'm managing to stick through it all. So yeah, "What can really defeat me if I'm living on nothing?" if that makes any sense at all… This strong sense of safety (which was sometimes described as comfort, control or familiarity) was contrasted with the fear of seeking treatment. Some said they were 'petrified' of seeing a psychiatrist, because 'only crazy people see psychiatrists'. Others said 'I don't think my eating is a problem' and 'it's not an illness … it's only a food thing'. Charlotte (who had travelled to the US for treatment) explained that going into treatment was anxiety provoking as it was an exercise in 'fattening up', where the primary focus was on weight gain as an indicator of wellness.
I refused to go somewhere where I would be monitored at that level. I was over that, I found it humiliating, I wasn't going to go there and they do the whole you know fatten you up, kick you out type thing.
Because her eating disorder was such a safe and familiar space for over 17 years, Charlotte was unsure if recovery was even possible: 'I'm conflicted because I know that you can recover to a point, you know after a long journey … but then I also know or discovered that you can be almost ED free for a number of years and think it's totally behind you, and then something happens and it's old and familiar'.
Clinicians and therapists will be familiar with this characterisation of eating disorders as 'safe spaces'. Ethnographic work by Author 2 has also highlighted the ways in which people describe anorexia as a 'safety net' ([32]).
Recovering in a culture where an obsession with thinness and dieting is the norm
The women in our study highlighted how cultural understandings of healthy eating and exercise (the constant bombardment of cultural imagery that thin is healthy and self-discipline is morally superior) made the impetus towards recovery appear somewhat contradictory and defeating. Women remain disproportionately diagnosed with eating disorders, and cultural preferences for thin, weight-managed female bodies are deeply embedded and valued in most western cultures. This bodywork, as Hardin [34] and others note, is highly gendered and informs everyday cultural practices around food and eating. Charlotte explained during an interview: 'I found at one point when I was doing really well that I was recovered to the point where I had a healthier relationship with food and body than every other normal woman around me. And that was really disturbing. And really challenging'. She constantly struggled with all the information about what foods one should and shouldn't eat, and the imperative to take care of oneself through making the right, healthy choices: I kept going back to the pantry, trying to find something that fit the criteria that would be okay to eat. And I could discount everything in the pantry for one reason or another, based on antioxidants, or fibre or glycaemic index, or the level of refinement or preservatives, or colourings or sugars or, you know? There wasn't a single thing in that pantry that was okay, if I put all of our society's messages and health professionals' advice together about what's okay and what's healthy to eat.
Rochelle demonstrated the contradictions imbued in being healthy and 'normal' , revealing that recovery does not occur in a vacuum but rather in a particular gendered and cultural context. She said: There's so much health promotion but how much of its healthy, it's difficult to say. I once read that recovery isn't like going into a healthy lifestyle and being able to eat foods with fat, having that anxiety and things like that and when you look at Michelle Bridges 2 and all those 12 week things, your whole day is still centred around food and I've tried to do those kind of things but it's like I still get the anxiety.
Several scholars have noted the ways in which people with eating disorders hide their practices within normative cultural ideals around food and bodies [25,34]. This might be through excuses about food allergies, special diets or intolerances, and the pursuit of health enhancing activities and self-discipline (such as wearing Fitbits) that are culturally valued and understood to demonstrate moral virtue. During an interview Sarah joked how easy it was to continue her excessive exercise routine in a 24-h gym where no one looked sideways at her because 'most of the people there are like high risk for heart attacks, on steroids and things'. The acceptance of constantly working on and pushing one's body to extremes was normalised and accepted as part of the visible performance of bodily discipline and virtue.
In a time when fatness is stigmatised and associated with ill health and deviance [35][36][37], LaMarre and Rice suggest that 'adding body size to the recovery equation highlights difficulties with following prescriptions for recovery in a society that positions weight gain as wholly negative' ([38] p138). Participants in Malson et al.'s study pointed to the 'culturally constituted tension between, on the one hand, treatment goals of reducing weight concerns and, on the other, culturally normative idealisations of slenderness and the near-ubiquity outside of the eating disorder ward of body image concerns' ([39] p29). Moreover, setting goals towards weight gain or target weights, while obviously vital to survival and cognitive functioning, is seen as antithetical to current cultural discourses about weight reduction as taking care of one's health. As Tamara explained, 'I think it can be even more painful when you are weight restored but people don't understand that you're still suffering'. These examples demonstrate how important it is to understand recovery in its cultural context, including how disordered eating practices are intimately entangled in gendered practices of care and healthy lifestyles.
Discussion
To maintain disordered eating, participants engaged in high levels of self-discipline, and found pleasure in the perceived safety that starving, bingeing and purging afforded. Participants felt protected, and in doing so, they took care of themselves by not having to care, not having to feel. Unlike physical illnesses, disordered eating was described as serving a purpose: 'Like if you break your arm you know something is wrong whereas when you have an eating disorder you're doing it to escape from something else'. This escape was often a distancing from gendered trauma, of sexual abuse and violence. For Sarah, childhood abuse and neglect led to 'playing with food' as a way to 'distract' and 'switch everything off'. Starving was thus positioned as a way to keep her safe from 'dangerous' circumstances in which 'someone might have an interest in you that is sort of not what you want'.
Understanding people's experiences of how disordered eating is a form of care is key to why people may not come forward to engage in professional care. A critical exploration of the multiple meanings of care, the daily practices of care giving and the experiences of receiving care, may provide insight into the tensions discussed above. For participants in our study, good care was often talked about and formed a rationalisation for not seeking therapeutic care [25]. Care was being on a strict raw food vegan diet to prevent obesity. Care was bingeing on junk food as a reward for weeks and months of extreme restricting. Care was only consuming a liquid diet because solid food brought on a desire to binge. Care was starving and shrinking the body to repel unwanted sexual attention. Eating disorders were practised through careful attention to changing bodies, surroundings, tastes, textures, desires, hunger and relationships. For the therapist, carer, family member and friend, Winance argues that 'to care is to be sensitive to the attachments that support people, attachments which are sources of both constraints and opportunities, which are openings and closures' ([40] p110). Being attentive to the way people experience different modalities of care through their disordered eating practices presents possibilities for therapists to broaden their practices of good care and nurture a therapeutic relationship.
If we take Lavis's contention that 'caring is cyclical as care of self necessarily instigates caring for [the eating disorder] so that it may continue to 'look after you' ( [26] p104), we can begin to understand 'the sense of being cared for by the illness' ( [16] p71). In taking this insightful premise, the disordered eating becomes not just a problem of the individual patient, but part and parcel of one's social world. Thus wider cultural factors are brought to bear, and can be used to broaden current understandings of eating disorders beyond 'egosyntonic disorders' ( [41] p845).
It is critical for the development of a good therapeutic relationship to broaden our understanding from one that locates obstacles to recovery within the recipients of treatment, which can portray the client as "'hostile', 'oppositional', 'uncooperative', and 'impervious to treatment'" ([39] p26), to an understanding of people's experiences of self-care and health. Boughtwood and Halse argue 'tension between patients and clinicians over treatment can undermine the therapeutic relationship, which is the social contract between patient and clinician to communicate and collaborate on their shared goals and objectives for treatment' ([42] p84). Furthermore, they point out that the literature on the therapeutic relationship is written largely from the perspective and goals of researchers and clinicians with the aim of improving treatment and identifying variables affecting treatment outcomes [42]. The voices and experiences of those living with disordered eating in clinical and research settings may therefore offer valuable insight into why, from their perspective, the therapeutic relationship and treatment are failing.
Mol's logic of care proposes that 'patient choice' and 'good care' often clash in health care environments, and instead of pitting choice and care against each other, Mol views care practices as attending to 'the unpredictabilities of bodies with disease', rather than a battle for control ([24] p14). It could be said that it is the daily practices of 'good care' that become important to strive for in cases of severe and enduring eating disorders rather than expectations of medical recovery. This is somewhat acknowledged in the proposal for a harm minimisation approach which centres on improving quality of life and reducing distress rather than focusing on symptom reduction [14,43].
In his keynote presentation to the Australian and New Zealand Academy for Eating Disorders 2015 conference, Ivan Eisler called for 'a shift from control to caring'. He discussed how within Family Based Therapy there needs to be a focus on getting parents and carers to 'care better' instead of focusing on taking control of their children's eating. This highlights how often in eating disorder institutions good care has come to signify control of patients; control of their bodies, consumption, spaces and routines. Boughtwood and Halse argue such approaches define patients by their eating disorder behaviours and that it would create greater understanding in the therapeutic relationship if instead clinicians attended to individuals' 'creative negotiations of hospital practices' and assisted patients 'in utilising their creativity to confront their illness in positive ways' ([42] p92). Therefore, it may be useful to approach the actions of people with disordered eating through a prism of care, rather than an escalation of control measures when patients present as 'difficult'. This is consistent with recent research from inpatient settings in Montreal which illustrates that autonomous motivation was a significant predictor of change in severity of eating symptoms and attitudes, such that patients with higher pre-treatment levels of autonomous motivation showed larger post-treatment reductions on these indices [44]. No such effects were associated with controlled motivation. It is also consistent with the seminal work of Touyz and colleagues, which showed that a lenient program for anorexia nervosa did not have poorer results than a strict operant program [45].
If the focus is on controlling the patient, the body or the symptoms, greater emphasis will be placed on the failures of the person or clinician involved. Hay et al. argue 'because patients with anorexia nervosa are extremely ambivalent about therapy and have starvation related cognitive deficits, current change-oriented treatments may actually be counterproductive and give patients another experience of failure rather than being helpful' ([46] p1142). Mol conceptualised the logic of care as a way of practising and viewing care that 'does not impose guilt, but calls for tenacity' and 'for a sticky combination of adaptability and perseverance' ([24] p91). Such an approach to care giving may be useful for those with severe and enduring experiences of disordered eating.
Conclusion
This paper has explored how differing perspectives of care hinder shared understandings of recovery. The women in our study highlight how dominant models of recovery take for granted and overlook the ways in which the safe spaces of disordered eating and attention to healthy lifestyle mantras are, in themselves, a form of care. These differences become a barrier to seeking therapeutic care and recovery. In addition, the women's narratives demonstrate how recovery is tied to subjective experiences and embedded in one's cultural environment, not just treatment of medical and psychiatric symptoms. It is important to acknowledge that for people with disordered eating, their practices can be seen through a lens of self-care, in which recovery thus becomes positioned as unnecessary. Our work confirms the findings of Lavis's UK study with women diagnosed with anorexia, in which she found that 'although self-starvation may be clinically framed as an expression of a lack of self-care, it emerges from informants' narratives as a modality of self-care that is simultaneously a response and precarious solution to pain' ([16] p68).
Endnotes
1 At a later stage in the project we conducted a focus group with a small number of women who considered themselves to be in recovery. The purpose of this focus group was to ask the women about strategies for early intervention. We do not report on the focus group findings in this paper. 2 Michelle Bridges is a personal trainer on the Australian version of 'The Biggest Loser' and has various weight loss products (including a 12 week body transformation program). | 2017-06-27T20:04:19.169Z | 2016-12-01T00:00:00.000 | {
"year": 2016,
"sha1": "28d6565a72b02bd6df6fdc110ffcc929b0849ea8",
"oa_license": "CCBY",
"oa_url": "https://jeatdisord.biomedcentral.com/track/pdf/10.1186/s40337-016-0114-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28d6565a72b02bd6df6fdc110ffcc929b0849ea8",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219096590 | pes2o/s2orc | v3-fos-license | A Multivariate Signature Based On Block Matrix Multiplication
An oil and vinegar scheme is a signature scheme based on multivariate quadratic polynomials over finite fields. The system of polynomials contains n variables, divided into two groups: v vinegar variables and o oil variables. The scheme is called balanced (OV) or unbalanced (UOV), depending on whether o = v or not, respectively. These schemes are very fast and require modest computational resources, which makes them ideal for low-cost devices such as smart cards. However, the OV scheme has already been proven to be insecure and the UOV scheme has been proven to be very vulnerable for many parameter choices. In this paper, we propose a new multivariate public key signature whose central map consists of a set of polynomials obtained from the multiplication of block matrices. Our construction is motivated by the design of the Simple Matrix Scheme for Encryption and the UOV scheme. We show that it is secure against the Separation Method, which can be used to attack the UOV scheme, and against the Rank Attack, which is one of the deadliest attacks against multivariate public key cryptosystems. Some theoretical results on matrices with polynomial entries are also given, to support the construction of the scheme.
Introduction
Multivariate public key cryptosystems (MPKCs) were first introduced in 1988 by Matsumoto and Imai [1] with their scheme, called C* or MI. The public key of an MPKC is a system of multivariate polynomials, mostly quadratic, over a finite field. In general, the structure of an MPKC can be described as follows. Let k be a finite field with q elements. A public key is a map F̄ : k^n → k^m, which is constructed as F̄ = L_1 ∘ F ∘ L_2, where L_1 and L_2 are two random invertible affine transformations over k^m and k^n, respectively. The central map F : k^n → k^m is a non-linear multivariate polynomial map which has the property of being easily invertible (i.e., computationally). The key to building a good MPKC is to find a good polynomial system F which makes the cryptosystem secure.
The security of an MPKC is based on the fact that solving a set of multivariate polynomial equations over a finite field, in general, has been proven to be an NP-hard problem [2]. However, this does not guarantee that MPKCs are secure. Nevertheless, this property makes the family of MPKCs a good candidate for the Post Quantum Cryptography (PQC) era, if well designed. On the other hand, due to Shor's algorithm [3], the well-known number-theoretic cryptosystems (e.g., RSA, ECC, and the Diffie-Hellman key exchange scheme) have been proven to be insecure if a quantum computer is built.
These facts have inspired many researchers to become involved in the area of MPKCs, which underwent very fast development in the late 1990s. Since then, there have been many attempts to build MPKCs. Unfortunately, most of the existing MPKCs have problems, due to the facts that randomness has not been well-used and that cryptanalysts usually exploit the structure of the family of polynomials involved to attack the MPKCs (see [4,5,6,7,8,1,1,1,1]). Direct attacks using algorithms to solve the multivariate systems are also often used to attack MPKCs [1, 1, 1, 1, 1, 1]. As mentioned in [1], the deadliest attacks for MPKCs are Rank attacks [8], which consist of finding some quadratic forms with low rank associated with the central map. Even if the parameters are carefully chosen, there still exist few successful designs, such as the Rainbow scheme proposed by Ding and Schmidt [2,2], the Simple Matrix Scheme for Encryption [1], and the HFEv − [2, 2, 2]. Indeed, this work was mostly inspired by the constructions in [1,2]. We use the multiplication of block matrices to design our new proposed scheme. The arguments that prove its security are very similar to those used in [1,2].
The rest of this paper is organized as follows. We recall the description of a UOV scheme from [2,2] in Section 2. In Section 3, we introduce some theoretical groundwork concerning matrices with polynomial entries. These results support the construction of the new proposed scheme, which is introduced in the second part of Section 3. Section 4 discusses the security of our scheme and Section 5 concludes the paper.
Preliminaries
The initial Oil and Vinegar scheme was defeated with the separation method attack. However, a huge number of multivariate schemes have been proven to be vulnerable to the MinRank attack. In this section, we recall the descriptions of these two algebraic attacks. A short description of the UOV scheme is also given.
Multivariate Public Key Cryptosystems and UOV Scheme
Multivariate Public Key Cryptosystems
The main characteristic of a multivariate public-key cryptosystem is that its public key consists of a set of non-linear algebraic polynomials p_1, ..., p_m ∈ k[x_1, ..., x_n]. To encrypt a message or to verify a signature, one needs only to evaluate this set of polynomials at a given point (a_1, ..., a_n). Decryption and signing are done with the help of the private key by solving the system p_1(z_1, ..., z_n) = 0, ..., p_m(z_1, ..., z_n) = 0. (1) However, without the private key, solving the system should be impossible (or, at least, very hard) to ensure the security of the cryptosystem. To build a secure system, we start by very carefully choosing a trapdoor f = (f_1, ..., f_m) which is easy to solve. That is, given y = (y_1, ..., y_m) ∈ k^m, we have an efficient method for computing the solutions of f(z_1, ..., z_n) = y. Then, denoting by GL_i(k) the set of all i × i invertible matrices with entries in k, we choose (L_1, L_2) ∈ GL_m(k) × GL_n(k) and compose f with L_1 and L_2 from the left and right, respectively, to obtain p(x) = L_1(f(L_2(x))), where x = (x_1, ..., x_n). In some cases, L_1 or L_2 may be the identity of GL_m(k) or GL_n(k), respectively. The private key of these systems consists of (L_1, L_2) ∈ GL_m(k) × GL_n(k) and the polynomials f_1, ..., f_m, while the public key consists of the field k and the set of algebraic polynomials p = (p_1(x_1, ..., x_n), ..., p_m(x_1, ..., x_n)) ∈ k[x_1, ..., x_n]^m mentioned above.
Oil and Vinegar Polynomials
In this subsection, we give a quick description of the Unbalanced Oil and Vinegar (UOV) scheme and its known cryptanalysis, for illustrative purposes. The basic building block for an OV or UOV scheme is the Oil and Vinegar polynomial.
An Oil and Vinegar polynomial is a quadratic multivariate polynomial with o + v = n variables, where o represents the number of oil variables and v the number of vinegar variables. The non-linear terms appear only in the following two cases: between vinegar variables, or with one vinegar variable and one oil variable. In other words, there is no quadratic term with oil variables only. More precisely, let k be a finite field with q elements, x_1, x_2, ..., x_o be the o oil variables, and x̂_1, x̂_2, ..., x̂_v be the v vinegar variables. An Oil and Vinegar polynomial is any (total degree two) polynomial f ∈ k[x_1, ..., x_o, x̂_1, ..., x̂_v] of the form f = Σ_{i,j} a_{ij} x̂_i x̂_j + Σ_{i,j} b_{ij} x̂_i x_j + Σ_i c_i x_i + Σ_j d_j x̂_j + e, where a_{ij}, b_{ij}, c_i, d_j, e ∈ k. The trapdoor for an OV or UOV scheme is a set of Oil and Vinegar polynomial maps F = (f_1, ..., f_o) : k^n → k^o, where the public key is the map p = F ∘ L_2 : k^n → k^o. In the context described above, L_1 is the identity of GL_o(k) and composition by L_2 ∈ GL_n(k) is carried out to mix the oil and vinegar variables. The private key is L_2 and the central map is F. For the OV and UOV schemes, there is no need to use a second linear transformation L_1. These schemes are designed only for the signature.
To sign a message y = (y_1, y_2, ..., y_o), we need to find a vector w = (w_1, w_2, ..., w_n) such that p(w) = y. To do so, we first choose v random values for the vinegar variables x̂_1, x̂_2, ..., x̂_v and substitute them into the system to obtain o linear equations in the o oil variables x_1, x_2, ..., x_o. This linear system has a high probability of having a solution. If it does not, we change the values of the vinegar variables x̂_1, x̂_2, ..., x̂_v and try again until a solution in k^o is found. Then, we apply L_2^{-1} ∈ GL_n(k) to obtain the signature w.
To verify whether w is a signature for y, it suffices to check that p(w) = y.
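To make the signing procedure just described more concrete, the following is a minimal toy sketch in Python with illustrative parameters (a small prime field, o = 3 oil and v = 6 vinegar variables). It is a rough stand-in, not a secure or complete implementation: the mixing transformation L_2 is omitted for brevity, so it signs with the central map directly, and all numerical choices are assumptions made only for demonstration.

```python
# Minimal sketch (not a secure implementation): signing with a toy Oil and
# Vinegar central map over the prime field GF(31).
import numpy as np

P = 31          # small prime modulus for the toy field GF(31)
o, v = 3, 6     # numbers of oil and vinegar variables
n = o + v
rng = np.random.default_rng(0)

# Each central polynomial f_k is a quadratic form Q_k on (vinegar | oil)
# coordinates; the oil x oil block is forced to zero, so fixing the vinegar
# values leaves a system that is linear in the oil variables.
def random_ov_form():
    Q = rng.integers(0, P, size=(n, n))
    Q[v:, v:] = 0                      # no oil-oil quadratic terms
    return Q % P

central = [random_ov_form() for _ in range(o)]

def solve_mod_p(A, b, p):
    """Gaussian elimination over GF(p); assumes A is square and invertible."""
    A = A.copy() % p
    b = b.copy() % p
    m = len(b)
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r, col] % p != 0)
        A[[col, piv]], b[[col, piv]] = A[[piv, col]], b[[piv, col]]
        inv = pow(int(A[col, col]), -1, p)
        A[col] = (A[col] * inv) % p
        b[col] = (b[col] * inv) % p
        for r in range(m):
            if r != col and A[r, col]:
                f = A[r, col]
                A[r] = (A[r] - f * A[col]) % p
                b[r] = (b[r] - f * b[col]) % p
    return b

def sign(message, max_tries=50):
    """Find x = (vinegar | oil) with f_k(x) = message_k for every k."""
    y = np.array(message) % P
    for _ in range(max_tries):
        xv = rng.integers(0, P, size=v)          # random vinegar assignment
        A = np.zeros((o, o), dtype=np.int64)     # linear system in oil vars
        c = np.zeros(o, dtype=np.int64)
        for k, Q in enumerate(central):
            c[k] = xv @ Q[:v, :v] @ xv % P                      # vinegar-only part
            A[k] = (xv @ Q[:v, v:] + xv @ Q[v:, :v].T) % P      # vinegar-oil cross terms
        try:
            xo = solve_mod_p(A, (y - c) % P, P)
        except StopIteration:                    # singular system: retry vinegar
            continue
        return np.concatenate([xv, xo]) % P
    raise RuntimeError("no solution found")

sig = sign([5, 17, 2])
print([int(sig @ Q @ sig % P) for Q in central])   # should print [5, 17, 2]
```

Fixing the vinegar values turns each quadratic central polynomial into an affine function of the oil variables, which is why the resulting system can be solved by ordinary Gaussian elimination modulo the field characteristic.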
Attacks against the UOV Scheme
In this subsection, we present two of the most well-known attacks against the UOV scheme; namely, the Separation Method attack and the MinRank attack, which was performed for the first time on the HFE scheme.
Separation Method Attack
The separation attack was introduced by Kipnis and Shamir [8], in order to defeat the original Oil and Vinegar scheme. It has been extended to many other systems containing two different sets of variables. The idea consists of finding an invariant subspace of the subspace spanned by the n polynomials of the public key. This invariant subspace represents the Oil subspace and its complement is the Vinegar subspace. Once this separation is done, one can easily forge arbitrary signatures.
MinRank attack
As mentioned earlier, one of the deadliest attacks against multivariate public key cryptosystems is the MinRank attack, which is an attack based on the MinRank problem. This problem can be formulated as follows: Given positive integers N, n, r with r ≤ n and N matrices M_1, ..., M_N of dimension n × n, find a non-trivial linear combination M of M_1, M_2, ..., M_N such that Rank(M) ≤ r. If r = n − 1, the MinRank problem has been proven to be NP-complete. However, for small r, it may be easily solvable. Therefore, all MPKCs which have the property that some quadratic form associated to their central maps has a low rank are vulnerable to this attack. We give an illustration by describing the MinRank attack on the HFE scheme [2]. The attack was first performed by Kipnis and Shamir [8], who showed that the security of HFE can be reduced to a MinRank problem.
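As a deliberately tiny illustration of the problem statement above, the sketch below brute-forces all non-trivial GF(2) combinations of a few random matrices and reports one whose rank is at most r. The parameters are toy values chosen only to make exhaustive search feasible, which it is not for real instances.

```python
# Tiny brute-force illustration of the MinRank problem over GF(2).
import itertools
import numpy as np

def rank_gf2(M):
    """Row-reduce a 0/1 matrix modulo 2 and return its rank."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r, c]), None)
        if piv is None:
            continue
        M[[rank, piv]] = M[[piv, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

rng = np.random.default_rng(3)
n, N, r = 5, 3, 2
mats = [rng.integers(0, 2, (n, n)) for _ in range(N)]

for coeffs in itertools.product(range(2), repeat=N):
    if not any(coeffs):
        continue                                  # skip the trivial combination
    M = sum(c * A for c, A in zip(coeffs, mats)) % 2
    if rank_gf2(M) <= r:
        print("low-rank combination found:", coeffs)
        break
else:
    print("no combination of rank <=", r)
```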
The HFE Scheme
The HFE cryptosystem was proposed by Jacques Patarin in [2]. It can be described as follows: Let q = p^e, where p is a prime number and e ≥ 1. Let K be an extension of degree n of the finite field k = F_q. Clearly, K ≅ k^n.
Let φ : K → k^n be a k-linear isomorphism map between the finite field K and the n-dimensional vector space k^n. The central map of HFE is a univariate polynomial F(x) of the form F(x) = Σ_{0 ≤ i,j ≤ r} α_{ij} x^{q^i + q^j} + Σ_{0 ≤ i ≤ r} β_i x^{q^i} + γ, where α_{ij}, β_i, γ ∈ K and r is a small constant, chosen in a way such that F(x) can be efficiently inverted. The public key is given by P = T ∘ φ ∘ F ∘ φ^{-1} ∘ S, where T : k^n → k^n and S : k^n → k^n are two invertible linear transformations and the private key consists of T, F, and S.
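To make the "univariate polynomial over the extension field" idea tangible, here is a small Python sketch that implements GF(2^4) arithmetic by hand and evaluates an HFE-style map with exponents of the form q^i + q^j. The modulus, coefficients and exponent choices are illustrative assumptions only and do not correspond to any real parameter set.

```python
# Toy GF(2^4) arithmetic and an HFE-style central map.
IRRED = 0b10011          # x^4 + x + 1, irreducible over GF(2)
Q, N = 2, 4              # base field size and extension degree (K = GF(2^4))

def gf_mul(a, b):
    """Multiply two GF(2^4) elements represented as 4-bit integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= IRRED
    return r

def gf_pow(a, e):
    """Square-and-multiply exponentiation in GF(2^4)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

# F(x) = sum a_ij x^(q^i + q^j) + sum b_i x^(q^i) + c with small exponents;
# since x -> x^(q^i) is linear over the base field, the map is quadratic
# when rewritten in base-field coordinates.
a = {(0, 1): 7, (1, 2): 3}
b = {0: 5, 2: 9}
c = 6

def central_map(x):
    y = c
    for (i, j), coef in a.items():
        y ^= gf_mul(coef, gf_pow(x, Q**i + Q**j))
    for i, coef in b.items():
        y ^= gf_mul(coef, gf_pow(x, Q**i))
    return y

print([central_map(x) for x in range(2**N)])   # the map evaluated on all of GF(2^4)
```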
MinRank Attack on HFE
In [8], Kipnis and Shamir showed that an attacker can ignore lower degree monomials and still be able to recover the key. Furthermore, the public key P and the transformations S, T, T^{-1} satisfy the following theorem.
Theorem 1. For the maps S, T, T^{-1} given in the HFE, there exist maps G*, S*, T*, T*^{-1} over K (the representations of P, S, T, T^{-1} under the identification φ) such that G*(x) = T*(F(S*(x))). Moreover, G*(x) can be expressed in the form G*(x) = Σ_{i,j} g_{ij} x^{q^i + q^j}. Writing T*^{-1}(x) = Σ_{k=0}^{n−1} t_k x^{q^k}, the theorem implies the identity Σ_{k=0}^{n−1} t_k G*_k = W F W^t, where F = [α_{ij}] over K, and G*_k and W are two matrices over K whose respective (i, j) entries are g_{i−k,j−k}^{q^k} and s_{i−j}^{q^i}, where i − k, j − k, and i − j are computed modulo n.
As the rank of W F W^t is no more than r, recovering t_0, t_1, ..., t_{n−1} can be reduced to solving a MinRank problem; that is, finding t_0, t_1, ..., t_{n−1} such that Rank(Σ_{k=0}^{n−1} t_k G*_k) ≤ r. Once the values t_0, t_1, ..., t_{n−1} are found, T and S can be easily computed. Therefore, the key point in the HFE attack is to solve the MinRank problem.
Just as for the HFE, many other multivariate schemes have been proven to be insecure using the MinRank attack. In [1], Billet and Gilbert used the MinRank attack against the Rainbow scheme [19], a layer-based variant of the UOV scheme, with the parameters (2^8, 6, 6, 5, 5, 11).
Our New Scheme
In this section, we describe the proposed scheme. As stated in the introduction, we were mainly inspired by the construction of the Simple Matrix Scheme [1] and the Unbalanced Oil Vinegar Signature Scheme [2, 2] to conduct this work. Some theoretical results needed in the description are also presented.
Theoretical Groundwork
We start with the following theorem. It plays a crucial role in the signing process.
Theorem 2. Let k be a finite field and denote by k* the set of non-zero elements of k. Let A = (a_{ij})_{u×u} be an invertible u × u matrix with a_{ij} ∈ k and C any (s−u) × u matrix with entries in k. Let B be a u × (s−u) matrix whose entries are random multivariate linear polynomials.
Then, the block matrix M is invertible and the entries of M^{-1} are multivariate affine linear polynomials with coefficients in k.
Proof. Write M = (A B; C D) and assume that there exist matrices U, V, X, and Y of dimension u × u, (s − u) × (s − u), (s − u) × u, and u × (s − u), respectively, satisfying M^{-1} = (U Y; X V). Then, we have M M^{-1} = (AU + BX, AY + BV; CU + DX, CY + DV) = (I_u, 0; 0, I_{s−u}). By equating the two forms, we obtain AU + BX = I_u, AY + BV = 0, CU + DX = 0, and CY + DV = I_{s−u}, which can be inverted, as A^{-1} and (D − CA^{-1}B) are invertible. We have U = A^{-1} + A^{-1}B(D − CA^{-1}B)^{-1}CA^{-1}, Y = −A^{-1}B(D − CA^{-1}B)^{-1}, X = −(D − CA^{-1}B)^{-1}CA^{-1}, and V = (D − CA^{-1}B)^{-1}. The fact that the entries of M^{-1} are multivariate affine linear polynomials with coefficients in k follows directly from the entries of the matrices A, B, C, and D.
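As a quick numerical sanity check of the block-inverse (Schur-complement) expression used in this proof, the sketch below verifies it for generic real blocks rather than the polynomial entries of the theorem; the sizes and random blocks are purely illustrative.

```python
# Numerical check of the 2x2 block-inverse formula built on the Schur complement.
import numpy as np

rng = np.random.default_rng(1)
u, w = 3, 2
A = rng.standard_normal((u, u))
B = rng.standard_normal((u, w))
C = rng.standard_normal((w, u))
D = rng.standard_normal((w, w))

M = np.block([[A, B], [C, D]])
Ai = np.linalg.inv(A)
S = D - C @ Ai @ B                    # Schur complement of A in M
Si = np.linalg.inv(S)

Minv = np.block([
    [Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
    [-Si @ C @ Ai,               Si          ],
])
print(np.allclose(M @ Minv, np.eye(u + w)))   # True: the formula inverts M
```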
The matrix in Theorem 2 will play a crucial role in the design of our new scheme. As we will see in the description of the scheme, the polynomials in the public key are the entries of a matrix obtained by multiplying M with another matrix whose entries are random polynomials. The matrix M^{-1} will be used in the signing process. This will help to create a system of linear equations whose solution is the signature x of a given document y.
Description of the New Scheme
Let n, m, s ∈ N be integers satisfying m = s^2 and 4m/3 ≤ n ≤ 2m. For i ∈ N, let k^i denote the set of all i-tuples of elements of k, and let (x_1, x_2, ..., x_n) ∈ k^n and (y_1, y_2, ..., y_m) ∈ k^m. The polynomial ring in n variables over k is denoted by k[x_1, ..., x_n]. Let L_1 : k^n → k^n and L_2 : k^m → k^m be two linear transformations, L_1(x) = L_1 x and L_2(y) = L_2 y, where L_1 is an n × n matrix and L_2 is an m × m matrix with entries in k, x = (x_1, x_2, ..., x_n)^t, y = (y_1, y_2, ..., y_m)^t, and t denotes matrix transposition.
The Central map
The central map of the new scheme is obtained after performing a series of operations on matrices with polynomial entries. The idea is inspired by the construction of the Simple Matrix Scheme for Encryption, which was the first in this new generation of multivariate polynomial cryptosystems which use matrix multiplication to generate a public key.
For i = 1, ..., s^2, let p_i, p̄_i ∈ k[x_1, ..., x_n] be 2s^2 random affine polynomials. Define M = (A B; C D) to be a block matrix such that A is invertible and only one of the matrices B and C has linear polynomial entries and the other one has scalar entries.
The central map of the scheme is given by the entries of the matrix H = M P, where P is an s × s matrix whose entries are quadratic polynomials obtained as products of the random affine polynomials p_i and p̄_i. The public key map is F̄ = L_2 ∘ H ∘ L_1, where L_1 : k^n → k^n and L_2 : k^m → k^m are as defined above, and f̄_i ∈ k[x_1, ..., x_n], i = 1, ..., m, are the resulting m multivariate polynomials of degree three. The secret key and the public key are given by: Secret Key: The secret key is comprised of the following two parts: 1) The invertible linear transformations L_1, L_2.
2) The matrices M and P .
Public Key: The public key is comprised of the following two parts: 1) The field k, including the additive and multiplicative structure; 2) The map F̄ or, equivalently, its m total degree three components f̄_1, ..., f̄_m. Signing: A signer will sign a message y_1, ..., y_m with x_1, ..., x_n satisfying (y_1, y_2, ..., y_m) = F̄(x_1, x_2, ..., x_n).
As H = M P, we have P = M^{-1} H. Notice that M is an invertible matrix with polynomial entries and, so, Theorem 2 can be used to find its inverse.
Some Remarks on the signing process: • The matrix M used in the description of the new scheme satisfies the conditions of Theorem 2. Therefore, the existence of the inverse M^{-1} is guaranteed by the theorem, and the entries of M^{-1} are all multivariate affine linear polynomials with coefficients in k. • Step 3 is necessary in case some of the p_i are not linearly independent.
In such a case, there will be no solution and the values for the p_i should be changed.
After a few tries, a solution will be found: the probability of obtaining at least one solution is very high, as the probability of an n × n matrix over F_q being invertible is (1 − 1/q)(1 − 1/q^2) · · · (1 − 1/q^n) (see [2]). • The relation between m, n, and s may be ignored and the values may be chosen arbitrarily, in general.
• Contrary to the decryption process in [1], there is no failure in the signing process.
The following toy example is based on Theorem 2 and uses a B with linear polynomial entries.
Security Analysis
Further analysis of the security, as well as the choice of parameters and the efficiency of our new scheme, will be left for future work. We give, here, some observations that make us believe that our new proposed scheme has good security, if the parameters are carefully chosen.
In the separation attacks introduced by Kipnis and Shamir [8], the Oil variables and Vinegar variables must be separated to forge arbitrary signatures. Its improvement by Kipnis, Patarin, and Goubin to attack the UOV scheme [2] proposes finding some hidden invariant subspaces from the public polynomials that will allow for separation of the Oil variables and Vinegar variables and forging an arbitrary signature.
The Rainbow Band Separation attack and its generalization [1,1] need to use the missing cross-terms of the variables to find an equivalent set of keys, in order to forge an arbitrary signature. Therefore, none of these attacks poses a real security threat to our new proposed scheme, due to its structural design, which focuses on polynomials rather than variables.
For the MinRank attack, an attacker needs to find a non-trivial linear combination of matrices with minimal rank associated with the components of the set of public polynomials. After finding these low-rank linear combinations, the linear map L_2 can be recovered and, therefore, the secret key of the scheme is exposed. For the High-Rank Attack, the attacker tries to find linear combinations corresponding to variables with minimum appearances in the central map to recover the linear map L_1 and, subsequently, the secret key of the scheme as well. However, as in the previous cases, the structural design of the new scheme uses a product of randomly chosen affine linear polynomials and, hence, the entries of the matrix P are random multivariate quadratic polynomials. This guarantees that the rank of any non-trivial linear combination of matrices associated with the public polynomials will be close to n. Furthermore, as all variables appear in each of the central polynomials approximately the same number of times, neither of the two rank attacks can be used against our new scheme. Considering the above arguments, we can conclude that the most likely successful attack against our new scheme must be a direct attack and, so, we can choose the parameters accordingly to guarantee acceptable security, due to the following observation: Let us assume that an attacker wants to solve the equation (y_1, y_2, ..., y_m) = F̄(x_1, x_2, ..., x_n) to find the signature x_1, x_2, ..., x_n of the message y_1, y_2, ..., y_m. Assume that an oracle O gives the attacker the values (ȳ_1, ȳ_2, ..., ȳ_n) (without knowing L_2, one of the secret keys) and they can obtain the corresponding matrix. At this point, the attacker still needs to find a way to get the entries of the matrix M^{-1}. Even if they succeed in finding the entries of the matrix M^{-1}H without knowing M^{-1} explicitly, to be able to forge a signature, they will still need to solve the system P = M^{-1}H, which is a system of multivariate quadratic equations with randomly chosen coefficients.
Conclusion
We have proposed a new multivariate signature scheme whose central map is obtained from the multiplication of matrices with random multivariate polynomials as entries. This implies that the central map is composed of cubic polynomials which are the sum of the products of completely randomly chosen affine linear polynomials, with no specific form. Multiplication from the left by the block matrix M makes any tentative factorization of the polynomials in the central matrix extremely difficult. Due to its structural design, the only feasible attack against this new scheme is the direct attack, and we conjecture that its security can be reduced to the NP-hard problem of solving a non-linear system of equations. Finally, we need to mention that this paper focuses more on the design and the theoretical approach of the scheme, and further study to establish the provable security, determine secure parameters, and analyse the efficiency of the proposed scheme will be the object of future research.
Acknowledgements
The first author was supported by the Emirate Foundation through grant 21S021. | 2020-04-23T09:07:11.093Z | 2020-04-22T00:00:00.000 | {
"year": 2020,
"sha1": "ae0214e1431bff55129971c59507e32d49b2b7d1",
"oa_license": "CCBY",
"oa_url": "https://www.preprints.org/manuscript/202004.0392/v1/download",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "b71251dd8c89142d68b90e33e9e811d5d5f84ecb",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
258668903 | pes2o/s2orc | v3-fos-license | Modelling Self-Heating and Self-Ignition Processes during Biomass Storage
A mathematical model was developed to predict the self-heating and self-ignition processes of relatively dry biomass during storage, considering in detail the effects of moisture exchange behaviour, the low-temperature oxidation reaction and the associated heat and mass transfer. Basket heating tests on fir pellets and powder at temperatures of 180–200 °C were conducted to observe the heating process and determine the kinetics of low-temperature chemical oxidation for model validation. As a result, it was demonstrated that the developed model could reasonably represent the self-heating and spontaneous combustion processes of biomass storage. Furthermore, the numerical study and model sensitivity analysis revealed that reasonably describing the low-temperature oxidation and associated heat and mass transfer process, with reliable estimations of kinetic and thermophysical parameters of the biomass material, is necessary for predicting self-ignition, and that considering the effect of water exchange behaviour is essential to predict the self-heating process even for relatively dry biomass, such as pellets, with moisture content up to 15–20%.
Introduction
As an important renewable energy source, biomass is utilised on large scales for heat and power generation and biofuel production. However, due to the low energy density and the regional and temporal distribution, a logistic chain, including the storage, transportation and handling of a bulk mass of biomass, is essential to ensure a stable supply of feedstock for bioenergy conversion systems [1]. To achieve higher energy density and reduce supply chain costs, raw biomass can be pre-dried, milled and compressed into pellets, which have been an important form of solid commercial biofuels and are widely used for heat and power production [2].
During the storage, as well as transportation and handling, biomass in piles or heaps generates heat due to physical processes related to moisture condensation and wetting, biological reactions associated with microbial degradation, and chemical oxidation reactions [3][4][5][6]. As the thermal conductivity of biomass material is generally poor, the heat produced within the biomass pile may not be sufficiently dissipated to the surroundings. Consequently, the inner temperature rises, and the elevated temperature, in turn, enhances biological and chemical oxidation reactions and their heat generation, leading to a continuous increase in the temperature (i.e., a self-heating process occurs). The ongoing increase in the temperature may trigger spontaneous combustion [7], leading to economic losses and endangering the safety of operators [8]. Even if spontaneous combustion does not occur, a low degree of self-heating can cause losses of mass and energy and emissions of harmful greenhouse gases [9][10][11][12]. Therefore, it is highly important and necessary to prevent self-heating and self-ignition of biomass piles for efficient and safe management of biomass storage.
The self-heating of stored biomass materials involves various physical processes and biological and chemical reactions [3,5,9]. Moisture exchange behaviours and chemically oxidative reactions contribute heat to the self-heating process even for relatively dry biomass, such as biomass pellets [13,14]. The involved reactions can occur in series or in parallel, which is of great complexity, not to mention the heat and mass transfer processes involved as well. All of these determine that self-heating depends on the properties of the biomass material (e.g., moisture and particle size), the configuration of the pile (e.g., size and shape), and the environmental conditions of storage (e.g., temperature and humidity) [3], making reasonable description and prediction of self-heating and spontaneous combustion processes challenging. Conventional prediction methods are based on basket heating experiments [15], but they only predict the tendency of a material to self-ignite. Moreover, the experimental temperatures are relatively high, and the spontaneous combustion and its prior process are measured rather than the occurrence and development of the self-heating process. Large-scale storage experiments can also be carried out to monitor the self-heating process [13,14]. However, the experimental design, environmental conditions, and especially the operating parameters are difficult to manipulate, and the experiments are also time-consuming and costly. In contrast, the modelling approach can overcome the drawbacks of the experimental methods by establishing and solving mathematical models that couple the reactions and heat and mass transfer processes to describe and predict the self-heating and spontaneous combustion processes [4,16].
For a long time, model description of the spontaneous combustion of solid fuels such as biomass and coal has been mainly based on the Frank-Kamenetskii (F-K) model of thermal explosion theory [4][17][18][19], which considers an Arrhenius form chemical reaction and neglects the consumption of reactants. Sidhu et al. [20] extended the F-K model by including the oxygen consumption and related mass transfer, and also the contribution of microbiological activity in heat generation. Based on this, Fu et al. [21] applied a model to consider the moisture exchange and transport processes to investigate the effect of ambient humidity variation on the self-heating process of a wood bark pile. Gray et al. [22] developed a model in which, in addition to chemical oxidation, water-mediated oxidation was taken into consideration for the storage of moist biomass. The heat generated through the evaporation and condensation of moisture was also included in the model, as well as the mass transfer processes of oxygen, liquid water and vapour. Krause et al. [16] further considered the complex chemical reactions of fuel decomposition and oxidation, and the momentum, heat and mass transport processes in a model to simulate the self-ignition process of a lignite stockpile. A similar model had been developed by Ferrero et al. [4] and employed to simulate the self-heating in a stockpile of pine wood chips, which included physical processes and chemical and biochemical reactions as heat sources [4,23].
Although the heat production mechanisms and transport processes are considered and integrated systematically, the state-of-the-art models have been limited in their applications due to the diversity of biomass, the variation in storage forms and the rationality of the mechanistic models [3]. As a result, the models are far from making reliable predictions for the self-heating and spontaneous combustion processes of biomass piles. In the present work, a mathematical model was developed for describing the self-heating and self-ignition processes in the storage of relatively dry biomass such as wood pellets, taking into account the physical processes associated with water exchange and low-temperature chemical oxidation. Basket heating experiments on materials of wood pellets were conducted to determine low-temperature oxidation kinetics and measure the temperature evolution during the self-heating process, which were applied for model validation. A sensitivity analysis was also employed to evaluate the effects of model parameters on the prediction, including kinetics, material, and process parameters.
Models and Their Numerical Solution
Biomass pellets have experienced pre-drying and temperatures of 100-170 °C during pelletisation, in which microorganisms colonising the biomass material are inactivated. The resulting pellets have a low moisture content (<10%), insufficient to support microbial activity, and the heat production due to microbial activity is very limited [11,14]. Therefore, the physical processes related to water exchange (i.e., evaporation and condensation), low-temperature chemical oxidation reactions and the associated heat and mass transfer processes are considered in modelling the self-heating and self-ignition processes of stored pellets and other relatively dry biomass, while the effect of microbial activity is excluded.
The model is based on the energy and mass conservation of a porous solid system. The heat and mass transfer processes, coupled with the reactions and their heat effects, are considered to describe the self-heating processes within the biomass stockpile. The general formulas of the one-dimensional (1D) mathematical model are

ρ c_P ∂T/∂t = ∂/∂x (λ ∂T/∂x) + Σ_i S_T,i    (1)

∂C_i/∂t = ∂/∂x (D_i ∂C_i/∂x) + r_i    (2)

where T is the temperature, x is the position, t is the time, ρ, c_P, and λ are the bulk density, effective heat capacity and effective thermal conductivity of the bulk material, respectively, and C_i is the concentration of species i involved in the reactions, including O_2, water vapour and liquid water. D_i is the effective diffusivity of species i. In Equations (1) and (2), the left-hand side is the accumulation term of heat or mass, the first term on the right-hand side represents the thermal conduction or species diffusion, S_T,i is the heat source (S_T,phy for the physical process of water exchange and S_T,chem for low-temperature oxidation reactions), and r_i is the mass source of species i. The model assumes that the properties of biomass and gas are independent of temperature, and that the mass loss of the biomass is negligible compared to the total biomass mass. The heat and mass transport due to convection within the pores of the biomass pile are not separately considered, while the thermal conductivity and gas diffusivities are appropriately increased to accommodate their effects [4]. For simplicity, the model is presented here in 1D form but can be extended to 2D or 3D applications by considering the configuration of the storage and heterogeneities of the stored biomass.
The stored biomass material can undergo moisture exchange with the atmosphere inside the pores and with the ambience through evaporation and condensation processes, leading to thermal effects and moisture transport, which are described by an Arrhenius-type model [4,16,22,24,25], given as

r_m = CD V − EV W exp(−L_v/(R T))    (3)

where r_m is the net condensation (adsorption) or evaporation (desorption) rate; EV and CD are the pre-exponential factors of the evaporation and condensation processes, respectively; W and V are the concentrations of liquid water and vapour, respectively; L_v is the latent heat of water evaporation; and R is the gas constant. Apparently, the water evaporation in Equation (3) is presented as a first-order reaction with regard to the local liquid water content, with an Arrhenius-type rate constant. At the same time, the vapour condensation is modelled as a first-order reaction with regard to the water vapour concentration in the gas phase, with a constant reaction rate constant. Chemical oxidation contributing to the self-heating process covers a wide temperature range from close to room temperature, at which self-heating starts, to the ignition and even subsequent spontaneous combustion. It virtually involves various oxidative reactions, including oxidation, pyrolysis and hydrolysis, proceeding simultaneously or successively in the process [3], but it is often modelled with a global oxidative reaction rather than a mechanistic model [3,4,21]. Considering the effect of oxygen availability on the reaction, the low-temperature chemical oxidation of the material is described by a first-order global reaction with regard to the oxygen concentration [21,26]. The oxidation rate is denoted as

r_dry = A C_O2 exp(−E/(R T))    (4)

where A and E are the pre-exponential factor and activation energy of the dry oxidation reaction, respectively; however, moisture may catalyse or enhance the oxidative reaction.
To account for this effect, a separate water-mediated oxidative reaction is additionally included in the model, following the approach of Gray et al. [27]; its rate is expressed in an analogous Arrhenius form, where A_wet and E_wet are the pre-exponential factor and activation energy of the wet oxidation reaction, respectively. The source terms of the energy and mass conservation equations of the self-heating model, called the full model, are summarised in Table 1 (the first row). If the contribution of the water-mediated oxidation is negligible, the model is simplified as model 1. On the other hand, for dry biomass stored in a confined space, the effect of physical processes associated with moisture exchange can be ignored, and the heat mainly originates from low-temperature chemical oxidation. In this case, the model is simplified as model 2. Because the low-temperature oxidation and its heat release are generally weak, little oxygen is consumed during the self-heating process, and the effects of oxygen and its transport may be excluded. The model is then reduced to the classical F-K model (model 3) [19]. In the present work, the full model and its simplified models were compared to evaluate the effects of moisture behaviour and oxygen transport on the self-heating and self-ignition process. The models were solved by discretising Equations (1) and (2) based on the finite volume method (FVM) [28]. The discretised equations were then solved through the tridiagonal matrix algorithm. The model inputs for the calculations include the physical properties and reaction kinetics of the biomass. Since the moisture-related physical processes and their kinetics are less dependent on the material, literature data [22] were used. However, for low-temperature chemical oxidation, because the kinetic model is global and empirical rather than mechanistic [3], its kinetics are dependent on the material and its properties. Therefore, the kinetics determined by experimental measurements are required to ensure the accuracy of the model prediction.
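To illustrate how such a discretised model can be marched in time, the following is a rough Python sketch of a 1D solver for the coupled temperature and oxygen balances (Equations (1), (2) and (4)). It uses a simple explicit finite-difference update instead of the FVM/TDMA scheme of the paper, omits the moisture terms, and every numerical value is a placeholder assumption rather than a fitted property of the fir pellets.

```python
# Rough explicit 1D sketch of the coupled heat/oxygen balance with a dry
# Arrhenius oxidation source; all values are illustrative placeholders.
import numpy as np

L, N = 0.05, 51                       # sample half-width [m], number of grid nodes
dx = L / (N - 1)
rho, cP, lam = 500.0, 1500.0, 0.1     # bulk density, heat capacity, conductivity
D_O2 = 2e-5                           # effective O2 diffusivity [m^2/s]
A, E, Q = 1e7, 9e4, 1.5e7             # oxidation pre-exponential, activation energy, heat
R = 8.314
T_amb, C_amb = 473.0, 8.6             # oven temperature [K], ambient O2 [mol/m^3]

T = np.full(N, 300.0)                 # initial sample temperature
C = np.full(N, C_amb)                 # initial O2 concentration
dt = 0.2 * dx**2 / max(lam / (rho * cP), D_O2)   # explicit stability limit

def laplacian(u, boundary):
    """Second difference with the boundary value held fixed at both ends."""
    padded = np.concatenate(([boundary], u, [boundary]))
    return (padded[2:] - 2.0 * padded[1:-1] + padded[:-2]) / dx**2

for step in range(500000):
    r_ox = A * C * np.exp(-E / (R * T))            # Equation (4): dry oxidation rate
    T = T + dt * (lam / (rho * cP) * laplacian(T, T_amb) + Q * r_ox / (rho * cP))
    C = np.clip(C + dt * (D_O2 * laplacian(C, C_amb) - r_ox), 0.0, None)
    if T.max() > T_amb + 60.0:                     # runaway criterion used in the tests
        print(f"thermal runaway after {step * dt / 3600:.1f} h")
        break
else:
    print("no runaway within the simulated time")
```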
Methods for Determining the Low-Temperature Oxidation Kinetics
The kinetics of the low-temperature oxidation is determined by the basket heating method [29] based on the F-K theory. Assuming that the heat-producing oxidation reaction inside a 1D sample follows the Arrhenius law, and neglecting the consumption of reactants and the effects of moisture, Equation (1) is simplified to

ρ c_P ∂T/∂t = ∂/∂x (λ ∂T/∂x) + Q A exp(−E/(R T))    (6)

where Q is the heat of the oxidation reaction. Accordingly, the transient F-K method [30,31] was applied to derive the kinetics of the oxidation reaction. Namely, the thermal conduction term in Equation (6) can be neglected if the temperature around the centre of the sample is uniform during heating, and Equation (6) becomes

ρ c_P dT/dt = Q A exp(−E/(R T))    (7)

or

ln(dT/dt) = ln(QA/(ρ c_P)) − E/(R T)    (8)

Equation (8) is independent of the sample size. Therefore, the apparent kinetic parameters of the exothermic reaction can be determined by conducting experiments at multiple temperatures with only one sample size and then doing Arrhenius fitting for Equation (8). Additionally, the right-hand side of Equation (7) is, in fact, the heat release rate of the oxidation reaction at the determined temperature. Therefore, it can be used to represent the self-heating propensity, especially for comparison between different biomass materials.
Two transient F-K methods, i.e., the cross-point (CPT) method [30] and the heat release (HR) method [31], were developed with different assumptions used for neglecting thermal conduction. The CPT method assumes that the thermal conduction term can be neglected when the temperatures of the centre and a nearby point are equal; the HR method takes the sample central temperature equal to the oven temperature to neglect the thermal conduction. Nevertheless, both methods are recommended by a standard [32] for determining the kinetics of low-temperature oxidation of granular materials, including biomass, and for evaluating the tendency to spontaneous combustion due to low-temperature oxidation. Moreover, the basket heating test monitors the evolution of the internal temperature of a sample stored at a constant ambient temperature. Therefore, it can also represent the process of self-heating developing into spontaneous combustion, which can be used for model validation.
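As a sketch of the fitting step itself, the snippet below regresses ln(dT/dt) against 1/T as in Equation (8) to recover E and the lumped QA term; the temperature and heating-rate values are made-up placeholders standing in for the HR or CPT readings taken from the basket-test records.

```python
# Sketch of the transient F-K fit of Equation (8) with placeholder data.
import numpy as np

R = 8.314                 # gas constant [J/(mol K)]
rho_cP = 585 * 1800.0     # assumed bulk density x specific heat [J/(m^3 K)]

# Centre temperature at the evaluation point [K] and the corresponding
# self-heating rate dT/dt [K/s], one pair per oven temperature (180-200 C).
T_eval = np.array([453.0, 458.0, 463.0, 468.0, 473.0])
dTdt   = np.array([2.1e-3, 3.0e-3, 4.2e-3, 5.9e-3, 8.1e-3])

# Equation (8): ln(dT/dt) = ln(QA / (rho * cP)) - E / (R * T)
slope, intercept = np.polyfit(1.0 / T_eval, np.log(dTdt), 1)
E = -slope * R                       # apparent activation energy [J/mol]
QA = np.exp(intercept) * rho_cP      # lumped heat release x pre-exponential [W/m^3]
print(f"E = {E / 1000:.0f} kJ/mol, QA = {QA:.2e} W/m^3")
```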
Experiments on the Self-Heating Process of Biomass Pellets
Basket heating experiments on the biomass materials were carried out following the standard procedure [32] to investigate the characteristics of the self-heating process of the samples and to measure the low-temperature oxidation kinetics. The raw material used for the experiments was fir pellets, typical of softwood pellets, with a diameter of around 8 mm, a length of 10-40 mm (Figure 1a) and a bulk density of 585 kg/m³. The composition of the pellets was 5.87% moisture, 2.22% ash, 73.5% volatile fraction and 18.41% fixed carbon. In order to study the effect of particle size, the pellets were also milled into a 0.3-2 mm powder (Figure 1b); the powder had a slightly higher moisture content of 6.3%, due to moisture absorption during processing, and a lower bulk density of 370 kg/m³. The basket heating experiments therefore tested both the fir pellets and the powder.

Figure 2 shows the experimental setup, which consists of a well-ventilated isothermal oven, a metal mesh basket container with a cubic side length of 10 cm, thermocouples, and the corresponding temperature acquisition and display devices. During an experiment, the biomass material was filled into the basket, weighed and recorded, and three K-type thermocouples were placed at the centre of the basket, at the quadrature (side) point, and outside the basket, respectively. When the oven temperature had reached and stabilised at the set value, the basket, together with the thermocouples, was put into the oven. The temperatures were recorded by the data acquisition system every 5 s until the sample temperature soared and exceeded the oven temperature by more than 60 °C, or exceeded the oven temperature and then stabilised for a long time. Thereafter, the HR temperature, CPT temperature and corresponding heating rates of the samples were determined from the temperature records, and the apparent kinetic parameters E and QA were derived by linear fitting via Equation (8) based on the measurements at several different oven temperatures. According to the characteristics of the samples, the oven temperature was set to 180, 185, 190, 195 and 200 °C for the basket heating experiments of both the pellets and the powder, and their low-temperature oxidation kinetic parameters were determined separately.
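As an illustration of the fitting step just described, the following sketch derives E and QA from Equation (8) by linear regression of ln(c_p·dT/dt) against 1/T; the specific heat, crossing-point temperatures and heating rates below are hypothetical values, not the measured data.

import numpy as np

R = 8.314          # J/(mol K)
cp = 1500.0        # J/(kg K), assumed effective specific heat of the sample

# Hypothetical (T_cross [K], dT/dt [K/s]) pairs, one per oven temperature
T_cross = np.array([453.0, 458.0, 463.0, 468.0, 473.0])
dTdt    = np.array([0.010, 0.014, 0.019, 0.026, 0.035])

# Equation (8): ln(cp*dT/dt) = ln(QA) - E/(R*T)  ->  linear in 1/T
y = np.log(cp * dTdt)
x = 1.0 / T_cross
slope, intercept = np.polyfit(x, y, 1)

E = -slope * R            # apparent activation energy, J/mol
QA = np.exp(intercept)    # lumped heat-release pre-factor, J/(kg s)
print(f"E = {E/1000:.1f} kJ/mol, QA = {QA:.3e} J/(kg s)")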
Temperature Evolution from Self-Heating to Spontaneous Combustion Process
Figure 3 illustrates the evolution of the centre temperature of the fir pellets and powder at different oven temperatures in the basket heating tests. As can be seen, both samples went through similar heating stages at different oven temperatures. At first, the temperature of the sample centre increased nearly linearly due to physical heating. The heating rate then decreased significantly around 60-80 °C because of the endothermic evaporation of the moisture inside the sample, so the evaporation stage exhibited a slower rise in temperature. After drying, the sample temperature rose rapidly again, but the increase rate slowed down significantly as it approached the oven temperature and the sample underwent slow oxidation. The heat from the oxidation, together with the oven heating, drove the temperature to increase gradually and cross the oven temperature. After that, the centre temperature could stabilise or even gradually decrease, as the generated heat was offset by the heat loss through conduction, or the temperature increase could accelerate due to the enhanced oxidation and trigger a temperature runaway, i.e., self-ignition. Figure 3 shows that the oven temperature mainly affected the heating rates at the evaporation and slow oxidation stages: the higher the oven temperature, the shorter the evaporation period, owing to the faster oven heating and more rapid water evaporation, and the faster the heating rate of the slow oxidation stage, hence the shorter the time required for the sample to develop from self-heating to spontaneous combustion.
Figure 3 also indicates the main differences in heating characteristics between fir pellets and powder. The heating rates of the pellets were obviously faster than those of the powder in the heating stages before and after the moisture evaporation stage, mainly because the pellets have a higher bulk density and lower thermal resistance, resulting in faster heating and interior temperature rise. The pellets sample also had a shorter evaporation stage and a faster temperature rise during evaporation because of this fast heating, as well as its slightly lower moisture content. In the slow oxidation stage, the temperature rise of the pellets was slower, mainly because of their better thermal conductivity and the resulting more rapid heat transfer. Moreover, the larger particle size of the pellets meant slower oxidation, resulting in a slower self-heating rate under the same oven temperature and a longer development from self-heating to spontaneous combustion. After heating for 9 h at 180 °C, the central temperature of the fir pellets stabilised at 173-176 °C without any further rising tendency; however, self-ignition occurred for the pellets when the oven temperature was 185 °C. This indicates that the critical self-ignition temperature of the fir pellets sample, with a size of 0.001 m³, lies between 180 and 185 °C. In contrast, the powder spontaneously ignited after prolonged oxidative heating at an oven temperature of 180 °C, implying that its critical self-ignition temperature is below 180 °C. It is evident in Figure 3 that fir powder was more prone to self-heating and spontaneous combustion than the pellets under the same conditions [33]. The stronger heat transfer performance, as well as the lower moisture content of the pellets, resulted in a shorter heating time compared with the powder, reflecting that heat transfer and moisture evaporation, to a certain extent, determine the time for the sample to be heated from room temperature to the oven temperature. On the other hand, the slower low-temperature oxidation and faster heat transfer of the pellets made the time from heating to spontaneous combustion longer than that of the powder at the same temperature. These differences were more obvious at lower oven temperatures (Figure 3).
Determination of Low-Temperature Oxidation Kinetics
Based on the temperature measurements at the various oven temperatures, the cross-point temperatures and their corresponding heating rates of the two samples were determined by the HR and CPT methods, respectively. The results are summarised in Figure 4. In addition, by linear fitting of Equation (8), the kinetic parameters of the low-temperature oxidation of fir pellets and powder were obtained; they are listed in Table 2.
In Figure 4, the cP·dT/dt values are the heat release rates of low-temperature oxidation at the cross-point temperatures. As can be seen, the heat release rate of fir powder was, in general, higher than that of fir pellets at the same temperature, exhibiting a stronger oxidative reactivity, i.e., a stronger self-heating capacity of low-temperature oxidation and a higher proneness to self-ignition. This is consistent with the observed temperature evolution from low-temperature-oxidation-driven self-heating to spontaneous combustion in the basket heating tests (Figure 3). Similar observations were reported in a study on wheat biomass pellets and dust [34].
Table 2 indicates that the kinetics of fir powder obtained by the two methods were generally consistent, while those of the fir pellets were quite different, although the results of the two methods converged at higher temperatures. The kinetic parameters E and QA of the two forms of fir samples obtained with the HR method were similar and of the same order of magnitude. However, the activation energies obtained with the HR method were greater than those obtained with the CPT method; this difference is attributed to their different assumptions for neglecting thermal conduction. Nevertheless, the kinetic parameters of the two forms of fir samples all lie within the range of kinetics of various biomass materials, including wood pellets [15,19], surveyed in the literature [3].
Validation of the Self-Heating Model
The self-heating model was validated against the temperature measurements in the basket heating tests on both fir powder and pellets; only the validation against the fir powder tests is presented here. The chemical oxidation kinetic parameters used as model inputs were derived from the basket heating experiments, as shown above. The model inputs for the parameters related to the biomass material properties and reaction conditions are summarised in Table 3. Taking the heating of the fir powder sample at an oven temperature of 180 °C as an example, Figure 5 compares the model-predicted evolutions of the temperature at the centre and side point with the measurements. It is worth noting that the full model considers in detail the heat effects and associated heat and mass transfer of the physical water-exchange processes, the low-temperature chemical oxidation, and the water-mediated chemical oxidation. As can be seen in Figure 5, the model generally reproduces the experimentally observed trends of the temperature evolution at the two locations within the sample. During the stages of heating up to the oven temperature, the model-predicted temperature evolution of the side point agreed well with the measurements; the predicted central temperature evolution was also generally consistent with the measurement, except that it overestimated the temperature increase rate during the evaporation stage, which may result from less accurate kinetics of evaporation/condensation. In the oxidation stage prior to spontaneous ignition, despite roughly predicting the time of spontaneous combustion, the model over-predicted the temperatures at the centre and side points compared with the experimental values. The over-prediction may be attributed mainly to the fact that this stage corresponds to higher temperatures, which enhance heat transfer within the sample as a consequence of, for example, the thermal conductivity increasing with temperature [35,36]. The model, set up with constant thermophysical properties, was unable to reflect this variation, leading to an underestimate of the heat dissipation at higher temperatures. Nevertheless, the model predictions of the temperature evolution were generally in agreement with the measurements, and the model calculations for the cases at different oven temperatures and for the fir pellets (not shown here) presented similar trends. Therefore, the model developed here can be applied to predicting the process of self-heating to spontaneous combustion of relatively dry biomass materials, provided the kinetics of the low-temperature oxidation are determined experimentally.
Figure 6 displays the predictions of the central temperature evolutions by the full model and its simplified models (Table 1) compared with the measurement from the basket heating test of the fir powder at 180 °C, so as to examine the contributions of the various mechanistic processes to the self-heating. Without considering the water-mediated oxidation, the simplified model 1 predicted a central temperature evolution profile almost coincident with that of the full model, implying that the water-mediated chemical oxidation and its heat effect hardly affect self-heating. However, the comparison between the two models indicated that, during the evaporation stage, the wet oxidation resulted in a slight decrease in the oxygen concentration at the sample centre, as illustrated in Figure 7. Moreover, the exothermic effect of the wet oxidation also caused slightly higher central temperatures at the evaporation stage, as shown in the zoom-in of Figure 6. Nevertheless, these observations reveal a weak contribution of the water-mediated oxidation to the self-heating process of relatively dry material, consistent with the observations in a modelling work [22]. The likely explanation for this is the low moisture content (6.3%) of the fir powder. For relatively dry biomass materials such as wood pellets, therefore, the effect of the wet oxidation can be neglected when describing the self-heating process.
The comparison in Figure 6 shows that, although the simplified model 2, which does not consider the water-exchange physical processes, may predict the onset of spontaneous combustion, it significantly overestimates the central temperatures at the evaporation and subsequent heating stages. The overestimation is attributed to the model omitting the strong heat absorption by moisture evaporation. This demonstrates that describing the water-exchange behaviour and its influence is essential for predicting the self-heating process, even for relatively dry materials. As for the simplified model 3, i.e., the traditional F-K model, it predicted neither the time of self-ignition nor the temperature evolution from self-heating to self-ignition (Figure 6), because the effects of oxygen diffusion and consumption on the chemical oxidation, as well as the water behaviour, were not considered. This means that the oxygen consumption and mass transfer processes had a significant impact on the self-heating and low-temperature oxidation process, even for the small volume (0.001 m³) of biomass material used in the basket heating tests. Therefore, the model and experimental method based on the F-K theory can only be applied to evaluating and comparing the propensities of biomass materials to spontaneous combustion, rather than to predicting the self-heating and spontaneous combustion processes of biomass storage in practice.
Numerical Study and Sensitivity Analysis of the Model
The modelling study above shows that the self-heating process of stored biomass depends on the physical processes of moisture behaviour, the low-temperature chemical oxidation, and the heat and mass transfer processes, so the model prediction relies on the kinetics, material properties, and process characteristics. A sensitivity analysis was therefore performed to investigate the influence of these parameters on the rationality and accuracy of the model in predicting the self-heating and self-ignition processes. The main parameters examined and the results of their sensitivity analysis are summarised in Table 4.
Table 4 indicates that the storage parameter ρb, the material thermophysical properties λb and cp,b, and the biomass reactivity parameters QA and E were critical to predicting the self-heating process reasonably. In particular, a small variation in the reactivity parameters significantly changes the predicted trend of the self-heating process. Table 4 shows that increasing the pre-exponential factor of the chemical oxidation, for example for a material with stronger oxidation reactivity, led to a significantly reduced self-ignition time but had little effect on the time required for the central temperature to reach the oven temperature. The reason is that the heat release of chemical oxidation was very weak before the slow oxidation stage, only of the order of 0.1 W/kg or even smaller. However, reducing QA by 10% increased the self-ignition time by 28.4%, and no self-ignition occurred when QA was lowered by 25%, with the central temperature stabilising at around 192 °C. A similar phenomenon was observed for changes in the activation energy E, but the effect was much more sensitive: altering E by merely 5% resulted in a dramatic difference in the central temperature evolution, either increasing rapidly to spontaneous combustion or remaining stable without ignition, as shown in Figure 8. These results reflect the importance of the oxidative reactivity of stored fuels for spontaneous combustion and the necessity of determining the oxidative kinetics for accurate prediction of self-ignition. However, the global kinetics of chemical oxidation vary widely [3], depending on the biomass species and properties such as particle size [4,19,34], and determining the kinetics of specific biomass materials is inconvenient for engineering applications. This implies the necessity and importance of further developing a mechanistic model of the oxidative reaction to replace the global model, in order to improve model prediction and generalisation.
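To illustrate the kind of sensitivity described here, the following toy sketch perturbs the activation energy in a lumped (0D) energy balance and reports the predicted time to thermal runaway. It is not the paper's PDE model; the baseline kinetics, heating coefficient and runaway criterion are illustrative assumptions.

import numpy as np

R = 8.314

def time_to_runaway(E, QA, T_oven=453.0, cp=1500.0, hA_over_m=0.1,
                    dt=1.0, t_end=2.0e5):
    # Lumped balance: convective heating towards the oven temperature plus a
    # global Arrhenius self-heating term; returns the time at which the sample
    # exceeds the oven temperature by 60 K (the runaway criterion of the tests),
    # or None if no runaway occurs within t_end.
    T = 293.0
    for i in range(int(t_end / dt)):
        q_ox = QA * np.exp(-E / (R * T))
        T += (hA_over_m * (T_oven - T) + q_ox) / cp * dt
        if T > T_oven + 60.0:
            return i * dt
    return None

E0, QA0 = 9.0e4, 5.5e10                  # illustrative baseline kinetics
for dE in (-0.05, 0.0, +0.05):           # vary E by +/- 5 %
    t_ig = time_to_runaway(E0 * (1 + dE), QA0)
    if t_ig is not None:
        print(f"E {dE:+.0%}: ignition at {t_ig:.0f} s")
    else:
        print(f"E {dE:+.0%}: no ignition within the simulated period")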
Among the material thermophysical parameters, the self-heating and self-ignition processes are most sensitive to the thermal conductivity and the specific heat capacity. The thermal conductivity of biomass is generally low; however, it varies considerably with the biomass species, particle structure and storage structure, and it increases with increasing particle and bulk density, moisture content and temperature [35][36][37][38]. Table 4 indicates that a 75% increase in the thermal conductivity λb enhanced the heat transfer (heating or dissipation) and remarkably inhibited the accumulation of heat in the biomass pile, with no spontaneous combustion occurring. On the contrary, a 75% reduction led to a 168.6% and 42.6% increase in the time to reach the oven temperature and spontaneous ignition, respectively (Table 4). Although lowering λb slowed down the temperature rise in the biomass pile, the pile still developed to spontaneous ignition after reaching the oven temperature (Figure 9), because the heat generation was dominated by the chemical oxidation reactions during the slow and fast oxidation stages and the lower thermal conductivity was favourable to heat accumulation. Such effects explain the faster heating during the heating stages and the slower heating during the oxidation stage of the fir pellets compared with the fir powder (Figure 3), mainly because the pellet samples have a higher bulk density and, consequently, a higher effective thermal conductivity.
As can be seen from Table 4, the effective specific heat capacity cp,b also plays a significant role in the self-heating and spontaneous combustion processes of biomass. Research has demonstrated that the effective specific heat capacity of biomass increases linearly with density, moisture and temperature [35,37,39]. Therefore, when the biomass undergoes self-heating, cp,b increases, which in turn delays all stages of the self-heating process, leading to an increase in the times to reach the oven temperature and spontaneous ignition, respectively, or even to no ignition occurring.
It was also observed in the sensitivity analysis that changing the evaporation or condensation pre-exponential factors EV and CD did not significantly affect the times to reach the oven temperature and spontaneous combustion (Table 4), and, as expected, changing EV and CD had opposite effects on the predictions. In contrast, the moisture content M was positively correlated with the times to reach the oven temperature and spontaneous ignition (Figure 10a); the time to reach self-ignition increases linearly with increasing moisture content, as shown in Figure 10b. In particular, increasing the moisture content from 3% to 15% significantly prolongs the time to reach the oven temperature and, subsequently, the thermal runaway state. The main reason is that, with increased moisture content, more heat is required for moisture evaporation, which prolongs the evaporation stage. The model calculations did not, however, account for the effects of enhanced mass and heat transfer or of wet oxidation for moisture contents up to 20%. Therefore, even for relatively dry fuels (M = 3-15%), describing the moisture exchange behaviour in biomass piles is essential for a reasonable prediction of the self-heating process.
Conclusions
A mathematical model describing the self-heating and self-ignition process during the storage of relatively dry biomass was developed, with the moisture exchange behaviour, the low-temperature chemical oxidation and their associated heat and mass transfer processes considered in detail. In order to obtain the model input parameters for model validation and the numerical study, basket heating experiments were carried out to derive the low-temperature oxidation kinetics of fir pellets and powder based on the two transient F-K methods and to observe the temperature evolutions inside the biomass storage. The validation demonstrated that the model can reasonably describe the temperature evolution and predict the spontaneous ignition within the biomass storage when the experimentally determined low-temperature oxidation kinetics are applied. The numerical study and sensitivity analysis showed that it is essential to describe the low-temperature oxidation and its associated oxygen consumption and mass transfer reasonably in order to predict the low-temperature oxidation-driven self-heating process and spontaneous combustion; this implies that developing a mechanistic model to replace the global model of oxidation could improve model prediction and generalisation. Furthermore, it was found that considering the water exchange behaviour is essential to predicting the self-heating process even for relatively dry biomass, such as pellets with a moisture content of up to 10-20%, while the role of the water-mediated oxidation reaction can be ignored. In addition, the sensitivity analysis revealed that the reactivity parameters, material thermophysical properties and characteristic storage parameters can significantly affect the self-heating and spontaneous combustion process, underlining the importance of reliable estimates of these parameters for reasonable predictions.
Figure 1. Images of the samples of (a) fir particles and (b) fir powder.
Figure 2. Schematic of the facility for the basket heating test.
Figure 3. Central temperature evolution of (a) fir pellets and (b) fir powder under different oven temperatures.
Figure 4. Derivation of the chemical oxidation kinetics of fir pellets and powder by the CPT and HR methods.
Figure 5. Model-predicted temperature evolutions of the centre and side points compared with the measurements of fir powder at 180 °C.
Figure 6. Predicted centre temperature evolution by the self-heating model and its simplifications compared with the measurements of the fir powder sample at 180 °C.
Figure 7. Model-predicted evolutions of the oxygen concentration at the centre of the fir powder sample heated at 180 °C.
Figure 8. Predicted profiles of the central temperature varying with the activation energy of chemical oxidation E.
Figure 9. Predicted profiles of the central temperature varying with the thermal conductivity of the biomass sample λb.
Figure 10. Effect of the moisture content M on the self-heating process of stored biomass: (a) model-predicted profiles of the central and side point temperatures varying with the moisture content and (b) the predicted time to reach the ignition temperature correlated with the moisture content.
Table 1. Mathematical model of heat and mass transfer and its source terms.
Table 2. Determined chemical oxidation kinetics of fir pellets and powder (E in kJ/mol and QA in J/(kg·s), by the CPT and HR methods).
Table 3. Data input for model calculation.
Table 4. Sensitivity analysis of the main parameters affecting the self-heating process.
"year": 2023,
"sha1": "59165eb5b37d9a41b2683fe0defcf03b44950874",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/16/10/4048/pdf?version=1683887315",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fe648d08485db673ea6b2866755e8cdc318c3e3b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
Impact of Governance Mechanisms on Agency Costs in CAC 40 Listed Firms: An Empirical Analysis (2005-2023)
This empirical investigation examines the influence of corporate governance mechanisms on agency costs among firms listed on the CAC 40 index from 2005 to 2023. Agency costs were evaluated using three proxies: asset turnover ratio, selling, general and administrative expenses
Introduction
Consistent with Jensen & Meckling (1976), a firm can be reduced to a set of internal and external contractual relationships characterized by many conflicting interests, especially between managers and shareholders, giving rise to agency problems as critical challenges to the firm's continuity.Berle & Means (1932) addressed this debate early in their celebrated work "The Modern Corporation and Private Property", stating that the firm law structure in the United States imposed a separation of control from ownership in the 1930s, which increased doubts about the compatibility between shareholders and managers regarding their interests.Hence, this is an explicit statement that agency problems arise when managers prioritize their own interests at the expense of shareholders' interests when ownership and management are separated.Brudney (1985) argues that corporate governance structures provide substantial discretion for managers, often exceeding investors' expectations.Although market forces can impose constraints on that discretion, it still enables management to reward itself by transferring assets.Additionally, Ang et al. (2000) contend that empirical studies have consistently shown that agency cost is a relevant framework for explaining managerial decisions, such as dividend policies, executive compensation, capital structure choices, etc.They stated that bad decisions made by firms as a result of insufficient managerial efforts may lead to increased agency costs, as well as the excessive use of executive perquisites.Furthermore, Fleming et al. (2005) argue that incentive alignment problems become more prominent and are significantly influenced by changes in ownership structure and the separation of management from ownership.
Measuring agency costs is a widely explored issue in the literature, and Ang et al. (2000) were among the first researchers to address it.They noted that the measurement of agency costs, both in absolute and relative terms, has not received sufficient attention.Depending on the assumption of Jensen & Meckling (1976) that firms wholly owned by management are characterized by zero agency costs, they examined whether there are differences between owner-managed firms and those where management is separate from ownership regarding management cost and asset utilization ratio.Ang et al. (2000) proposed two indicators for assessing agency costs.The first is a direct metric, represented by the proportion of operational expenses to sales, to facilitate cross-sectional comparisons.The second indicator pertains to the ratio of sales to assets, reflecting the revenue loss stemming from management's inefficient utilization of assets (e.g., investments in assets with negative or lower net present value and low efforts by managers in supporting the revenue-generating process).According to Ang et al. (2000), the first proxy measures the ability of managers to control operating expenses, while the second proxy evaluates how efficiently management utilizes the firm's resources.Most researchers agree that the high asset turnover ratio reflects management's efficiency in utilizing assets to maximize sales.Conversely, the low ratio suggests that managers may implement policies harmful to the firm and demonstrate inefficiency in asset utilization.However, they emphasize certain drawbacks of this ratio; for instance, generating sales might not necessarily reflect shareholders' wealth, and managers can easily expropriate cash flows generated from sales.Despite these limitations, they still regard this proxy as a relevant indicator of agency costs.
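As a simple illustration of the two proxies described above, the following snippet computes the asset utilisation and expense ratios from hypothetical financial figures (all values are invented for illustration only).

sales, total_assets, operating_expenses = 1_200.0, 950.0, 310.0   # hypothetical figures, in millions
asset_turnover = sales / total_assets          # higher -> more efficient use of assets, lower agency costs
expense_ratio = operating_expenses / sales     # higher -> weaker control of operating expenses
print(f"asset turnover = {asset_turnover:.2f}, expense ratio = {expense_ratio:.2%}")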
In contrast, McKnight & Weir (2009) utilized three proxies of agency costs, including the number of acquisitions, the ratio of asset turnover, and the interplay of growth prospects with free cash flow.They acknowledge that acquisitions may be proceeding for the sake of maximizing managerial utility, as they offer managers the opportunity to spend firm resources instead of rewarding shareholders.This practice can lead to an unfair reduction in shareholders' wealth, ultimately resulting in increased agency costs.Similarly, a large free cash flow can increase agency costs, as shareholders generally prefer to receive it as dividends or through share repurchase operations, while managers may have a different perspective (Lin & Lin, 2014).
To address agency problems, Jensen & Meckling (1976) suggested several mechanisms, including separating the CEO and board chairman roles, involving external auditors and institutional investors, monitoring by nonexecutive directors, and promoting managerial ownership.Additionally, Fleming et al. (2005) mentioned the ownership and managerial structures that are likely to monitor the behavior of managers and reduce agency costs.These structures encompass concentrated ownership, family ownership, business associates, banks, and providers of venture capital.Aligned with agency theory, Ang et al. (2000) emphasized that the efficiency of related parties, along with the effectiveness of monitoring by outside directors, may limit agency costs.Furthermore, Rashid (2013) argues that three critical roles can be assigned to the board of directors: service, control, and strategic, and the ability to reduce agency costs closely ties to its effectiveness in fulfilling these roles.
For their part, McKnight & Weir (2009) argue that, within the agency model, some governance mechanisms allow a convergence of interests between managers and owners, thereby reducing agency costs.They emphasize that the classical agency model distinguishes effective governance mechanisms from less effective ones, leading to a set of optimal governance structures that effectively reduce agency costs while maximizing performance.In a different context, Doukas et al. (2000) provided significant evidence that security analysis can serve as a monitoring tool to mitigate agency costs associated with manager-shareholder conflicts.
In the French context, governance mechanisms are distinct from the Anglo-American model, where the hypotheses of agency theory have been widely explored.The business environment in France tends to be more oriented towards entrepreneurship and managerial power, thus the entrepreneur or founder family, as a major shareholder, often dominates the board of directors and directly controls the financial information preparation process (Piot, 2001).Corporate governance in French firms appears as an insider model with weak investors' protection in the legal system and regulations and more concentration of ownership (Ammari et al., 2014).Further, the role of financial markets as a source of financing for French firms is relatively limited, leading to weak monitoring of managers' behavior by market forces in addition to marginal external monitoring mechanisms (Piot, 2001).According to the findings of La Porta et al. (2000), French legislation as a civil law country provides weak protection for outside shareholders and creditors, which can be a source of interest conflict, thus generating agency problems and increasing capital costs.
The corporate governance model in France prevents the development of capital markets and the dispersion of ownership compared to Anglo-American countries. Therefore, corporate governance has seen several developments in France during the past few years, starting with the first report of Viénot (1995), which was primarily interested in the mission and effectiveness of the board of directors. Specifically, it proposed reducing the number of board seats, resorting to independent directors, suppressing cross-memberships, and establishing board committees. The second report of Viénot (1999) adopted a wider perspective; it proposed separating the functions of the CEO from those of the board chairman and extending the role of independent directors. The report also made recommendations regarding financial reporting, information about remuneration, and the general meetings of shareholders. Following the several scandals (Enron, WorldCom, Vivendi, etc.) at the beginning of the 2000s, the report of Bouton (2002) was developed to provide certain improvements regarding the board of directors (more independence, a high level of formalization, information quality, assessment), board committees (nominating, remuneration, and audit committees), auditors' independence, and financial reporting.
The regulatory framework of corporate governance in France was mainly inspired by the directives of the European Parliament, derived from the Report of Winter et al. (2002). The report focused on improving corporate governance practices with ten key priorities, especially reporting on governance, shareholder rights, transparency, and executive remuneration. The reports of Viénot (1995) and Bouton (2002) were complemented by the New Economic Regulations Act, which requires separating management from control to improve supervision by the board members. Later, the law of financial security was adopted in 2003 to strengthen the role of supervisory authorities in protecting investors, improving auditors' independence, and enhancing reporting quality. In 2005, the framework of corporate governance in France was further improved by the introduction of the law on the modernization of the economy, which requires the disclosure of information about the components of executive compensation and their assessment criteria (Dabboussi, 2018).
Previous studies have concluded that corporate governance mechanisms are effective in constraining managers' inclination to advance their interests, moderating agency costs, and improving long-term firm performance.Good governance practices promote optimal resource allocation, lower capital costs, and better relations between shareholders, managers, and other stakeholders.Thus, the purpose of this study is to investigate whether improvements made to France's corporate governance framework during the early 2000s have improved business practices and moderated agency costs.Furthermore, this study investigates the influences of debt financing and growth prospects on agency costs, given the growth opportunities of the French economy.Additionally, the business environment in France has some specificities regarding corporate financing, where the role of financial markets is still under expectation, which makes firms more dependent on bank debt.
Agency Costs and Board Characteristics
According to several previous studies, a board's effectiveness is largely dependent on its characteristics, including size, independence, and CEO duality.
Board size
Previous research has delineated two contrasting perspectives regarding the potential influence of board size on firm performance and efficiency. Advocates of the first viewpoint argue that smaller boards of directors are more able to limit agency costs than larger boards (e.g., see Chaudhary, 2022; Guest, 2009; Owusu & Weir, 2018; Rutledge et al., 2016; Singh & Davidson III, 2003). Conversely, proponents of the second viewpoint argue that larger boards foster better strategic decision-making. Florackis (2008) suggests that larger boards are crucial for effective management and supervision. This stance finds support in studies by Andreou et al. (2014), Boussenna (2020), and Setia-Atmaja (2008), which have illustrated that larger boards can enhance firm performance and mitigate agency costs. Consistent with previous studies, we expect that agency costs in French firms can be constrained by extending the board of directors.
H1: Larger boards of directors relate to lower agency costs.
Board independence
Regarding the potential impact of directors' independence, the literature also distinguishes two different viewpoints: stewardship theory and agency theory.From the perspective of agency theory, the supervisory activities of the board are enhanced when independent members control the board of directors.Florackis (2008) emphasizes that board effectiveness depends on the ratio of non-executive directors, given their ability to curtail management discretion.Additionally, Jensen & Meckling (1976) argue that such directors are less susceptible to conflicts of interest, which enables them to carry out the monitoring function more effectively.
Several studies agree that the independence of board members positively affects firm performance and reduces agency costs, including those by Haslindar & Abdul Samad (2011), McKnight & Weir (2009), Nguyen et al. (2020), Rutledge et al. (2016), Shan & McIver (2011), and Zhao (2003). According to the findings of these studies, highly independent boards of directors tend to improve the board's effectiveness, mitigate agency costs, and advance shareholder interests.
From another view, stewardship theory notes that executive directors are more able to attain organizational goals, and their private information about the firm can enhance the process of decision-making (Davis et al., 1997).Therefore, boards of directors with lower independence are inclined to exhibit greater efficiency, resulting in reduced agency costs.In this context, Haslindar & Abdul Samad (2011) found that a higher agency cost is associated with higher board independence in family-owned firms.Nevertheless, several studies contradict the preceding viewpoints, showing that the relationship between board independence and firm efficiency is not significant (Andreou et al., 2014;Dian, 2014;Goh et al., 2014).
Consistent with agency theory, we expect that increasing board independence generates a reduction in agency costs in French firms, and we assume that: H2: A board of directors marked by a high proportion of independence relates to reduced agency costs.
CEO duality
CEO duality manifests when a CEO is appointed as board chairman, which creates a complicated issue (Aktas et al., 2018). Overall, the literature distinguishes two perspectives concerning the relationships between agency cost, firm performance, and CEO duality. Aktas et al. (2018) argue that managers have opportunistic tendencies and make decisions that serve their interests to the detriment of shareholders. Therefore, CEO duality is deemed undesirable because it results in serious consequences, such as heightened managerial entrenchment, diminished effectiveness of board monitoring, and decreased firm performance.
In line with this perspective, McKnight & Weir (2009) contend that when an individual simultaneously occupies the positions of board chairman and CEO, it confers significant authority to the CEO, which negatively affects the effectiveness of monitoring.Therefore, it becomes necessary to separate the two roles to maintain the independence and effectiveness of the board.Numerous studies have substantiated the agency theory viewpoint, demonstrating that the separation of the chairman and CEO roles limits agency costs and improves firm performance (e.g., Rutledge et al., 2016;Zhao, 2003).Additionally, Aktas et al. (2018) found that CEO duality results in firm inefficiency and investment misallocation, particularly when external monitoring is ineffective.
On the other hand, stewardship theory assumes that firms with a unified structure of leadership (CEO duality) are more effective in dealing with their strategic challenges.Many studies, such as those by Guillet et al. (2013) and Manafi et al. (2015), have confirmed the stewardship theory view and acknowledged the existence of advantages associated with CEO duality.
Contrary to the above, other studies found no significant relationship between CEO duality and firm efficiency (e.g., Andreou et al., 2014; Dian, 2014; Goh et al., 2014). For their part, Mubeen et al. (2021) found that CEO duality negatively influences the performance of Chinese firms. However, duality can still benefit firm performance, because the sign of the relationship between CEO duality and the performance of Chinese firms is largely determined by corporate social responsibility and firm size acting as moderating variables, both of which affect this relationship positively.
In line with the assumptions of agency theory, we argue that if a CEO was simultaneously appointed as board chairman, this would lead to decreased board effectiveness and thus increased agency costs in French firms, and we hypothesize that: H3: CEO duality affects agency costs positively.
Managerial ownership
The fraction of managerial ownership indicates the degree of compatibility between the interests of directors and shareholders (Singh & Davidson III, 2003).According to Jensen & Meckling (1976), lower managerial ownership makes the managers less motivated to put in more effort, while higher managerial ownership is likely to push managers to work harder, leading to lower agency costs.Florackis (2008) further contends that managerial ownership can serve as a mechanism to align the interests of directors and shareholders.
Several studies have shown that managerial ownership can minimize agency costs and enhance firm efficiency, including those by Fleming et al. (2005), Florackis (2008), Owusu & Weir (2018), Rashid (2015), Schä uble (2019), and Singh & Davidson III (2003).In this context, McKnight & Weir (2009) state that, within the agency model, increasing managerial ownership leads to a convergence of interests between directors and shareholders.When members of the board hold the firm's shares, they are motivated to act as shareholders, so higher managerial ownership will moderate agency costs.
However, Florackis (2008) and King & Santor (2008) state that excessive managerial ownership can generate entrenchment effects and negative consequences due to the ability of managers to defend their interests to the detriment of those of other owners.Furthermore, some studies (e.g., Bonardo et al., 2007;Doukas et al., 2000;Jelinek & Stuerke, 2009;Nguyen et al., 2020;Rashid, 2016) found a non-linear relationship between managerial ownership and firm efficiency, which aligns with the view that higher managerial ownership is linked with a high level of agency costs.
In line with agency theory predictions, we expect that board members' ownership will achieve a kind of convergence between their interests and those of shareholders', causing a lower agency cost in French firms, and we assume that: H4: Managerial ownership negatively affects agency costs.
Ownership concentration
According to Singh & Davidson III (2003), holding a higher fraction of a company's shares (blockholder or ownership concentration) reflects the degree of external monitoring.Florackis (2008) argues that, given the position of equity owners and their ownership of the shares, they must actively participate in management control.Thus, concentrated ownership is an important governance mechanism for controlling management and mitigating agency problems.There are many previous studies supporting the assumption that ownership concentration is effective in monitoring management, enhancing firm performance, and reducing conflicts of interest (e.g., Ang et al., 2000;Florackis, 2008;Heugens et al., 2009).
Nevertheless, Florackis (2008) argues that the benefits arising from shareholders' control differ depending on the size of their equity stakes; for example, shareholders with small equity stakes have less incentive to exercise control behavior than other shareholders.Heugens et al. (2009) also argue that concentrated ownership provides better protection for shareholder interests when legal protections are relatively weak.Florackis (2008) states that despite the advantages of concentrated ownership, many associated costs manifest themselves clearly in the agency problems arising between minority and majority holders.
Based on previous research, we argue that an increase in ownership concentration will increase the effectiveness of external monitoring over management actions, leading to lower agency costs in French firms in the CAC 40 index.Therefore, we expect that: H5: Ownership concentration affects agency costs negatively.
Institutional ownership
Gilson & Gordon (2013) argue that institutional investors should mitigate managerial agency problems, given their ability to generate more active monitoring. They also point out that intermediary institutional investors are an effective tool for financial intermediation and risk-bearing. However, such intermediation also has negative aspects, which can be avoided by strengthening the role of activist investors, who can interact with institutional ownership to enhance corporate governance effectiveness and reduce the costs of agency conflicts. Several studies (e.g., Chaudhary, 2022; Owusu & Weir, 2018; Rashid, 2013) found that institutional investors can enhance firm efficiency and reduce agency costs thanks to the expertise, financial resources, and material means that allow them to control management actions effectively.
In addition, Chung et al. (2012) indicated that institutional investors can improve firm performance, and that heterogeneity exists among their roles.Furthermore, some institutional investors (such as investment advisors and long-term institutional investors) can enhance firm efficiency compared to other institutional investors.McKnight & Weir (2009) showed that a higher fraction of shareholding by institutional investors tends to be less effective in monitoring the decisions of the board and thus may not mitigate agency costs.Doukas et al. (2000) indicated that institutional investors are less effective and have a weak influence on agency costs.
Consistent with Chaudhary (2022), Owusu & Weir (2018) and Rashid (2013), we argue that increasing institutional ownership is likely to contribute effectively to monitoring managers' actions and thus reducing agency costs in French firms, and we expect that: H6: Institutional ownership affects agency costs negatively.
Agency Costs and Debt Financing
According to Florackis (2008), agency problems depend on the issues of information asymmetry and free cash flow.He suggests that debt service obligations can help reduce these issues and that bank debts are more advantageous than debt securities in monitoring firm activities.Ang et al. (2000) emphasize that the ability of banks to monitor managers complements the monitoring imposed by shareholders, indirectly reducing agency costs.This corresponds to the reality that banks push firms to operate more efficiently through optimal exploitation of their resources and judicious consumption of perquisites, intending to enhance firm performance.
Additionally, Fleming et al. (2005) indicated that debt financing furnishes complementary and/or alternative control mechanisms for family and managerial ownership, which result in reducing agency costs.Fleming et al. (2005), McKnight & Weir (2009), and Owusu & Weir (2018) have presented evidence suggesting that lender monitoring leads to more efficient asset utilization and reduces agency problems.Ang et al. (2000) revealed that default risk increases with the rise in financial leverage, motivating lenders to monitor the firm more closely to prevent the transfer of risks from shareholders to debtholders.Doukas et al. (2000) also found that increased levels of debt are instrumental in mitigating agency costs and boosting firm value.
Consistent with Fleming et al. (2005), McKnight & Weir (2009), and Owusu & Weir (2018), we expect that the monitoring imposed by lenders can enhance efficiency and reduce agency costs in French firms, and hence we assume that: H7: Firms with high levels of bank debt exhibit lower agency costs.
Agency Costs and Growth Opportunities
Many research studies have confirmed that firms' growth prospects are likely to influence the association of firm performance and agency costs with corporate governance.For instance, Chen (2003) demonstrated a strong relationship between equity value and the annual stock bonus for firms with higher growth opportunities.Florackis (2008) suggests differences between higher-growth firms characterized by agency problems related to underinvestment or asymmetric information and lower-growth firms characterized by agency problems related to potential disagreements about using free cash flow.In addition, the effectiveness of governance mechanisms is expected to differ according to the growth opportunities.Doukas et al. (2000) found that the interplay of growth opportunities with free cash flow can affect agency costs.Consequently, firms experiencing lower growth prospects and greater free cash flow tend to have more agency costs.In particular, Florackis (2008) expects that corporate governance will effectively moderate agency problems that relate to underinvestment or asymmetric information in higher-growth firms.Also, he estimates that these mechanisms will play a more effective role in moderating agency problems associated with disagreements about using free cash flow in lower-growth firms.
Consistent with Florackis (2008), we argue that higher-growth French firms are characterized by higher agency costs; therefore, we expect that: H8: Firms with higher growth prospects exhibit higher agency costs.
H9: The influence of governance mechanisms on agency costs varies depending on the growth prospects of firms.
Data Sources
To analyze the association of agency costs with ownership structure and board characteristics, we used a dataset of French firms quoted on the stock exchange from 2005 to 2023. The data series starts in 2005 because, as of January 1, 2005, firms quoted on European Union stock exchanges were required to prepare their financial reports in accordance with IASB standards. Before that year, the financial reports of French firms were prepared under local accounting standards, and the disparity in accounting systems between the two periods (pre- and post-2005) would very likely distort the estimation of agency costs, which is undesirable. Data were collected manually from two sources: the reference documents of firms and the Universal Registration Document (URD) of French firms, which became applicable in France in 2019. These sources provide the financial information needed to estimate agency costs, information about board characteristics and ownership structure, and firm characteristics.
The initial sample consisted of all firms quoted on the CAC 40 index, comprising 40 major French firms.Subsequently, the sample was reduced to 31 firms after excluding nine for practical and methodological reasons (financial firms and firms with accounting closing dates other than December 31).After data collection, we observed some missing values, especially for selling, general and administrative expenses (SGA), and institutional ownership.Concerning SGA expenses, we sometimes encountered difficulties in separating this type of expense from the operating expenses due to insufficient disclosure by some French firms in their financial reports.Regarding information related to institutional ownership, it was observed that many firms quoted on the CAC 40 index neglect to disclose information about this element in their annual reports.This lack of disclosure reduced the sample size to 22 firms for the asset turnover model (AST), for the interplay of growth prospects with the free cash flow model (FCFQ), and to 20 firms for the selling, general, and administrative expenses model (SGA).Consequently, an unbalanced dataset was generated.
Regression Model Specification
To measure the impact of the characteristics of the board of directors and ownership structure on agency costs in French firms quoted on the CAC 40 index, we adopt Eq. (1) and Eq. (2):
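The equations themselves do not survive in the extracted text. As a hedged reconstruction (our assumption only, based on the variable definitions given below and on the later note that MAN and MAN² were entered in separate models because of collinearity), the general form would be:

$$AC_{it} = \beta_0 + \beta_1 BS_{it} + \beta_2 BI_{it} + \beta_3 DUAL_{it} + \beta_4 MAN_{it} + \beta_5 EO_{it} + \beta_6 NEO_{it} + \beta_7 CON_{it} + \beta_8 INST_{it} + \gamma_1 FSIZE_{it} + \gamma_2 BANK_{it} + \gamma_3 FGP_{it} + \text{Industry FE} + \varepsilon_{it} \quad (1)$$

$$AC_{it} = \beta_0 + \beta_1 BS_{it} + \beta_2 BI_{it} + \beta_3 DUAL_{it} + \beta_4 MAN^2_{it} + \beta_5 EO_{it} + \beta_6 NEO_{it} + \beta_7 CON_{it} + \beta_8 INST_{it} + \gamma_1 FSIZE_{it} + \gamma_2 BANK_{it} + \gamma_3 FGP_{it} + \text{Industry FE} + \varepsilon_{it} \quad (2)$$

where AC denotes one of the three agency cost proxies for firm i in year t, and the remaining symbols are defined in the following two subsections.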
Dependent Variable (Agency Costs)
In our study, we employed three indicators to represent agency costs.The initial indicator is the asset turnover, which has a negative relationship with agency costs and serves as a measure of management's efficiency in utilizing assets.According to Fleming et al. (2005) and Florackis (2008), firms with lower asset utilization efficiency are likely to incur higher agency costs.This proxy is measured by dividing a firm's annual net sales by its total assets at the end of the period.The second indicator is the ratio of SGA expenses, which is directly related to agency costs and encompasses expenditures that afford management a wide range of discretion.According to Ang et al. (2000) and Singh & Davidson III (2003), management can use SGA expenses to conceal expenses related to perks.Therefore, firms with higher SGA expenses are expected to have higher agency costs.This proxy is measured by dividing the amount of the firm's SGA expenses by the annual sales at the end of the year.
The third measure is the interplay of free cash flows with growth prospects.McKnight & Weir (2009) and Doukas et al. (2000) contend that substantial free cash flows enable managers to exert greater discretion, leading to increased agency costs.Furthermore, high-growth firms tend to manage their resources more efficiently, reducing the likelihood of having surplus free cash flow, as available cash is directed towards projects with positive net present value.Consequently, agency costs are more likely to be high in firms that combine high free cash flow with low-growth opportunities.Following the approach of Allam (2018), Doukas et al. (2000), and McKnight & Weir (2009), we introduced a dummy variable to denote the level of the firm's growth.Low-growth firms are assigned a value of 1, whereas high-growth firms are assigned a value of 0. We used the median growth rate for all firms for each year to distinguish firms experiencing high growth opportunities from those with low growth opportunities.If a firm's growth rate exceeds the sample median for a given year, it is classified as a high-growth firm and takes the value 0. Conversely, it is classified as a low-growth firm and takes the value 1.We then multiply the value of free cash flow for each firm and for each year by the growth dummy variable, which allows us to identify firms characterized by both high free cash flows and low-growth prospects.
Growth opportunities were assessed using Tobin's Q, expressed as market capitalization plus the book value of debt, scaled by total assets. Consistent with Doukas et al. (2000) and McKnight & Weir (2009), the dichotomous variable (Q) takes the value 1 if the sample median is greater than the firm's Tobin's Q, and 0 otherwise. Moreover, free cash flow (FCF) is measured as operating profit before amortization and taxes plus dividends and interest paid, standardized by market capitalization (McKnight & Weir, 2009). As a final step, we multiply the value of free cash flow (FCF) for each firm by the variable (Q). A high value of the interactive variable (FCF*Q) signifies elevated agency costs.
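As a minimal illustration of how the interactive variable described above can be constructed from panel data, the following Python/pandas sketch computes the yearly median split of Tobin's Q and the resulting FCF*Q term. The column names ('year', 'tobins_q', 'fcf') are hypothetical, and this is not the authors' code (the study was carried out in Stata).

```python
import pandas as pd

def add_fcfq(df: pd.DataFrame) -> pd.DataFrame:
    """Add the growth dummy Q and the interactive variable FCF*Q to a firm-year panel."""
    out = df.copy()
    # Yearly median of Tobin's Q across all sample firms
    median_q = out.groupby("year")["tobins_q"].transform("median")
    # Q = 1 for low-growth firms (Tobin's Q below the yearly median), 0 otherwise
    out["q_dummy"] = (out["tobins_q"] < median_q).astype(int)
    # High FCF combined with low growth -> high FCFQ, i.e. higher expected agency costs
    out["fcfq"] = out["fcf"] * out["q_dummy"]
    return out
```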
Independent Variables
Our empirical models encompass three sets of independent variables. The first set pertains to board characteristics, comprising board size, CEO duality, and board independence. Board size (BS) is calculated as the total number of directors. CEO duality (DUAL) is a dichotomous variable equal to 1 if the CEO also holds the position of chairman and 0 otherwise. Board independence (BI) is calculated as the percentage of independent directors on the board. The second set concerns ownership structure, including managerial, executive, non-executive, and institutional ownership, as well as ownership concentration. Managerial ownership (MAN) is the proportion of all shares owned by board members relative to the total number of shares issued by the firm. Executive ownership (EO) is the proportion of equity held by executive (insider) directors relative to the total number of firm shares. Non-executive ownership (NEO) is the proportion of equity held by non-executive directors relative to the total equity of the firm. Managerial ownership squared (MAN²) is the square of the managerial ownership ratio. Ownership concentration (CON) is the ratio of equity owned by shareholders holding more than 5% of the capital to the total number of shares. Institutional ownership (INST) is the proportion of total shares held by institutional investors.
The last set includes control variables, namely firm size, bank debt, and growth opportunities, along with industry fixed effects. Firm size (FSIZE) is measured by the natural logarithm of total assets at the end of the year. Bank debt (BANK) is the sum of short- and long-term bank debts relative to total firm assets at the closing date. Firm growth opportunities (FGP) are assessed using Tobin's Q. In line with prior research, such as Fleming et al. (2005) and Rashid (2013), and to control for the potential impact of industry on agency costs, we included a control variable termed "Industry FE", which represents the industry to which the firm belongs. This variable covers a set of nine industries, excluding the technology and telecommunications industry, enabling us to account for industry-specific effects in our analysis of agency costs across French firms quoted on the CAC 40 index.
The study variables, their definitions, and measurement methods can be summarized in Table 1.
Data Analysis
Considering the underlying characteristics of the dataset, which combines cross-sectional and time-series components, and to better investigate the association of agency costs with ownership structure and board characteristics in French firms quoted on the CAC 40 index during 2005-2023, panel data models are preferable. These models encompass three approaches: pooled regression, random effects, and fixed effects. The choice among them relies on the outcomes of several statistical tests: the Fisher F-test (pooled OLS vs. fixed effects), the Breusch-Pagan LM test (pooled OLS vs. random effects), and the Hausman test (fixed effects vs. random effects). These tests help identify the most suitable model for analyzing the data. All data analysis and tests were performed using Stata 17.
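The model comparison described above was carried out in Stata 17. As a rough sketch of the same logic in Python (purely illustrative, not the authors' code), the linearmodels package can fit the pooled, fixed effects and random effects specifications, and a Hausman-type statistic can be computed by hand; 'firm', 'year' and the regressor names are hypothetical placeholders.

```python
import numpy as np
from scipy import stats
from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

def select_panel_model(df, dep, regressors):
    """df: firm-year panel with columns 'firm', 'year', dep and the regressors."""
    data = df.set_index(["firm", "year"])          # entity-time MultiIndex
    y = data[dep]
    X = data[regressors].assign(const=1.0)

    pooled = PooledOLS(y, X).fit()
    fe = PanelOLS(y, X, entity_effects=True, drop_absorbed=True).fit()
    re = RandomEffects(y, X).fit()

    # Hausman-type test of FE vs. RE on their common coefficients
    common = fe.params.index.intersection(re.params.index)
    b = (fe.params[common] - re.params[common]).to_numpy()
    v = (fe.cov.loc[common, common] - re.cov.loc[common, common]).to_numpy()
    hausman = float(b @ np.linalg.pinv(v) @ b)
    p_value = float(1 - stats.chi2.cdf(hausman, df=len(common)))

    # Firm-clustered robust standard errors, analogous to Stata's vce(cluster panelid)
    fe_clustered = PanelOLS(y, X, entity_effects=True, drop_absorbed=True).fit(
        cov_type="clustered", cluster_entity=True)

    return pooled, fe, re, fe_clustered, hausman, p_value
```

A low Hausman p-value (below 0.05, say) would favour the fixed effects specification over random effects, mirroring the decision rule applied to the models reported in Tables 5 and 6.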
Sample Characteristics
Table 2 displays the descriptive statistics of the variables, revealing that the average values of the asset turnover ratio, the SGA expenses ratio, and the interaction of Tobin's Q with free cash flow among French firms quoted on the CAC 40 index are 66.2%, 23.3%, and 5.6%, respectively. The average number of board members is 13.46, with an average independence ratio of 57.6%. Furthermore, 54.4% of French firms quoted on the CAC 40 index separate the role of board chairman from that of CEO.
Regarding ownership structure, the average managerial ownership ratio in French firms quoted on the CAC 40 index is 1.1%, with approximately 0.2% being the average ratio of shares held by executive members and 0.9% the average ratio of shares owned by non-executive members. The average ownership concentration ratio is approximately 27.2%, while institutional ownership averages 64.6%. Additionally, the average short- and long-term bank debt ratio stands at 21.7%, while Tobin's Q averages 1.58.
Referring to Table 3, we observe a negative correlation between asset turnover and board size, statistically significant at the 1% level. Similarly, a positive correlation between SGA expenses and board size is observed, significant at the 5% level. The correlation matrix also reveals a negative correlation between FCFQ and board size, significant at the 1% level. In contrast, the results in Table 3 show significant negative correlations of agency costs with CEO duality and institutional ownership. Furthermore, there are significant positive correlations of managerial ownership, managerial ownership squared, ownership concentration, and non-executive ownership with agency costs in French firms quoted on the CAC 40. Executive ownership, however, does not correlate with any of the agency cost indicators. Finally, the relationship between board independence and agency costs is unclear: the correlation is negative and significant with SGA expenses while simultaneously positive and significant with FCFQ.
Concerning the control variables, we find that at the 1% significance level, asset turnover and firm size are negatively correlated.However, the association between bank debt and agency costs appears unclear.Similarly, regarding firm growth, there exists a notable positive correlation with agency costs as measured by SGA expenses, while also demonstrating a significant negative correlation with the interactive variable FCFQ.
Regression Findings
To ensure reliable statistical analysis, certain conditions must be met, including normally distributed data, the absence of multicollinearity, homoscedasticity, and no autocorrelation of residuals. Concerning normality, many statisticians consider this condition less critical when the sample size exceeds 30. Multicollinearity between independent variables is usually detected using the variance inflation factor (VIF), with collinearity typically considered present if the VIF exceeds 10 (Gujarati & Porter, 2008). Table 4 shows that all VIF values are below 10, suggesting no collinearity issues among the predictors. However, the correlation results indicate collinearity between MAN and EO (r = 0.667), between MAN and NEO (r = 0.792), and between MAN² and EO (r = 0.940). To address this issue, MAN and MAN² were included in separate regression models.
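A brief sketch of the VIF screening reported in Table 4, using statsmodels in Python rather than Stata; the column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df: pd.DataFrame, regressors) -> pd.Series:
    """Return the VIF of each regressor; values above 10 would flag collinearity."""
    X = sm.add_constant(df[regressors].dropna())
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return pd.Series(vifs, name="VIF")
```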
Additionally, we conducted tests for panel-level heteroskedasticity and autocorrelation using the likelihood-ratio test and the Wooldridge test, although these are not reported separately. To address the issues of autocorrelation and heteroskedasticity, we employed Stata's 'vce(cluster panelid)' option, which yields robust standard errors that are resilient to heteroskedasticity and autocorrelation, thereby ensuring the reliability of our statistical estimates.
Table 5 displays the estimates for the relationships between the characteristics of the ownership structure, board of directors, and agency costs in French firms quoted on the CAC 40 index.In terms of the asset turnover ratio, a positive coefficient suggests low agency costs.Conversely, for proxies of SGA expenses and FCFQ, a positive coefficient implies elevated agency costs.
The Fisher test (F-test) and Breusch-Pagan LM test (BP LM test) results are significant at the 1% level for all models except model (6), for which the BP LM test is significant at the 5% level, suggesting that these models exhibit individual-specific effects (fixed or random effects). Moreover, the Hausman test is significant at the 1% level only for models (2), (5), and (6), so we reject the null hypothesis that the random effects model is appropriate for these models. Furthermore, all models are significant at the 1% level, as the p-values of the Wald χ² and F-statistics were below 1%.
Contrary to the expectations of agency theory, the association between the discretionary expenses ratio (SGA) and board size is negative and significant at the 10% level in model (4), whereas board size shows no significant association with asset utilization efficiency or with the agency costs associated with free cash flows (FCFQ). This result suggests that French firms with large boards of directors are more effective in limiting agency costs. It is in line with Allam (2018), Nguyen et al. (2020), Owusu & Weir (2018), and Rashid (2013), who found that boards with a greater number of directors are more efficient than those with fewer members and are able to mitigate agency costs. Conversely, our result is not consistent with many previous studies, such as Chaudhary (2022), Florackis (2008), Singh & Davidson III (2003), and Truong & Heaney (2013), which suggest that smaller boards are more efficient in moderating agency costs.
As demonstrated by models (1) and (2), we found evidence indicating that CEO duality has a statistically significant adverse impact on management's efficiency in utilizing the firm's assets at the 10% significance level, thereby leading to increased agency costs.This result means that CEO duality is prejudicial to the effectiveness of boards of directors in the context of French firms.This finding aligns with the assumptions of Jensen & Meckling (1976) and the result of the study by Aktas et al. (2018), which suggest that CEO duality is considered undesirable due to its potential negative effects on corporate governance and firm efficiency.Thus, when one individual simultaneously holds both the positions of board chairman and CEO, it can weaken the board's monitoring effectiveness, leading to inefficiencies in decision-making and resource allocation within the firm, especially in situations where external monitoring mechanisms are ineffective or insufficient.
Furthermore, contrary to the assumptions of agency theory, we find that executive ownership has a significant positive influence on discretionary expenses at a significance level of 10%.Therefore, augmenting the portion of common stock owned by executive members is expected to increase conflicts of interest between directors and equity holders.This can increase agency costs for French firms.Similarly, it also appears that non-executive ownership has a statistically positive relationship with agency costs associated with both asset utilization efficiency and free cash flow at a significance level of 5%.This reveals that the ownership of firm shares by the board members does not lead to a reduction in agency costs in French firms, whether those associated with the use of assets, discretionary expenses, or free cash flows.On the contrary, the low levels of executive and non-executive ownership exacerbate agency problems in the context of French firms.Our results are consistent with the findings of Allam (2018)'s study, which demonstrated that non-executive ownership relates positively to the agency costs of investment.On the other hand, our findings are inconsistent with the results of Florackis (2008), who emphasized the effectiveness of executive and non-executive ownership as an incentive mechanism in mitigating agency costs in UK firms.
As shown by models (2) and (6), and contrary to the predictions of agency theory, managerial ownership relates negatively and significantly to asset utilization efficiency at the 10% level, while it relates positively and significantly to the agency costs interacted with free cash flows at the 1% level, revealing that greater managerial ownership leads to a rise in agency costs in French firms. This finding is consistent with Nguyen et al. (2020), who found that managerial ownership relates negatively to the asset turnover ratio, implying an increase in agency costs. By contrast, this result is inconsistent with several studies, including Doukas et al. (2000), Jelinek & Stuerke (2009), McKnight & Weir (2009), Owusu & Weir (2018), Rashid (2016), Singh & Davidson III (2003), and Truong & Heaney (2013), which indicate that when managerial ownership is low, a convergence of interests is achieved between managers and owners, leading to a reduction in agency costs within the firm.
Additionally, the squared value of managerial ownership (MAN²) exhibits a significant negative relationship with the agency costs associated with both discretionary expenses and free cash flows, at the 10% and 5% significance levels, respectively. This finding suggests a curvilinear, inverted U-shaped relationship between agency costs and the level of managerial ownership. Consequently, in the context of French firms, once managerial ownership reaches high levels, the interests of managers become closely aligned with those of shareholders, which motivates them to select alternatives that benefit shareholders and increase firm value, ultimately decreasing agency costs within the firm. This result is in line with Allam (2018) and McKnight & Weir (2009), who found that intensive managerial ownership relates negatively to agency costs. However, our finding contradicts the assumptions of agency theory and the results of Doukas et al. (2000) and Jelinek & Stuerke (2009), which suggest that higher managerial ownership is associated with higher agency costs (managerial entrenchment).
The results also show that board independence, ownership concentration, and institutional ownership have no significant effects on agency cost proxies in French firms quoted on the CAC 40 index.
Regarding the control variables, firm size has a positive and statistically significant effect on agency costs as measured by asset turnover, at the 1% level. This means that large firms quoted on the CAC 40 incur a high level of agency costs. This finding aligns with the predictions of agency theory and with the findings of Allam (2018) and Rashid (2013), who suggest that large firms are characterized by greater complexity, more challenging monitoring, and greater managerial discretion, which likely leads to an increase in agency costs.
Additionally, we find that bank debt is significantly and negatively related to asset utilization efficiency at the 1% significance level. This suggests that firms listed on the CAC 40 with high levels of bank debt experience high levels of agency costs. However, this finding contradicts the predictions of agency theory and the results of Ang et al. (2000), Fleming et al. (2005), Florackis (2008), McKnight & Weir (2009), and Nguyen et al. (2020), who noted that agency costs are expected to decrease with the increased monitoring imposed on the firm by debtholders.
Furthermore, the findings show that growth prospects (FGP) have a significant negative relationship with agency costs associated with both asset utilization efficiency and free cash flows, at a 1% significance level.This indicates that firms listed on the CAC 40 with high growth opportunities are characterized by lower agency costs.This result aligns with Ang et al. (2000), Chaudhary (2022), Fleming et al. (2005), and Rashid (2013), who assert that firms with high growth prospects may have lower agency costs.
As previously mentioned, we included industry fixed effects (Industry FE) in all models, excluding models (2), (5), and (6) due to collinearity.To assess whether there are differences in agency cost levels across industries, we conduct a joint test of the coefficients associated with the industry dummy variables.Consistent with Fleming et al. (2005), we find that the coefficients of industry, although not reported separately, are jointly significant at the 1% and 5% significance levels, where the p-value for the Wald test was less than 5%.This implies the existence of industry-fixed effects within our models, indicating differences in agency cost levels across industries to which French firms belong.This finding supports the notion that industry has an influence on agency costs.
The Influence of Growth Prospects on the Association of Agency Costs with Governance Mechanisms
Many studies (e.g., Doukas et al., 2000; Florackis, 2008) show the importance of considering the interactive effect of growth prospects when discussing the association between agency costs and governance mechanisms. They provide empirical evidence that the efficacy of governance practices in mitigating agency costs is fundamentally linked to a company's growth prospects. Typically, high-growth firms face agency problems associated with greater information asymmetry, so governance mechanisms are expected to play a greater role in mitigating these problems in such firms. Similarly, low-growth firms face challenges related to the use of free cash flows, and corporate governance mechanisms are expected to play a greater role in mitigating this type of problem in low-growth firms. One of the aims of our study is to explore whether a firm's growth opportunities influence the relationship between ownership structure, board characteristics, and agency costs in French firms quoted on the CAC 40 index. For that reason, we re-estimated these relationships while considering the moderating role of the firm's growth opportunities (FGP). The estimation results are summarized in Table 6. According to Table 6, the Fisher, Breusch-Pagan LM, and Hausman tests, although not reported separately, all indicate statistical significance at the 1% level for models (8) to (12), implying that these models feature individual-specific fixed effects. For model (7), the Hausman test suggests individual-specific random effects, since the p-value exceeds the 5% threshold. It is worth noting that all models are statistically significant at the 1% level.
Consistent with our earlier finding, we find that board size has a significant negative effect on agency costs associated with free cash flows at the 1% significance level.This reveals that larger boards of directors are likely to provide more diverse perspectives, expertise, and oversight, which can help mitigate agency costs in French firms quoted on the CAC 40.This finding supports our H1.
In contrast to H2, our findings reveal that board independence negatively and significantly affects asset turnover at the 10% level. This means that boards comprising a substantial number of independent directors are less efficient in employing the firm's assets. Consequently, H2 is rejected.
One plausible explanation is that the presence of a large number of independent directors may diminish the effectiveness of the board of directors.Management might encounter challenges due to the stringent oversight and cautious decision-making imposed by independent directors, compounded by their limited familiarity with French firms.This result contradicts the predictions of agency theory, which posits that the independence of the board serves as an effective mechanism to mitigate agency costs.However, it is in line with the viewpoint of stewardship theory, which contends that executive directors possess a greater capacity to attain organizational goals and enhance decision-making within the firm.Furthermore, our results are consistent with those of Florackis (2008), who observed a positive relationship between board independence and agency costs in UK firms.
In contrast to the results in Table 5, the findings indicate that CEO duality has no significant effect on any of the agency cost indicators in French firms quoted on the CAC 40 index. This result aligns with Allam (2018), Florackis (2008), McKnight & Weir (2009), Nguyen et al. (2020), Owusu & Weir (2018), and Rashid (2013), who showed that CEO duality does not appear to harm a firm's efficiency or lead to higher agency costs. Therefore, H3 is rejected.
Confirming our previous findings, the results reveal that both executive and non-executive ownership have a positive and significant influence on agency costs within French firms, as represented by both the SGA expenses and the FCFQ indicators, at the 5% level. Furthermore, the results suggest that low levels of managerial ownership are related to lower efficiency in utilizing the firm's assets, at the 10% significance level, whereas when managerial ownership reaches high levels it leads to more efficient use of assets. This result confirms the existence of an inverted U-shaped curvilinear relationship between managerial ownership and agency costs in French firms. Therefore, H4 is rejected.
Additionally, the findings indicate that ownership concentration has a significant positive effect on agency costs in French firms, as measured by the asset turnover ratio, at the 5% level. This means that greater blockholder ownership in French firms is likely to generate more conflicts of interest, or management behavior that does not aim to maximize shareholder value, potentially leading to higher agency costs associated with mitigating these conflicts. This finding is consistent with the results of Truong & Heaney (2013), who found a negative relationship between blockholders' equity and the asset turnover ratio. However, our findings do not align with those of Florackis (2008), who demonstrated that ownership concentration is a monitoring mechanism that effectively reduces agency costs. Consequently, H5 is rejected.
Similarly, we have found evidence indicating that institutional ownership significantly and positively impacts agency costs related to discretionary expenses at the 10% level.This suggests that despite the advantages related to institutional ownership, such as improved oversight and governance, it can also introduce complexities and challenges that contribute to higher agency costs within the French firms quoted on the CAC 40 index.Furthermore, in the context of French firms, institutional investors may have their own objectives that may not align with those of other shareholders or the firm's management (e.g., prioritizing short-term financial gains over long-term value creation).This misalignment can lead to conflicts of interest, resulting in inefficient decision-making processes and ultimately higher agency costs for French firms.Our result aligns with those of Doukas et al. (2000), McKnight &Weir (2009), andRashid (2013), who demonstrate that increased institutional ownership is likely to lead to increased agency costs.Therefore, H6 is rejected.
Concerning the control variables, the results support our previous findings.The relationships between firm size and bank debt and asset utilization efficiency are significantly negative.This confirms that firms quoted on the CAC 40 index, characterized by large size and high bank debts, bear higher agency costs than other firms.Consequently, H7 is rejected.
The findings additionally indicate a significant negative association between asset utilization efficiency and growth opportunities, at the 10% significance level, which is inconsistent with our earlier finding. The results also indicate that growth opportunities are significantly negatively related to agency costs, suggesting that high-growth firms incur lower agency costs of investment. Therefore, H8 is rejected.
As mentioned previously, it's possible that the growth prospects of the firm affect the association between governance mechanisms and agency costs within the framework of French firms.Our findings support the presence of four interaction effects.
Regarding the interactive variable between board size and growth opportunities (BS*FGP), the findings reveal a positive and statistically significant relationship with agency costs associated with free cash flow, at the 5% level.This reveals that the effectiveness of board size as a monitoring mechanism for mitigating agency costs is more pronounced for French firms with low-growth opportunities.
In terms of the interactive variable between board independence and growth opportunities (BI*FGP), the results indicate a significant and positive relationship with the efficiency of asset utilization at the 5% significance level.This suggests that the effectiveness of board independence in mitigating agency costs in firms quoted on the CAC 40 index is more pronounced for high-growth firms.
Similarly, concerning the interactive variables between executive ownership and growth opportunities (EO*FGP) as well as non-executive ownership and growth opportunities (NEO*FGP), the results indicate a negative and statistically significant relationship between these two interactive variables and agency costs associated with both discretionary expenses and free cash flows, at 1% and 5% levels of significance, respectively.This implies that the effectiveness of executive and non-executive ownership as an incentive mechanism for reducing agency costs is more pronounced in high-growth French firms.
Overall, we have evidence supporting the idea that growth opportunities can impact the relationship between agency costs and corporate governance mechanisms.Thus, H9 is accepted.
Conclusions
In this study, we have investigated the influence of corporate governance mechanisms on agency costs in French firms quoted on the CAC 40 index from 2005 to 2023.Our focus lies on examining the impact of ownership structure and the board of directors' characteristics on agency costs.Additionally, we have investigated the effect of a firm's growth opportunities on the association of agency costs with governance mechanisms in French firms.
Contrary to the assumptions of agency theory, our empirical results reveal that French firms with expanded boards are more able to mitigate agency costs.However, managerial ownership as an incentive mechanism has proven ineffective in reducing agency costs.Increasing levels of managerial ownership lead to heightened conflicts of interest and exacerbate agency problems, resulting in increased agency costs.Nevertheless, reaching high levels of managerial ownership creates a compatibility of interests between management and shareholders, thereby reducing agency costs.Thus, we find no evidence of managerial entrenchment behavior in French firms quoted on the CAC 40 index.
Furthermore, our findings suggest that increasing board independence may negatively affect management's efficiency in using the firm's assets.This result supports the view that executive directors have a greater ability to achieve organizational goals and enhance decision-making within the firm, given their extensive knowledge of the firm compared to independent directors.It also opens the discussion on the effectiveness of board independence as a monitoring mechanism under the French corporate governance code.
We also found weak evidence indicating that ownership concentration, institutional ownership, and CEO duality may negatively impact a firm's asset utilization efficiency and contribute to increased agency costs associated with discretionary expenses.These findings are very interesting and raise questions about the effectiveness of these mechanisms in the French context.
Additionally, our results confirm that large French firms tend to incur higher agency costs, aligning with agency theory predictions.Unexpectedly, increased monitoring by debtholders exacerbates agency problems and increases agency costs in French firms, contrary to agency theory predictions.
Finally, our empirical results have shown that high-growth French firms tend to incur lower agency costs than firms with low growth.Moreover, the effectiveness of certain mechanisms of corporate governance in moderating agency costs is influenced by the growth prospects of the firm.Specifically, board independence and executive and non-executive ownership are effective mechanisms for firms with high-growth prospects, whereas board size is effective for firms with limited growth prospects.
In conclusion, our study makes a substantial contribution to assessing the corporate governance framework in France and the efficacy of governance mechanisms to mitigate agency issues within French firms.Additionally, it offers valuable insights to French firms regarding ideal board structures and managerial ownership levels to diminish agency costs.Moreover, it highlights the necessity for policymakers, legal authorities, and practitioners in France to reconsider corporate governance mechanisms, given the failure of many of these mechanisms to reduce agency costs.
where AC represents the agency cost proxies, BS is board size, and BI is board independence.
Table 1 .
Study variables and their descriptions
Table 4 .
The VIFs value
Table 5 .
The impact of ownership structure and board characteristics on the costs of agency
Table 6 .
The moderating effect of firm growth prospects in the relationship between agency costs, ownership structure, and board characteristics
"year": 2024,
"sha1": "73faaaf1f2532e86ffe8da63c82e407b67232c30",
"oa_license": "CCBY",
"oa_url": "https://library.acadlore.com/JCGIRM/2024/11/1/JCGIRM_11.01_04.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "30e1cc323583bb5ce47d2cdbe1e83bca37d1caca",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
Palaeoecological inferences for the fossil Australian snakes Yurlunggur and Wonambi (Serpentes, Madtsoiidae)
Madtsoiids are among the most basal snakes, with a fossil record dating back to the Upper Cretaceous (Cenomanian). Most representatives went extinct by the end of the Eocene, but some survived in Australia until the Late Cenozoic. Yurlunggur and Wonambi are two of these late forms, and also the best-known madtsoiids to date. A better understanding of the anatomy and palaeoecology of these taxa may shed light on the evolution and extinction of this poorly known group of snakes and on early snake evolution in general. A digital endocast of the inner ear of Yurlunggur was compared to those of 81 species of snakes and lizards with known ecological preferences using three-dimensional geometric morphometrics. The inner ear of Yurlunggur most closely resembles both that of certain semiaquatic snakes and that of some semifossorial snakes. Other cranial and postcranial features of this snake support the semifossorial interpretation. While the digital endocast of the inner ear of Wonambi is too incomplete to be included in a geometric morphometrics study, its preserved morphology is very different from that of Yurlunggur and suggests a more generalist ecology. Osteology, palaeoclimatic data and the palaeobiogeographic distribution of these two snakes are all consistent with these inferred ecological differences.
Introduction
Several recent studies have shown a close correlation between the shape of part or all of the inner ear apparatus (sacculus, lagena and semicircular canals) and ecological preferences in modern squamate reptiles, i.e. lizards and snakes (e.g. [1][2][3]). In addition, these studies [1][2][3] were focused on testing whether it is possible to draw inferences about the palaeoecology of extinct taxa by comparing the morphology of their inner ears with that of modern species with recognized ecological preferences. It was demonstrated that ecology has heavily influenced the morphology of the inner ear throughout snake evolution [1], though phylogeny also plays a part [3]. A study of the semicircular canals among Greater Antillean Anolis lizard species [2] found 'ecomorph' as the most important covariate of morphology; fossil taxa were found to have different canal shapes and inferred to possess different ecological preferences from modern species. The use of three-dimensional geometric morphometrics to quantify inner ear shape variables and investigate correlations to ecological preferences in both modern and fossil taxa of squamate reptiles is thus showing promising results, particularly when balanced against phylogenetic hypotheses [2].
Wonambi naracoortensis, from Pliocene and Pleistocene deposits in southern Australia [7,[22][23][24], and Yurlunggur spp. [8], from the Late Oligocene to Middle Miocene deposits of Riversleigh, in northern Queensland, are the two best known of all madtsoiid snakes, with most cranial and postcranial elements known and described. Owing to their completeness, they represent the best sources of information on the anatomy of this extinct lineage, and may help shed light on the ecology and habitat preferences of this group. Therefore, we micro-computed tomography (CT) scanned the braincases of both snakes and obtained digital endocasts of their inner ears (figure 1). The most complete endocast, that of Yurlunggur sp. (QMF 45 111-45 391; see end of Material and methods section for the list of institutional abbreviations) was then compared with those of 81 extant species of squamate reptiles of known ecological preference using three-dimensional geometric morphometrics. The less complete inner ear of Wonambi (SAMA P30178A) could not be landmarked, and thus was compared only based on gross morphology to the inner ears of Yurlunggur and other squamates. Other cranial and postcranial features were also examined to further test our conclusions regarding the palaeoecology of these taxa.
Because the effect of ontogenetic variation on the shape of inner ear endocasts of squamate reptiles is currently unknown, we also provide here, for the first time, a quantitative analysis of ontogenetic trajectories in a selection of eleven taxa, inclusive of both lizards and snakes. The results could be of use to researchers with inner ear data for immature specimens, either extinct or extant.
Material and methods
Landmark coordinates for the inner ears of 79 squamate reptiles were taken from the electronic supplementary material in [3,25] (i.e. all except Teretrurus and Dinilysia; Teretrurus was replaced in this study by another uropeltid snake, Rhinophis, which we consider a more derived exemplar for this group; and Dinilysia was excluded because of its unknown ecology). The inner ear of Platecarpus was taken from Yi & Norell [1,26]. Micro-CT scan data for Atractaspis, Rhinophis, Wonambi and Yurlunggur (not sampled in [3]) were acquired using a Skyscan 1076 at Adelaide Microscopy (University of Adelaide, Adelaide, South Australia) (electronic supplementary material S1, table S1; see this table also for a list of specimen numbers and taxonomic authorities). The software NRecon (Bruker microCT) was used to reconstruct stacks of images (.bmp) from the micro-CT scan data, and a digital endocast of the right inner ear was produced for each of these specimens via segmentation in Avizo v. 9.0 (Thermo Scientific™).
Three of the four new digital endocasts (the inner ear of Wonambi was not landmarked due to incompleteness) were landmarked in Landmark Editor v. 3.6 [27], following the procedure outlined in [3] (see electronic supplementary material, S2).
A recent study on mammalian bony labyrinths [28] pointed out that digital thresholding of CT scan data, the procedure used to obtain surface renderings of anatomical structures to be analysed using geometric morphometrics, can lead to artificial variation in the thickness of the semicircular canals. For this reason, [28] recommended to digitize landmarks on a centreline that runs along the canals rather than on their surface. However, this happens only when considerably different thresholds are used in the different specimens [28], and because in our case all surface files but one (Platecarpus, see above) were extracted by the same person, such large inconsistencies in the thresholding can be excluded. Moreover, because our landmarking scheme makes use of points on the sacculus and ampulla and not only on the canals, reducing the inner ear endocast to its midline skeleton via thinning of the volume [28] was not a valid option.
Measurement error in the placing of our selection of landmarks on the inner ear endocasts was tested and confirmed to be negligible in the previous study that used the same core dataset and landmark scheme [3], and will not be further discussed here.
A canonical variates analysis (CVA) was used to display the separation of the various groups in shape space. This analysis was first run in R using the package Morpho v. 2.4.1.1 [33] with jackknife cross-validation (1000 replicates), and plots and diagrams were then produced in MorphoJ [36].
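The CVA and its jackknife cross-validation were run with the R package Morpho, as stated above. A rough conceptual analogue in Python (not the authors' pipeline) is a leave-one-out cross-validated linear discriminant classification of the aligned shape data; the array names below are hypothetical, and the coordinates are first reduced to a few principal components to avoid singular covariance matrices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

def loo_classification_accuracy(shapes: np.ndarray, groups: np.ndarray) -> float:
    """shapes: (n_specimens, n_landmarks * 3) Procrustes-aligned coordinates.
    groups: (n_specimens,) ecological labels, e.g. 'fossorial', 'semiaquatic'."""
    model = make_pipeline(PCA(n_components=0.95, svd_solver="full"),
                          LinearDiscriminantAnalysis())
    scores = cross_val_score(model, shapes, groups, cv=LeaveOneOut())
    return scores.mean()   # 1.0 corresponds to 100% classification accuracy
```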
We tested for the presence of a consistent ontogenetic pattern in the growth of the inner ears of eleven juvenile-adult pairs of selected squamates (inclusive of both lizards and snakes: Acrochordus arafurae, Anilios (Ramphotyphlops) bicolor, Aspidites ramsayi, Boiga irregularis, Candoia carinata, Cerberus rhynchops, Ctenophorus decresii, Ctenotus spaldingi, Cylindrophis ruffus, Notechis scutatus, Varanus gilleni). The pairs of inner ear endocasts were landmarked using the same scheme adopted for the other specimens and described in electronic supplementary material, S3. We then ran a principal components analysis (PCA) using these morphometric data, and the ontogenetic trajectories between juveniles and adults of each pair were examined in the morphospace defined by the first three principal components (PCs).
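As a sketch of how such juvenile-to-adult trajectories can be quantified (assuming the landmark configurations are already superimposed; this is an illustrative Python fragment rather than the R-based procedure actually used):

```python
import numpy as np
from sklearn.decomposition import PCA

def ontogenetic_trajectories(juvenile_shapes: np.ndarray, adult_shapes: np.ndarray):
    """Both inputs: (n_species, n_landmarks * 3) aligned coordinates, rows matched by species.
    Returns the trajectory vectors in PC1-PC3 space, their lengths and pairwise angles."""
    scores = PCA(n_components=3).fit_transform(np.vstack([juvenile_shapes, adult_shapes]))
    n = juvenile_shapes.shape[0]
    vectors = scores[n:] - scores[:n]               # juvenile -> adult shift per species
    lengths = np.linalg.norm(vectors, axis=1)       # magnitude of ontogenetic change
    unit = vectors / lengths[:, None]
    # Small pairwise angles would indicate a consistent ontogenetic pattern across taxa
    angles = np.degrees(np.arccos(np.clip(unit @ unit.T, -1.0, 1.0)))
    return vectors, lengths, angles
```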
The phylogenetic tree adopted for the various phylogenetic tests (phylogenetic signal, phylogenetic ANOVA and phylogenetic PCA) using extant taxa was obtained from Zheng & Wiens [37], with unsampled species pruned using Mesquite v. 3.2 [38], but all branch lengths were retained. Whenever one of our selected species was missing in the tree, we selected a close relative (see electronic supplementary material, S2). Three additional trees inclusive of the fossil taxa Platecarpus and Yurlunggur (three alternative positions: see below) were obtained after insertion of these fossils into the tree of extant species using the editing tools of Mesquite v. 3.2 [38]. Platecarpus was positioned according to the topology in [39] and inserted midway along the relevant branch, i.e. halfway between the node representing the most recent common ancestor of extant snakes (Ophidia) and that of the clade ((Anguimorpha, Iguania), Ophidia). In a similar way, Yurlunggur was inserted in three different positions, resulting in three alternative tree topologies to accommodate phylogenetic uncertainty: (1) Yurlunggur was placed as a stem ophidian (Tree 1) (e.g. [18]); (2) Yurlunggur was placed as a stem alethinophidian (Tree 2) (e.g. [40]); (3) Yurlunggur was placed within Alethinophidia (Tree 3) (e.g. [21]), in particular, in a position just above Anilius and Tropidophis (see electronic supplementary material S2, figure S2). Platecarpus tympaniticus was assigned a tip age of 81 Myr [41], while Yurlunggur sp. was assigned a tip age of 23 Myr [8,42].
We assessed the effect of phylogenetic signal using the function 'physignal' in the package geomorph v. 3.0.3 [32], using a phylogeny with branch lengths and divergence times between the 80 sampled extant taxa from Zheng & Wiens [37] (unsampled terminal taxa were pruned). The fossil Platecarpus was inserted into this phylogeny based on [39], and Yurlunggur was inserted in three alternative positions based on [18,21,40] (see above). Thus, three supertrees were used to take into account the uncertainty regarding the placement of Yurlunggur. Phylogenetic signal was tested using each of the three alternative trees inclusive of all 82 taxa. The test was performed with 10 000 random permutations.
We carried out non-phylogenetic and phylogenetic Procrustes analyses of variance (ANOVA) using a randomized residual permutation procedure (10 000 iterations) [43][44][45] to test for correlation between shape and groups defined based on ecological habits. The phylogenetic ANOVA was run only using the tree of 80 extant species, because that is the tree where ecological data are available for all taxa and where there is less uncertainty about phylogenetic relationships.
We first used an ordinary (i.e. non-phylogenetic) PCA to see where Yurlunggur is located in shape space compared to other taxa based only on morphology. We then ran a phylogenetically informed PCA (phylogenetic PCA or PPCA) to provide a correction for the distribution in the shape space of the taxa that may be affected by phylogenetic signal. The phylogenetic PCA was carried out in the R package phytools v. 0.6-00 (function phyl.pca) [34,46] and the model of evolution was set to uniform Brownian motion.
We tested for a possible correlation between centroid size (CS, an index of overall size) [47] and first principal components (PC1) from both ordinary and phylogenetic PCAs using Pearson, Kendall and Spearman methods [48]. PC1 was selected because in tests of multivariate allometry PC1 is the most appropriate PC as it treats all variables equally [49], and because in biological datasets size is typically the dominant factor contributing to variation, and PC1 is that direction of multidimensional space that accounts for the greatest proportion of variance [49].
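For illustration, the three correlation tests can be run on centroid-size and PC1 vectors as below; this is a Python stand-in with made-up values, not the R code used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
centroid_size = rng.lognormal(mean=2.0, sigma=0.5, size=80)   # hypothetical CS values
pc1_scores = 0.01 * rng.standard_normal(80)                   # hypothetical PC1 scores

for name, test in [("Pearson", stats.pearsonr),
                   ("Kendall", stats.kendalltau),
                   ("Spearman", stats.spearmanr)]:
    r, p = test(centroid_size, pc1_scores)
    print(f"{name}: r = {r:.3f}, p = {p:.3f}")
```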
We included a classification (group affinity) test using the 'typprobClass' function in the package Morpho v. 2.4.1.1 [33], which calculates the typicality probability that a given species belongs to any given group (in this case ecological categories) based on the Mahalanobis distance [50]. This was meant to ascertain which ecological group Yurlunggur is closest to, based on the scores of the first two PCs (tests performed on scores from both ordinary and phylogenetic PCAs; ecological groups were defined for all taxa except Yurlunggur).
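The idea behind the typicality probability can be sketched as follows: the squared Mahalanobis distance of a specimen to each group mean is converted into a chi-square tail probability, so the values for different groups need not sum to one. This simplified Python stand-in assumes a pooled covariance across groups and is not Morpho's exact implementation; the group data and the unknown specimen are made up.

```python
import numpy as np
from scipy.stats import chi2

def typicality_probabilities(x, groups):
    """Chi-square tail probability of x's squared Mahalanobis distance
    to each group mean, using a pooled within-group covariance."""
    labels = sorted(groups)
    means = {g: groups[g].mean(axis=0) for g in labels}
    centered = np.vstack([groups[g] - means[g] for g in labels])
    inv_cov = np.linalg.pinv(np.cov(centered, rowvar=False))
    df = x.shape[0]
    probs = {}
    for g in labels:
        d = x - means[g]
        d2 = float(d @ inv_cov @ d)     # squared Mahalanobis distance
        probs[g] = chi2.sf(d2, df)      # typicality probability for group g
    return probs

# Hypothetical PC scores (first two PCs) per ecological group and an unknown specimen.
rng = np.random.default_rng(2)
groups = {g: rng.standard_normal((15, 2)) + offset
          for g, offset in [("fossorial", 0.5), ("semiaquatic", -0.5), ("generalist", 0.0)]}
print(typicality_probabilities(np.array([0.4, 0.3]), groups))
```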
Information about the ecological preferences of the selected species (except Yurlunggur, which was left as unknown) was obtained from a survey of the literature (electronic supplementary material S4, table S2). We adopted the same five ecological categories of [3], keeping in mind the same caveats: (i) generalist, squamates that are commonly found in a variety of habitats and typically forage on the ground surface; (ii) arboreal, species that spend most of their time basking and foraging in trees or shrubs; (iii) fossorial, species that spend a considerable amount of time underground in burrows or that forage under loose soil and vegetation; (iv) aquatic, species that spend most or all of their time in an aquatic environment and often show anatomical specializations for swimming (e.g. sea snakes); and (v) semiaquatic, species that spend considerable amounts of time in the water, but often emerge to feed, bask or reproduce (e.g. Eunectes, Natrix).
Results
As noted above, the digital endocast of the inner ear of Yurlunggur was sufficiently complete for quantitative morphometric analysis, while that of Wonambi was too incomplete, and will be discussed qualitatively in the Discussion.
The results of the CVA ( figure 3) show that ecological groups can be separated in shape space with a classification accuracy of 100% (K = 1). The percentage of variance explained by each canonical variate (CV) is: 47.6% for CV1, 20.5% for CV2, 17.7% for CV3, and 14.2% for CV4. The first two CVs separate semiaquatic (high values of CV2) and fossorial/semifossorial taxa (high values of CV1) from all other categories. In particular, while both semiaquatic and fossorial/semifossorial taxa have an enlarged saccular region, in semiaquatic forms the inner ear is characterized by a relatively larger lateral ampulla. High positive values of the third CV (CV3 > 3) distinguish (fully) aquatic taxa from the rest, and this translates morphologically in the combination of a relatively smaller saccular region, a shorter anterior semicircular canal, and a more mediolaterally compressed inner ear as a whole. Positive values of CV4 appear to be typical of arboreal forms, while negative values are typical of generalists. Morphologically, this corresponds to a relatively larger area enclosed by the anterior semicircular canal in arboreal forms, where the anterodorsal margin of the canal tends to be convex dorsally rather than concave.
The PCA of the inner ears from juveniles and adults of eleven different species showed that while some taxa show considerable ontogenetic shape change (e.g. Varanus), others show little such transformation (e.g. Boiga) (electronic supplementary material S2, figure S3). There is no common trajectory in the shape space defined by PC1 and PC2: the trajectories varied stochastically in length, axis orientation and direction (electronic supplementary material S2, figure S3). However, in the shape space defined by PC1 and PC3, several taxa had trajectories with similar orientation and direction, namely the lizards Ctenophorus and Varanus (both trending towards more positive values of PC3 and slightly more negative values of PC1), and the snakes Acrochordus, Anilios, Aspidites, Cerberus, Cylindrophis and Notechis (all trending towards more positive values of both PC1 and PC3). Interestingly, the snakes Candoia and Boiga have trajectories that go in opposite directions compared to all other snakes, which indicates lack of a consistent ontogenetic pattern across snakes as a whole.
Tests for phylogenetic signal found a statistically significant correlation between evolutionary history and shapes regardless of the phylogeny adopted (the null hypothesis of no phylogenetic signal present was rejected; Tree 1: K = 0.408, p = 0.0001; Trees 2 and 3: K = 0.410, p = 0.0001).
Both ordinary and phylogenetically informed Procrustes ANOVA found a statistically significant correlation between shapes and ecology (the null hypothesis of no difference between group means was rejected; ordinary Procrustes ANOVA: F79 = 3.4578, p = 0.0001, Rsq = 0.156; phylogenetic Procrustes ANOVA: F79 = 4.3988, p = 0.002, Rsq = 0.190). In other words, the variability between groups is significantly more than that expected based on variability within groups.
Figure 3. Distribution of the five ecological groups (81 taxa; all except Yurlunggur, whose ecology is unknown) in the CVs morphospace (ordinary CVA). Orange: fossorial/semifossorial taxa; cyan: semiaquatic taxa; blue: fully aquatic taxa; green: arboreal taxa; red: generalist taxa. 90% confidence ellipses for each ecological group are also shown. Procrustes landmark configurations towards the positive and negative extremes of each axis are shown on the right-hand side, in lateral and dorsal (to the right or below the former) views (anterior is to the right in all projections).
The results of our ordinary PCA ( figure 4) show that the first three components (PCs) explain approximately 50% of the variance. In the plot of PC1 versus PC2, Yurlunggur falls closest to a semiaquatic snake (the homalopsid Cerberus), while in the plot of PC1 versus PC3, Yurlunggur is surrounded mostly by fossorial/semifossorial taxa, but the semiaquatic Eunectes and the generalist Python are also quite proximal.
In the PPCA (figure 5), the first three components explain approximately 57% of the variance. Both in the plot of PPC1 versus PPC2 and in that of PPC1 versus PPC3, Yurlunggur is surrounded mostly by semiaquatic and fossorial/semifossorial taxa. The plots of the PPCAs based on the three alternative tree topologies (Tree 1, Tree 2 and Tree 3) were very similar (result shown is from Tree 1).
No statistically significant correlation was found between size (measured as CS) and the PC1 of either the ordinary or the phylogenetic PCA (based on Tree 1; values of PC1 based on Trees 1, 2 and 3 were almost identical), regardless of the correlation method adopted. The classification tests using the typicality probability function and the scores of the first two PCs (table 1) indicate that Yurlunggur is closest to semiaquatic forms (highest probability of 67%, second highest being fossorial/semifossorial at 59.4%) when the scores are from the ordinary PCA, and is closest to semifossorial forms when the scores are from the phylogenetic PCA (based on Tree 1) (highest probability of 64.5%, second highest being semiaquatic at 42.8%); note that values do not add up to 100% because typicality probabilities are calculated for each group independently.
Owing to inconsistencies and lack of data in the ecological literature, it was often difficult to determine whether particular species were fully fossorial or semifossorial (as already discussed in [3], this would be possible only for a few of the best-documented cases), hence a single 'fossorial' category was used. However, we could readily separate fully aquatic and semiaquatic categories. We wanted to test whether merging fully aquatic and semiaquatic taxa into one category, thus better balancing out the number of taxa across all the ecological categories, would affect the classification of Yurlunggur. Our classification tests after doing so placed Yurlunggur in the fossorial category with the highest probability regardless of whether the scores were from the ordinary (59.1% probability) or the phylogenetic PCA (64.3% probability). However, a classification into the category 'fully aquatic + semiaquatic' was not far behind (53.7% for ordinary PCA and 31.9% for PPCA).
Discussion and conclusion
Prior to this study, it was becoming clear that interpreting causation for the variation in inner ear morphology in squamates is not a straightforward process. While ecology has a significant role in shaping inner ear morphology [1], phylogenetic constraint has also a strong influence [3]. Our results indicate that a third source of inner ear variation, ontogeny, may also be important. We observed ontogenetic trajectories of considerable length for some taxa (e.g. Varanus) relative to others (e.g. Boiga) (electronic supplementary material S2, figure S3). The length of some of these trajectories in the morphospace defined by PC1 and PC2, and also in that of PC1 and PC3, was greater than some of the distances separating different species. This implies that due care needs to be taken when applying morphometric methods to the inner ear of squamate reptiles in situations when the ontogenetic stage (i.e. juvenile versus adult) of a specimen is not clear. Luckily, the fossil specimen of Yurlunggur that we examined (QMF45111) is clearly an adult, based both on degree of ossification of its skull bones and overall size of the associated vertebrae, which fall in the upper range for the genus [9].
Table 1. Typicality probabilities of Yurlunggur based on the scores from the first two ordinary principal components (PCs), and based on the scores from the first two phylogenetic principal components (PPCs). Values are shown for when 'aquatic' and 'semiaquatic' are considered as separate categories (first two rows) and when they are merged into the same category (aquatic + semiaquatic) (bottom two rows).
The inner ear morphology of Yurlunggur resembles most closely that of fossorial/semifossorial taxa (e.g. Simoselaps, Anilius, Aspidites) as well as semiaquatic taxa (e.g. Cerberus, Eunectes). A semiaquatic ecology can be readily accepted for a large snake (estimated total length of approx. 5 m), but semifossorial habits may be harder to envision due to relatively large size. However, recent studies on the Australian python Aspidites (total length 2 m or more [51]) have shown that even fairly large snakes can actively burrow in search for prey [52]. Semifossorial habits in Yurlunggur are also supported by some cranial and postcranial features that are typically associated with fossorial behaviour in modern snakes. In particular, Yurlunggur has two anterolateral processes on the parietal that clasp the frontal and apparently reinforce the frontoparietal suture in a fashion very similar to what has evolved convergently in several fossorial and semifossorial snake lineages, for example Anilius, Cylindrophis, uropeltids, Micrurus and Simoselaps (figure 6). The overall skull morphology of Yurlunggur (figures 1 and 6) is, however, indicative more of an occasionally semifossorial lifestyle rather than of truly fossorial habits, because truly burrowing snakes (e.g. uropeltids, scolecophidians, Anilius and Cylindrophis) are typically characterized by small size, small eyes (Yurlunggur has relatively large orbits; figures 1 and 6), small gape, narrow head and a snout that is firmly connected to the rest of the skull [53]. However, some fossorial snakes lack these specializations and retain a relatively mobile (kinetic) skull (e.g. Aspidites ramsayi and Aspidelaps scutatus [52,53]).
Another feature that is indicative of semifossorial habits in Yurlunggur can be found in the postcranium, and specifically in the shape of the neural spines, which are typically very low in this genus, especially when compared with the neural spines of Wonambi. Low neural spines are again typical of fossorial or semifossorial habits in modern species [53] (figure 7). Due to the fact that the neural spines of the closely related Wonambi look quite different, a phylogenetic constraint can easily be ruled out. In particular, the neural spines on the mid-trunk vertebrae of Yurlunggur look intermediate in size between those of Anilius, a fossorial species, and those of the semifossorial Aspidites. Interestingly Anilius, besides being fossorial, also has semiaquatic habits, preying largely on freshwater eels [54]. This would be consistent with our findings suggesting a mixed ecology for Yurlunggur.
Therefore, features of the inner ear, skull and vertebrae suggest that Yurlunggur was likely adapted to a mixed semiaquatic and semifossorial lifestyle; its ecology may have been similar to that of modern red pipe snakes (Anilius scytale), which burrow but also hunt for prey in rivers [54]. Because of its skull structure and large size, burrowing behaviour in Yurlunggur was likely limited to occasional digging in loose or soft soil, like that of woma pythons Aspidites ramsayi [52].
The incomplete nature of the inner ear endocast of Wonambi, which is missing the whole upper portion, precluded inclusion in the quantitative geometric morphometric study. However, a general comparison based on gross morphology is still possible: most importantly, some of the main features of the inner ear, skull and postcranium of Wonambi can still be compared with homologous structures in Yurlunggur.
Compared with the inner ear of Yurlunggur, that of Wonambi (figure 1) has a relatively smaller saccular portion, a much shorter lateral semicircular canal, and much taller anterior and posterior semicircular canals based on the preserved portions. Given that Wonambi and Yurlunggur are closely related Australian madtsoiids, these differences likely reflect adaptation; a similar situation has been documented in closely related Anolis [2]. Interestingly, on the skull roof, the parietal of Wonambi lacks the anterolateral processes visible in Yurlunggur and typical of fossorial and semifossorial snakes (see above). Moreover, the neural spines of mid-trunk vertebrae of Wonambi are relatively much taller than those of Yurlunggur and semifossorial taxa; their relative height is similar to that observed in the semiaquatic anaconda Eunectes (figure 7), and is consistent with semiaquatic habits, although this can only remain speculative in the absence of a comprehensive survey of the relative heights and morphology of the neural spines of snakes (while low neural spines are generally accepted as an indicator of fossorial/semifossorial habits [53], a tall neural spine may be associated with multiple habitats, e.g. [55]). There is support for the inference that these two snakes had distinct environmental preferences if we compare the relevant palaeoclimatic information available for the Late Oligocene and Early Miocene at Riversleigh in northern Queensland, the locality of Yurlunggur [8], and that available for the Late Pleistocene in southern Australia, where W. naracoortensis has been found [7]. While Yurlunggur likely lived in a warm mesic forest habitat (e.g. [56]), W. naracoortensis occupied much cooler and drier regions of the Australian continent (e.g. [57]). A geologically rapid shift towards drier and cooler conditions in the mid-Miocene [57] may have been responsible for the disappearance of Yurlunggur and similar taxa at Riversleigh, especially if they were semiaquatic, while the ability of Wonambi to live in much cooler and drier habitats may explain its much longer and widespread persistence in the fossil record despite the increasing aridification of the Australian continent.
Finally, the diversity of inner ear, skull and postcranial morphology evident in Yurlunggur and Wonambi suggests considerable ecological diversity and plasticity across madtsoiids and other extinct basal snake lineages. Such disparity should not be surprising, given the known history of madtsoiids spans approximately 100 Myr, which is roughly equivalent to the inferred age of modern (crown) snakes. This means caution is warranted when using single fossil snakes to make broad extrapolations about early snake biology. | 2018-04-27T03:18:16.471Z | 2018-02-23T00:00:00.000 | {
"year": 2018,
"sha1": "5883560315a11d37364939f447022d56ac947706",
"oa_license": "CCBY",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.172012",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dfb45e9d4fe5511dc924c495d7c63ed2000bf574",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
225612787 | pes2o/s2orc | v3-fos-license | MSFA-Net: a Network for Single Image Deraining
Rain streaks degrade image quality. Many methods for single image rain streak removal have been proposed recently, but some of them over-smooth the recovered image. A deep network architecture called the Multi-Scale Feature Attention Network (MSFA-Net) is proposed in this paper. We propose a novel basic block structure to exploit image features, which consists of a multi-scale residual learning block and a feature attention block. Several basic block structures with local residual learning compose a group architecture. The outputs of the group architectures are concatenated for final multi-scale feature fusion; the fused features are then fed into a feature attention block and a reconstruction module, and finally a global residual learning module restores the clean image. The feature attention block combines channel attention with spatial attention. MSFA-Net removes rain streaks by learning a non-linear mapping between rainy and clean images from a synthesized dataset. Compared with other state-of-the-art algorithms, our algorithm performs better on both synthesized and real rainy images.
Introduction
Images and videos collected by outdoor vision systems often contain rain streaks; their degraded quality leads to poor performance on downstream computer vision tasks such as image classification and person re-identification [1][2][3]. Rain streak removal is therefore a necessary research task. Various algorithms have been proposed to remove rain streaks from both video and single images. In general, they fall into two categories: (1) conventional methods and (2) methods based on deep learning.
Video rain streak removal can take advantage of temporal redundancy. By contrast, single image rain streak removal is more complicated, so we concentrate on rain streak removal for single images. The widely used rain model [4][5][6] for a single image is described as:
O = B + S (1)
where O is the rainy image (the input of the method), B is the clean background image (the output of the method), and S is the rain streak layer. The goals of rain streak removal are to recover the background B from the rainy image O and to enhance image visibility. In recent years, most methods have achieved a certain degree of success, but each has its own suitability and advantages in specific situations. Motivated by the work in [7], which uses multiple local residual connections, we propose a Multi-Scale Feature Attention Network to remove rain streaks from single images (Section 3). We adopt the overall framework of [7], but design a more effective basic block structure. [7] attempts to increase the number of its basic block structures to improve the capacity of the network. But it results in [8]. The multi-scale residual learning block takes maximum advantage of the rainy image features at different scales, so it can be regarded as extracting local multi-scale features (Section 3.1). Afterwards, the outputs of the three group architectures are concatenated for final multi-scale feature fusion. In summary, our contributions are as follows. We present a Multi-Scale Feature Attention Network (MSFA-Net) to remove rain streaks from single images; MSFA-Net achieves better performance on synthetic datasets than other state-of-the-art algorithms, and experimental results on real-world images also demonstrate its superiority.
We design a new basic block structure that combines a multi-scale residual learning block and feature attention. The multi-scale residual learning block obtains features at different scales and allows the information of thin-rain or no-rain regions to be bypassed through plenty of skip connections.
We propose a feature attention block that merges channel attention and spatial attention into the residual learning. The channel attention gathers both average-pooled and maximum-pooled features. This module focuses on the thick rain regions.
Related work
Removing rain streaks from a single image is an ill-posed and challenging problem. Conventional methods mainly employ a model-driven methodology that utilizes prior knowledge or physical properties. [2] proposed a method that decomposes the rainy image into low-frequency and high-frequency layers and separates the rain streaks from the background within the high-frequency layer. However, in this process, some details of the background are lost. Li et al. [5] exploited a GMM-based patch prior to adapt to variable orientations and scales of rain streaks. They achieved good performance, but also slightly smoothed the background. Gu et al. [9] designed a joint convolutional analysis and synthesis (JCAS) sparse representation model that integrates analysis sparse representation (ASR) and synthesis sparse representation (SSR). However, these methods generally need many iterations of computation, which makes them inefficient. In [10], an unrolling strategy was proposed in which the conventional numerical iterations incorporate data-dependent information; [11] indicated that this is a good effort towards integrating model-driven and data-driven methodologies.
Deep learning methods mainly adopt a data-driven manner and design specific networks. [12] proposed a density-aware image deraining approach (DID-MDN). By utilizing a residual-aware classifier, they can adaptively determine the density of rain. However, in practice the density of rain in real images cannot be judged using only these three levels. [13] utilized a recurrent squeeze-and-excitation (SE) block to recover the background. In [14][15], the networks take the image detail layer as input, which gives them an advantage in keeping texture details, but they do not deal with removing rain streaks in heavy rain cases. Yang et al. [16] proposed a multi-task deep learning architecture and further designed an enhanced version, JORDER-E, to obtain better performance [17]. In contrast to these previous deep learning algorithms, which treat channel-wise features equally, we concentrate on treating the channel-wise and spatial-wise features unequally.
Method
The input of MSFA-Net is a rainy image, as shown in Figure 1. The whole network includes multiple skip connections, both local and global residual connections, which allow less important information to be bypassed, in the same way as [7]. The input image is fed into a 3×3 convolution layer and then sent to three Group Architectures. The output features of the three Group Architectures are concatenated and fed into a feature attention block. The final features are fed into the restoration part, and global residual learning is applied to obtain the rain-free image. Each Group Architecture includes 13 Basic Block Structures. Figure 2 shows the Basic Block Structure, which combines the multi-scale residual learning block and the feature attention block.
Multi-scale residual learning block
Rain degradation is complex. [17] pointed out that numerous methods operate within a restricted receptive field. Following [8], we therefore adopt different convolutional kernel sizes to attain different receptive fields and obtain features at different scales. The multi-scale residual learning block includes local multi-scale feature fusion and local residual learning, as shown in Figure 2. Like [8], it is a local two-bypass network: the input feature maps are fed into parallel convolutional layers with kernel sizes of 3×3 and 5×5 respectively, passed through the ReLU function to improve the representational power of the network, and finally all these feature maps are concatenated and fused by a 1×1 convolutional layer, with a local residual connection adding the block input back to the fused features. [18] indicated that edge and texture areas carry more high-frequency information. However, most image deraining networks ignore channel and spatial attention, tend to remove texture details to some extent, and thus produce an over-smoothing effect in the recovered background. Hence, our network should concentrate on the more important regions to restore the high-frequency details and the thick rain regions. The feature attention block therefore learns what and where to emphasize or suppress by combining channel attention and spatial attention, as shown in Figure 3. We apply both average-pooled and maximum-pooled features in the channel attention to increase the effectiveness of the network; they modulate the feature representations more adaptively. Before entering the feature attention block, we feed M_out into a convolutional layer whose kernel size is 3×3.
Feature attention
FA_in denotes the input of the feature attention block, i.e., the output of the 3×3 convolution applied to M_out. The channel attention focuses on inter-channel features. In many works, only average pooling has been used so far; however, maximum-pooled features aggregate another important clue and help acquire finer channel-wise attention.
[19] argued that max-pooled features can compensate for the encoding missed by average-pooled features, so we use average pooling and maximum pooling as feature descriptors simultaneously. The pooled features then pass through two 1×1 convolution layers, a ReLU, and a sigmoid activation function.
The resulting channel attention weights CA_out* are applied to the input by element-wise multiplication:
CA_out = FA_in ⊗ CA_out* (12)
[10] and [17] indicated that spatial contextual information is helpful for single image rain removal. Considering the discussion above, we adopt spatial attention after the channel attention. The features pass through two 1×1 convolutions with a ReLU and a sigmoid activation function, producing the spatial attention weights SA_out*. We then element-wise multiply the input CA_out by SA_out*; SA_out is the result of the spatial attention and is also the output of the feature attention block:
FA_out = SA_out = CA_out ⊗ SA_out* (14)
We visualize the spatial and channel attention weight maps of the three Group Architectures to demonstrate their effectiveness. As shown in Figure 4, the features of rain streaks are clearly given less weight. For the channel attention, we show a 3×64 map in which every row represents the corresponding Group Architecture output; different features are assigned different weights, as shown in Figure 5. The subsequent experimental results demonstrate the effectiveness of our method.
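A minimal PyTorch sketch of the basic block structure described above is given below: a two-branch 3×3/5×5 multi-scale residual block fused by a 1×1 convolution with a local residual connection, followed by a 3×3 convolution and then channel attention (average- plus max-pooled descriptors) and spatial attention. The channel count, the reduction ratio, and the way the two pooled descriptors are combined are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleResidual(nn.Module):
    """Parallel 3x3 and 5x5 convolutions, concatenation, 1x1 fusion, local residual."""
    def __init__(self, channels):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        b3 = self.relu(self.branch3(x))
        b5 = self.relu(self.branch5(x))
        return self.fuse(torch.cat([b3, b5], dim=1)) + x   # local residual learning

class ChannelAttention(nn.Module):
    """Average- and max-pooled descriptors through a shared 1x1-conv MLP; sigmoid weights."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        weights = torch.sigmoid(avg + mx)   # CA_out*
        return x * weights                  # CA_out, eq. (12)

class SpatialAttention(nn.Module):
    """Two 1x1 convolutions producing a one-channel spatial weight map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.body(x)              # FA_out = SA_out, eq. (14)

class BasicBlock(nn.Module):
    """Multi-scale residual learning, 3x3 convolution, then channel + spatial attention."""
    def __init__(self, channels=64):
        super().__init__()
        self.msr = MultiScaleResidual(channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention(channels)

    def forward(self, x):
        m_out = self.msr(x)
        fa_in = self.conv(m_out)             # 3x3 convolution before the feature attention
        return self.sa(self.ca(fa_in))

block = BasicBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```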
Implementation details
We adopt the L1 loss function:
L(θ) = || MSFA-Net(I_rain; θ) − I_gt ||_1 (15)
where θ denotes the parameters of the network, I_gt signifies the ground truth and I_rain signifies the corresponding rainy image. We use the Adam optimizer with an initial learning rate of 0.0001; the learning rate decays from 0.0001 to 0 according to the cosine annealing strategy [7], [22]. The models for Rain100L and Rain100H are trained for 3×10^5 and 2×10^5 iterations respectively. Every Group Architecture has 64 filters. We implement our model in PyTorch using two NVIDIA TITAN Xp GPUs. Table 1 displays the PSNR/SSIM results of different algorithms on Rain100L and Rain100H. Note that FFA-Net [7] is trained from scratch on these datasets under the same initial experimental conditions as ours; results for the other approaches are available online. As observed, our approach is evidently superior to the other state-of-the-art methods. Such good performance shows that combining the multi-scale residual learning block and the feature attention boosts performance on these synthesized rainy datasets. Figures 6 and 7 show some results on synthesized images. Five competing methods are considered, including LP [5], JCAS [9], DDN [14], UGSM [20] and PReNet [3]. Enlarging the images demonstrates that our approach is superior. For the images from Rain100H, the rain streaks severely affect the image, and LP [5], JCAS [9], DDN [14] and UGSM [20] cannot remove most of the rain streaks. PReNet [3] and our method are superior to them for rain streak removal; however, our method behaves better than PReNet [3] in preserving details, such as the recovery of the zebra markings in the second line of Figure 6. For the images from Rain100L, whose rain streaks are light, LP [5], JCAS [9], DDN [14] and UGSM [20] remove most of the rain streaks but over-smooth the background, resulting in a lack of details. Although PReNet [3] removes all the rain streaks, it sometimes smooths the background, as in the recovery of the buildings in the second line of Figure 7. In summary, MSFA-Net (ours) is better at removing rain streaks. Figure 8 shows some results on real-world rainy images. All these images were taken in real rainy situations; since ground truth does not exist for these images, we compare the methods with each other. As observed, our method also behaves better than the other approaches when zooming into the recovered images.
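To make the training recipe above concrete, a hedged sketch reusing the BasicBlock class from the previous code snippet is given below; the tiny stand-in network, toy data, and iteration budget are placeholders, not the actual MSFA-Net configuration.

```python
import torch
import torch.nn as nn

class TinyDerainNet(nn.Module):
    """Tiny stand-in for MSFA-Net: 3x3 head, one BasicBlock (from the sketch above),
    3x3 reconstruction, and a global residual connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.block = BasicBlock(channels)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, rainy):
        return rainy + self.tail(self.block(self.head(rainy)))  # global residual learning

model = TinyDerainNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000, eta_min=0.0)
l1 = nn.L1Loss()

toy_batches = [(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))] * 5  # placeholder data
for rainy, clean in toy_batches:
    optimizer.zero_grad()
    loss = l1(model(rainy), clean)     # eq. (15): L1 loss against the ground truth
    loss.backward()
    optimizer.step()
    scheduler.step()                   # cosine annealing from 1e-4 towards 0
print(float(loss))
```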
Conclusion
We present a Multi-Scale Feature Attention Network to remove rain streaks. We combine the multi-scale residual learning block and the feature attention in a novel basic block structure. The multi-scale residual learning block fuses local multi-scale features, which greatly improves network performance, and the feature attention gives different weights to different features. The outputs of the three group architectures are concatenated for final multi-scale feature fusion. Evaluations on synthesized and real-world images show that our algorithm outperforms state-of-the-art algorithms. Like most data-driven methods, to achieve better results on real images we need many real-world rainy training images, which are time-consuming and difficult to collect. We will therefore consider self-supervised methods to improve the network in future work; feeding real-world rainy images without paired clean images into the network will enhance its ability in real-world scenarios.
"year": 2020,
"sha1": "38f8c665e7cf71572bedfb888185fb2c0690a6ee",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1584/1/012047",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9b09d1eaf2249d2889e39515f18cd861455ab3ac",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Computer Science"
]
} |
256763672 | pes2o/s2orc | v3-fos-license | Pneumocystis pneumonia in COVID-19 patients: A comprehensive review
Patients with coronavirus disease 2019 (COVID-19) admitted to intensive care units face the challenge of subsequent infections. Opportunistic fungal infections such as Pneumocystis pneumonia (PCP) are among the important factors affecting illness severity and mortality in COVID-19 patients. We reviewed the literature on COVID-19 patients with PCP to identify features of this infection. Although studies confirmed the presence of at least one immunosuppressive condition in half of PCP patients, this disease can also occur in immunocompetent patients who develop an immunosuppressive condition during COVID-19 treatment. Low lymphocyte counts and corticosteroid therapy can be considered the major risk factors in COVID-19 patients with PCP. Diagnosis and treatment are complicated by the overlapping clinical and radiologic characteristics of PCP and COVID-19 pneumonia. Therefore, physicians should comprehensively evaluate high-risk patients for PCP prophylaxis.
Introduction
Cases of coronavirus disease (COVID-19) have spread rapidly worldwide since 2019, causing a public health emergency [1,2]. One of the most common complications in patients with severe COVID-19 pneumonia is acute respiratory distress syndrome (ARDS), requiring intensive care unit (ICU) hospitalization, intubation, and mechanical ventilation (MV) [3,4]. To counter inflammatory airway narrowing and the consequent cytokine release syndrome (CRS), systemic steroids and immunomodulators such as tocilizumab, a humanized anti-IL-6 receptor antibody, are prescribed for these patients [5]. Antiviral immune activation in the lung tissue of COVID-19 patients can create an ideal environment for secondary infections and/or coinfections caused by other respiratory viruses such as influenza, by bacteria, and by fungal (yeast and filamentous fungus) pathogens [6][7][8][9]. COVID-19-associated secondary fungal infections have been shown to significantly impact illness severity and mortality rate [10][11][12]. Various studies have reported patients with COVID-19 infection developing opportunistic fungal diseases such as candidiasis, pulmonary aspergillosis, and mucormycosis [13][14][15][16]. Pneumocystis pneumonia (PCP) is an opportunistic infection caused by Pneumocystis jirovecii. Depending on host immune status, its clinical pattern can range from colonization to life-threatening pneumonia [17]. In PCP patients, the challenges posed by typically nonspecific clinical and radiological signs and by diagnostic difficulties are compounded by the possibility of colonization, frequent co-infection with other respiratory pathogens, and limited access to sensitive and accurate diagnostic tools. Considering the poor specificity of clinical PCP definitions in COVID-19 patients, there is a need to establish more robust prevalence estimates, focusing on laboratory-confirmed P. jirovecii in respiratory samples from COVID-19 patients [13][14][15][16]. To address these gaps, we conducted a comprehensive review to determine the prevalence, diagnostic methods, and treatment of cases of COVID-19-associated PCP (CAPCP).
Method
We performed the literature review to better understand the commonalities between related previous investigations using PubMed/MEDLINE, Scopus, and Web of Science databases for published articles from the beginning of 2020 to December 2021. "Pneumocystis", "Pneumocystosis", "PJP", or "PCP" were used as mesh keywords, along with "SARS-COV-2 ′′ , or "COVID-19". Moreover, the relevant references were manually searched. Molecular-confirmed cases of COVID-19 were included in this review, and articles without the details of CAPCP cases were excluded. From selected studies, the demographic data of the country, age, gender, and the clinical data of underlying disease, CD4 cell count, use of systemic steroids, use of MV, ARDS, ICU admission, anti-COVID-19, anti-PCP treatment, method of PCP diagnosis, and disease outcome were extracted. According to the European Organisation for Research and Treatment of Cancer and the Mycoses Study Group (EORTC/MSGERC) definitions of invasive fungal disease in patients, PCP is categorized as probable, and proven PCP [17,18]. Proven PCP was diagnosed according to radiologic and clinical features plus microscopic observations of P. jirovecii on tissue or respiratory samples using conventional or immunofluorescence staining. The diagnosis of probable PCP was based on clinical and radiologic features and relevant host factors, plus the detection of P. jirovecii DNA amplified by real-time PCR on respiratory samples and/or 1,3-beta-D-glucan (BDG) in the serum sample.
Search results and demographic data
In the initial search, a total of 394 articles were identified across the three databases: PubMed (n = 103), Scopus (n = 193), and Web of Science (n = 98). Among them, duplicate articles (n = 126) were removed. After evaluating the titles and abstracts, 210 articles that did not meet the inclusion criteria were excluded. Finally, 28 studies meeting the inclusion criteria were reviewed in depth, and 23 articles were eligible. Fig. 1 shows the PRISMA flow diagram of the search and study selection strategy.
Predisposing factors
Ten cases out of 30 (33.3%) had HIV infection. Three cases required extracorporeal membrane oxygenation (ECMO), and one of the major complications (16.6%, n = 5) was ARDS, indicating that these patients can be considered severe COVID-19 cases. In addition, corticosteroids were used in 83.3% (25/30) of the cases to treat pneumonia or the underlying illness.
Clinical and paraclinical findings
Ground-glass opacity (GGO) was the commonest radiologic finding (90%, 27/30) and is reported identically in PCP and COVID-19 pneumonia. Further, the first and fourth CAPCP cases had cystic lesions, which are among the distinctive PCP radiological findings. The most common samples used were bronchoalveolar lavage (BAL) (43.3%, 13/30) and sputum (20%, 6/30). In most CAPCP cases, molecular approaches (50%, 15/30) and microscopic observations (30%, 9/30) were used to diagnose P. jirovecii. The mean serum BDG level was 377 pg/mL, measured in 16.6% of cases (5/30). Overall, serum lactate dehydrogenase (LDH) was evaluated in 53.3% of cases (16/30), with a mean of 498.75 IU/L. The mean LDH value in cases with HIV (724 IU/L, 5/30) was almost double that of cases without HIV (11/30). In one of the cases, the diagnosis was made postmortem. Fig. 2 shows a flow diagram for diagnosing Pneumocystis infection in COVID-19 patients with clinically suspected PCP. According to the EORTC/MSGERC definitions of invasive fungal disease in patients without HIV, 30% of patients (9/30) met the proven PCP criteria.
Demographic data
In severe diseases, immune disorders can increase the risk of secondary infections, with a significant impact on patients' quality of life and survival [12]. Previous reports have investigated the risk of secondary fungal infections such as invasive pulmonary aspergillosis (IPA), invasive mucormycosis, or invasive candidiasis in patients with COVID-19 infection [10,42,43]. Although the most common fungal respiratory pathogen among patients with COVID-19 pneumonia is Aspergillus spp. [44], reports on P. jirovecii, the causative agent of PCP, have recently been emerging. Among the 30 CAPCP cases analyzed in the current study, males (83.3%) were dominant. Similarly, the reviews of Ahmadikia et al. and Lai et al. indicated that males accounted for 85.7% and 82.4% of COVID-19 patients with mucormycosis and pulmonary aspergillosis, respectively [43,44]. The patients' mean age was <60 years (exactly 53.65 years) in this review, in agreement with Ahmadikia et al.'s review of COVID-19-associated mucormycosis [43]. Meanwhile, previous studies on COVID-19-associated pulmonary aspergillosis reported mean ages of 62.5 years, 63 years, and 66.5 years [10,45,46]. As a result, it is possible to speculate that secondary fungal infections, particularly PCP, in COVID-19 patients are unaffected by age, whereas gender can be an effective parameter.
Predisposing factors
HIV disease, organ transplantation, diabetes mellitus, and hematologic malignancies can be underlying causes of invasive fungal infection (IFI) [47]. Most cases (90%) evaluated in this study had at least one comorbidity. The most common underlying condition was HIV infection, which accounted for 33.3% (10/30). In HIV patients, PCP begins gradually and insidiously, with few clinical or radiological presentations. In contrast, in immunocompromised patients without HIV, clinical symptoms tend to have an acute and rapid onset, leading to respiratory failure and high rates of ICU admission [15,[48][49][50][51]. This disparity can be attributed to the severity of pneumonia and the degree of lung inflammation. Furthermore, HIV patients have a higher burden of P. jirovecii and lower neutrophil counts than non-HIV patients [52]. According to the EORTC/MSGERC, solid organ transplantation, glucocorticoid or T-cell suppressive medication use, and a CD4+ count of <200 cells/mm3 are risk factors for developing PCP in individuals without HIV [18]. Although the link between PCP and non-immunocompromised ICU patients has received less attention, patients with influenza comprised 7% of reported coinfections [53]. Also, lymphopenia (a decrease in the absolute CD4+ count and CD4/CD8 ratio) is a characteristic shared by COVID-19 and HIV patients that can be attributed to infection severity [54,55]. Steroid therapy in moderate to severe viral pneumonia can be a double-edged sword: patients may be saved from viral pneumonia, but steroids can cause secondary fungal and bacterial infections [56]. According to Verweij et al.'s systematic review and meta-analysis, patients who take systemic steroids have a greater death risk than those who take a placebo [57]. In addition, systemic steroids were given in 25 instances (out of 30) to treat pneumonia or underlying illness. Immunomodulators have a positive impact on COVID-19 treatment; however, clinicians should keep in mind the associated risk of PCP. For example, tocilizumab, one of the treatments utilized for COVID-19 and originally used to treat inflammatory illnesses including rheumatoid arthritis, has been linked to PCP. Tocilizumab was given to three out of 30 (10%) cases in our study. MV is another predisposing factor in patients with IFIs and severe viral pneumonia, such as COVID-19 [56,58]. As the highest level of respiratory support, 46.6% (14/30) of COVID-19 cases with PCP required invasive mechanical ventilation (IMV), and 30% (9/30) required non-invasive MV (NIMV) or a high-flow nasal cannula (HFNC), corroborating the study by Chong et al. [59]. Severe viral pneumonia has a poor prognosis, linked to ICU admission and subsequent fungal infection, resulting in a high fatality rate [58]. In our reviewed cases, sixteen patients were admitted to the ICU, with a fatality rate of 50% (8/16). In patients with pulmonary aspergillosis related to COVID-19, Arkel et al. and Koehler et al. found that ICU mortality was 67% (4/6) and 60% (3/5), respectively [45,60]. Only 16.6% (5/30) had ARDS, indicating a low PCP risk among COVID-19 patients with ARDS. Our results agree with previous studies [61][62][63].
Clinical and paraclinical features
Overlapping clinical features between PCP and COVID-19 pneumonia make it difficult to distinguish between the two pneumonias [64]. This resemblance can be explained by similarities in the pathogenic processes of pneumonia produced by P. jirovecii and SARS-CoV-2, as well as the interaction of both agents with pulmonary surfactant [65]. Further, GGO is a common radiologic finding in both PCP and COVID-19 pneumonia, making differentiation based on radiological findings difficult [66][67][68][69]. One-third of patients with advanced PCP can form cystic lesions [66]; 2/30 (6.6%) of our reviewed cases had these lesions, which helps with the differential diagnosis [19]. PCP in COVID-19 patients can thus make diagnosis challenging. Although the diagnosis of COVID-19 on nasopharyngeal swabs is rapid and widely available, the diagnosis of PCP is less straightforward [70,71]. BAL fluid is considered the proper sample for the diagnosis of PCP because of its greater sensitivity [72]. However, due to the danger of SARS-CoV-2 aerosolization, obtaining a BAL specimen via bronchoscopy, an invasive and hazardous procedure, may not be suitable for patients with severe hypoxia [73]. For a definitive diagnosis of PCP, microscopic observation of P. jirovecii in respiratory samples using conventional stains (silver stains, toluidine blue) and highly sensitive immunofluorescent staining is considered the gold standard test [74,75]. When laboratories lack either nucleic acid amplification tests (NAAT) or immunofluorescent staining, conventional stains can be employed to observe cystic/trophic forms in some specimens, such as histology and cytology samples [17]. The high cost of the fluorescence microscope is one of the important limitations of IFAs. Significantly, microscopic observation of P. jirovecii in different respiratory specimens is considered the criterion for proven PCP, while negative microscopic results, because of low sensitivity, do not exclude infection [17]. NAAT-based methods, although more sensitive than microscopic methods, do not easily permit distinguishing between infection and colonization by P. jirovecii. Thus, the interpretation of PCR results requires quantifying the fungal load [17]. For PCP detection, qualitative PCR tests, such as conventional and nested PCR, are not recommended [17]. Because of its quantitative data and speed, real-time PCR is preferred. Positive qPCR results are one of the microbiological criteria for diagnosing PCP; however, negative results do not rule it out. Because BAL sampling is invasive in severe COVID-19 patients, relevant clinical factors and radiological features, together with serum levels of BDG, can be helpful for beginning empirical treatment against PCP [76]. Because the COVID-19 virus lacks BDG polysaccharides, the serum BDG level of COVID-19 patients is low (<80 pg/mL) [77,78]. The sensitivity and specificity of serum BDG in patients with PCP were reported as 94.8% and 86.3%, respectively, in patients with relevant risk factors and clinical signs [79]. P. jirovecii colonization in COVID-19 patients is prevalent, creating further diagnostic challenges [26]. Depending on the PCR-based methods used and the different immunosuppression levels of the patients studied, the reported rate of P. jirovecii colonization varies across studies [80,81]. Consequently, some studies have proposed using quantitative polymerase chain reaction (qPCR) and serum BDG levels to differentiate PCP from colonization [82,83].
Notably, a qPCR cut-off value on BAL samples (>1.6 × 10^3 DNA copies/μl) and a serum BDG threshold of 100 pg/mL can distinguish PCP from colonization with a sensitivity of almost 100%, according to HIV status [82]. Therefore, the combination of a highly sensitive PCP qPCR on BAL and serum BDG levels can obviate the need for the immunofluorescent assay (IFA). Even positive BDG results in serum and positive PCR results on BAL or sputum samples with negative microscopic examinations can increase the clinical suspicion of PCP in symptomatic patients without HIV [84]. Additionally, three previous studies described P. jirovecii colonization in COVID-19 patients [62,63,85]. The improvement of respiratory status without anti-pneumocystosis-specific treatment and the lack of relevant predisposing factors were the reasons given for classifying these cases as colonization rather than infection with P. jirovecii. An increased LDH level in patients with COVID-19 or PCP, as a sensitive but not specific biomarker, can help distinguish the two infections [86,87]. The LDH level of non-survivors is higher than that of survivors for both PCP (mean 447 IU/L vs. 340 IU/L; P < 0.05) and COVID-19 (mean 521 IU/L vs. 253 IU/L; P < 0.01) [86,88]. Another study showed that an LDH cut-off (>450 IU/L), with a sensitivity of almost 100%, could diagnose PCP in patients with relevant clinical signs in the non-COVID-19 context [86]. In the reviewed cases, although serum LDH was measured in 16/30, only five out of 30 (16.6%) had serum BDG evaluated. Additionally, the use of corticosteroids for severe COVID-19 may further delay the diagnosis of co-occurring PCP because of temporary improvement of severe PCP. One of the practical interventions for patients with PCP is antifungal therapy; however, the majority of available antifungal drugs are unable to treat pneumocystosis. The folic acid pathway is therefore a good treatment target in Pneumocystis organisms: its disruption inhibits folate synthesis and, as a consequence, the synthesis of amino acids and DNA nucleotides [89].
Treatment
The combination of TMP and SMX disrupts the folic acid pathway of P. jirovecii and has shown good results in PCP patients. Twenty-seven CAPCP cases were given this combination, and 63% survived. Finally, the clinical and radiological similarities between PCP and COVID-19 pneumonia can cause delays in diagnosis and therapy. Because of the high prevalence of advanced HIV disease among COVID-19 patients with PCP, HIV testing should be routine in all COVID-19 patients who also have PCP. In COVID-19 patients with HIV/AIDS, paying attention to PCP is critical, since early identification and treatment can be beneficial.
Limitations of the study
We have reported on Pneumocystis infection among COVID-19 patients in our study. One main limitation of this study is the small number of case reports or case series with laboratory-confirmed P. jirovecii.
Furthermore, distinguishing between PCP and COVID-19 pneumonia is difficult owing to their overlapping clinical and radiological findings, especially in non-HIV patients.
Author contribution statement
All authors listed have significantly contributed to the development and the writing of this article.
Data availability statement
Data included in article/supp. material/referenced in article.
Declaration of interest's statement
The authors declare no competing interests. | 2023-02-12T05:22:48.564Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "2d37233a3b0d404807618352132704f72a05bff9",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9126177a7e51811936c0bd2f6ecf5d402b771bd5",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248816388 | pes2o/s2orc | v3-fos-license | Machine Learning in Causal Inference: Application in Pharmacovigilance
Monitoring adverse drug events or pharmacovigilance has been promoted by the World Health Organization to assure the safety of medicines through a timely and reliable information exchange regarding drug safety issues. We aim to discuss the application of machine learning methods as well as causal inference paradigms in pharmacovigilance. We first reviewed data sources for pharmacovigilance. Then, we examined traditional causal inference paradigms, their applications in pharmacovigilance, and how machine learning methods and causal inference paradigms were integrated to enhance the performance of traditional causal inference paradigms. Finally, we summarized issues with currently mainstream correlation-based machine learning models and how the machine learning community has tried to address these issues by incorporating causal inference paradigms. Our literature search revealed that most existing data sources and tasks for pharmacovigilance were not designed for causal inference. Additionally, pharmacovigilance was lagging in adopting machine learning-causal inference integrated models. We highlight several currently trending directions or gaps to integrate causal inference with machine learning in pharmacovigilance research. Finally, our literature search revealed that the adoption of causal paradigms can mitigate known issues with machine learning models. We foresee that the pharmacovigilance domain can benefit from the progress in the machine learning field.
Introduction
The World Health Organization has been promoting pharmacovigilance programs to assure the safety of medicines through a timely and reliable information exchange regarding drug safety issues, for example, adverse drug events (ADEs) [1]. An ADE is an unintended response caused by a medicine and is harmful [2]. For in-patient stays, 16.9% of the patients experienced ADEs with 6.7% categorized as serious and 0.3% as fatal [2,3]. While medication errors (e.g., wrong/missing doses, wrong administration techniques, equipment failure) and prescription of multiple medications were considered important risk factors of ADEs [4,5], there are still many incidences of ADEs due to undetected signals during clinical trials [3]. This may be due to limited sample sizes and stringent patient eligibility criteria in pre-approval studies [3]. Therefore, pharmacovigilance is important to the safe use of medicines. In this review, we focus on the tasks of ADE detection and monitoring (including pre-clinical prediction) in the pharmacovigilance program lifecycle because those tasks were mostly likely to be achieved with machine learning and causal inference. While we have narrowed down our scope to focus on the tasks of ADE detection and monitoring in the pharmacovigilance program lifecycle, the methodologies and examples of causal inference discussed in this paper could apply to each phase of the pharmacovigilance program.
Currently, major data sources for pharmacovigilance include spontaneous reporting systems (SRS), real-world data (RWD) such as electronic health records (EHRs), social media, biomedical literature, and knowledge bases [3]. Each data source has unique advantages and biases, which we discuss in the following sections. While data mining was applied to those data sources to enhance the efficiency of pharmacovigilance, the level of evidence from identified signals depended heavily on the chosen data source as well as the study design. Overall, we identified the following three main tasks in the field of pharmacovigilance.
1. Drug-event pair extraction. For this task, we usually use either structured data from EHRs [6,7] or the natural language processing (NLP)-based machine learning/ deep learning (ML/DL) method to extract drug-event co-occurrence pairs from the unstructured texts [8][9][10].
Note that those pairs only indicate a potential associative "relationship" between the drug and the event and cannot be considered a "confirmed" ADE yet. The symptoms experienced might be caused by a variety of medical conditions other than the ADE. Thus, we still need further proof using other statistical analyses or data sources.
2. Adverse drug event detection. For traditional pharmacovigilance, the most important task is to detect ADEs for these post-marketing drugs in time. The ADE detection task aims to identify and confirm ADEs from "real-world" medication usage information as early as possible. We consider ADE detection as a task providing a higher level associative relationship compared with disproportionality or NLP-based drug-event co-occurrence pair extraction. However, ADE detection is only associative without further confirmation if using SRS owing to the limitation of the data source (no control group can be matched, and no causality evaluation can be performed). Adverse drug event detection using an RWD database, however, can be evaluated for causality if a proper study design was adopted.
3. Adverse drug event prediction. Adverse drug event prediction, or ADE discovery, could be conducted only if the event data have accumulated to a certain amount. Thus, there was a time difference from drug launch to ADE prediction. Adverse drug event prediction focuses on discovering potential ADEs before being observed. The predictive power (forecast future events from data generated previously) of many ML/DL models made ADE prediction possible. Using literature and knowledge bases, researchers can predict ADEs at the pre-marketing stage. After launching and as more data accumulate, researchers can use RWD and social media data for post-marketing pharmacovigilance. While ADE prediction may not only depend on causal relationships, establishing causal relationships can facilitate feature selection and greatly improve model performance and generalizability.
Machine learning or a causal inference paradigm separately has been adopted for many pharmacovigilance studies [11][12][13][14][15]. The integration of machine learning into a causal inference paradigm was also studied, although mostly theoretically [16][17][18][19][20]. However, the relationship between machine learning and a causal inference paradigm in the context of pharmacovigilance has not been extensively examined. The goal of causal inference is to explain what factors lead (are influential) to the outcome. The emphasis is on investigating and explaining the role of individual factors in the outcome. On the contrary, most machine learning tasks emphasize the outcome and aim to predict whether an outcome will occur in the future. Weights in machine learning models are not equivalent to effect sizes in causal inference [21]. Pharmacovigilance involves a series of tasks: (1) predicting the outcome using drug exposure and a set of covariates and (2) understanding the causal effects between drug exposure and the outcome. The complicated nature of pharmacovigilance requires researchers to choose methods and study designs wisely in order to answer the proposed question (prediction or explanation). However, ideally, machine learning and causal inference could be combined to enhance both the predictive and explanatory power of a single study. Therefore, this article aims to discuss the application of machine learning and a causal inference paradigm in pharmacovigilance. Pharmacovigilance tasks, machine learning, and causal inference paradigms have intertwined relationships (Fig. 1). In the following sections, we discuss (1) data sources for pharmacovigilance, common methods (traditional or machine learning) used to analyze data from each data source, and the advantages and biases of each data source; the search query for this section was as follows: data source name (e.g., spontaneous reporting system, SRS, EHRs, data registry) + "machine learning" + "adverse event/adverse effect/side effect". (2) Integration of machine learning into traditional causal inference paradigms (with examples of studies in the pharmaceutical industry); the search query for this section was: as follows: causal inference paradigm name (e.g., naranjo score, propensity score matching, instrumental variable) + "adverse event/adverse effect/side effect" + "machine learning/artificial intelligence" (optional). (3) Issues with machine learning and how a causal paradigm can address those issues; search query for this section was: "machine learning/artificial intelligence" + "generalizability/ generalizable/explainability/explainable/fairness/bias" + "adverse event/adverse effect/side effect" (optional). Because of the length limit of the paper, we were not able to include all papers identified from the above queries. However, we selected the most recent papers representative of the data source/methods/combination of methods to reveal current trends of machine learning in causal inference with an application in pharmacovigilance.
Spontaneous Reporting System
The most traditional dataset for ADE detection is the SRS database, such as the FDA Adverse Event Reporting System (FAERS) [22] and WHO's VigiBase [23]. Traditionally, statistically based methods such as disproportionality measures and multivariate analyses were used to analyze SRS data [24]. Recently, machine learning methods such as association rule mining [25,26], clustering [11], graph mining [12], and neural networks [27] have also been applied to SRS data. However, those methods are only able to detect 'signals of suspected causality' [27,28]. Moreover, several studies have revealed limitations of the SRS, including reporting bias (e.g., underreporting, stimulated reporting), the lack of a population denominator, poor documentation quality [28,29], and lower reporting rates for older products [30][31][32]. Important details required for a causality assessment, for example comorbidities and concomitant medications, may not be captured by the SRS. This can lead to background 'noise' or may generate false-positive signals [33]. Therefore, the causality of the detected signals still needs further validation from other data sources [34].
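To make the disproportionality idea concrete, the sketch below computes two common SRS signal measures, the reporting odds ratio (ROR) and the proportional reporting ratio (PRR), from a 2×2 table of report counts. This is a minimal illustration only: the counts are hypothetical, and the thresholds and shrinkage used in real FAERS or VigiBase analyses are not reproduced here.

```python
import math

def disproportionality(a, b, c, d):
    """Compute ROR and PRR from a 2x2 table of spontaneous report counts.

    a: reports with the drug of interest AND the event of interest
    b: reports with the drug but other events
    c: reports with other drugs and the event of interest
    d: reports with other drugs and other events
    """
    ror = (a / b) / (c / d)                      # reporting odds ratio
    prr = (a / (a + b)) / (c / (c + d))          # proportional reporting ratio
    # 95% CI for the ROR on the log scale (standard Woolf approximation)
    se_log_ror = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci = (math.exp(math.log(ror) - 1.96 * se_log_ror),
          math.exp(math.log(ror) + 1.96 * se_log_ror))
    return ror, prr, ci

# Hypothetical counts: 40 reports pair drug X with event Y, and so on.
print(disproportionality(a=40, b=960, c=200, d=98800))
```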
Real-World Data
Real-world data, containing both structured and unstructured data, for example, insurance claims, EHRs, and registry databases, offer new opportunities for pharmacovigilance, as they provide a longer duration of follow-up, better ascertainment of exposure and outcomes, and a more complete collection of confounding variables such as comorbidities and co-prescribed medications [35]. Comparison groups can also be identified in RWD databases using matching techniques. However, the timeliness of RWD collection has been an issue with claims or registry databases [30]. Electronic health records were considered a better choice in terms of data timeliness, but data quality issues such as non-random missingness and discrepancies across databases have also made rapid utilization of RWD from EHRs difficult [30]. Despite these limitations, RWD databases have enabled a transition from traditional "passive" surveillance toward "active" surveillance, and have thus received considerable attention in the field of pharmacovigilance. Notably, RWD are superior in that they offer longitudinal data for each subject. Therefore, an increasing number of studies have explored temporal relation extraction [36] using RWD to increase the confidence level of detected signals.
Fig. 1 Relationships between pharmacovigilance data sources, analytical approaches, pharmacovigilance tasks, and causal inference paradigms. Each data source is commonly analyzed by specific analytical approaches depending on the characteristics of data in those data sources. Each pharmacovigilance task is also associated with specific analytical approaches. Causal inference paradigms are integrated with different analytical approaches and applied to pharmacovigilance tasks. ADE adverse drug event, LSTM long short-term memory, NLP natural language processing, RNN recurrent neural network, RWD real-world data, SVM support vector machine
There has been progress in the better utilization of RWD for observational studies in pharmacovigilance, including: (1) the development of common data models [37] such as the Observational Medical Outcomes Partnership [38][39][40] to facilitate rapid data extraction from unstructured RWD; (2) traditional epidemiologic methods (or slightly modified variants) adapted for signal detection, including the self-controlled case series study [41], the self-controlled cohort analysis [42], the tree-based scan statistic [6,7], and prescription symmetry analysis [43]; and (3) new ML/DL approaches applied to temporal analysis [36] and relational learning [44]. Patient event-level or code-level embeddings have also been computed for downstream predictive modeling using RWD [45].
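As a minimal illustration of one of these self-controlled designs, the sketch below computes a crude sequence ratio for a prescription (sequence) symmetry analysis from hypothetical dispensing dates; a real analysis would also compute a null-effect sequence ratio to adjust for prescribing trends, which is omitted here.

```python
from datetime import date

# Hypothetical first-dispensing dates per patient for an index drug and a
# marker drug (a drug often prescribed to treat the suspected ADE).
patients = {
    "p1": {"index": date(2020, 1, 10), "marker": date(2020, 3, 2)},
    "p2": {"index": date(2020, 5, 1),  "marker": date(2020, 4, 20)},
    "p3": {"index": date(2021, 2, 14), "marker": date(2021, 6, 30)},
}

# Count patients who received the index drug before vs. after the marker drug.
before = sum(1 for p in patients.values() if p["index"] < p["marker"])
after = sum(1 for p in patients.values() if p["index"] > p["marker"])

# Crude sequence ratio: values well above 1 suggest the marker drug tends to
# follow the index drug, which is compatible with an adverse effect signal.
crude_sequence_ratio = before / after if after else float("inf")
print(crude_sequence_ratio)
```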
Social Media and Biomedical Literature
Social media, such as social networks, health forums, question-and-answer websites, and other types of online health information-sharing communities, is another resource containing potentially useful and highly timely information for pharmacovigilance. Biomedical literature, including research articles, case reports, and drug labels, is considered a more reliable source of unstructured data for pharmacovigilance compared with social media data. Association rule mining has commonly been used for the extraction of drug-event pairs or drug-drug interactions from social media and the literature [46][47][48]. Advancements in NLP have enabled relation extraction of drug-event pairs from the above-mentioned unstructured data sources for pharmacovigilance [49][50][51][52][53]. Advanced machine learning, such as supervised learning, has also been applied to extract ADEs from social media and the biomedical literature. For example, Patki et al. [54] used supervised machine learning algorithms to classify sentences into two classes, one with ADE mentions and another without, before inferring the experienced ADE. Several shared tasks based on social media and biomedical text data have significantly accelerated the development of ADE detection using these two data sources, for example, the Drug-Drug Interaction Extraction 2011 challenge task [55] and the Social Media Mining for Health (SMM4H) shared task [56].
Knowledge Bases
With the development of ML/DL techniques, particularly graph mining, knowledge bases have become a rising data source for pharmacovigilance studies, especially in the pre-marketing phase. Drug chemical databases [57], drug target databases [58] (including a side-effects database [59]), biomedical pathway databases [60], protein interaction databases [61], and drug interaction databases [62] are some of the most used knowledge bases in pharmacovigilance studies. Logistic regression, Naive Bayes, k-Nearest Neighbor, Decision Tree, Random Forest, and Support Vector Machine are commonly used algorithms for the prediction of unknown ADEs using knowledge bases. The algorithms were typically compared with each other on a given dataset before the best-performing algorithm was selected [57,58]. Recent advancements in Graph Neural Networks (GNNs) have led to increasing interest in using knowledge bases for ADE prediction, as GNNs have achieved superior performance compared with other machine learning algorithms. In more recent work, the graph structures of knowledge bases were integrated with RWD to enhance the causal interpretability of ADE detection [63].
Each of these data sources has its own advantages and biases and is suitable for different pharmacovigilance tasks at different phases (pre-marketing or post-marketing). We summarize this information in Table 1. Even though we discuss each of the data sources separately in Table 1, we observed that the trend in pharmacovigilance is to employ more than one type of data source [64][65][66][67][68]. We also observed a trend to combine multiple analytical approaches; for example, [44] combined sequence analysis with supervised learning, [69] used NLP to extract features from free text, which were later used in supervised learning, and [70] proposed a novel synthesis of unsupervised pretraining, representational composition, and supervised machine learning to extract relational information from the biomedical literature. Both data source integration and analytical approach synthesis will facilitate the design of a generalizable and causally explainable ML/DL framework.
Traditional Causal Inference Paradigm and Integration with Machine Learning
Most pharmacovigilance studies are observational because of the nature of the data used for analysis. However, observational studies have only a limited ability to prove causality, i.e., to establish probabilities under conditions (adverse events) that are changed and induced by treatments or external interventions [80]. Conducting causal inference in observational studies requires either randomization or a rigorous study design [81,82]. In most cases of long-term pharmacovigilance, randomized trials are not feasible; therefore, observational studies became the more favorable approach for this task. However, there are many challenges in both the design and analysis stages when drawing causal conclusions from retrospective observational studies. The primary challenge is to distinguish between causal and associative relationships in observational data in the presence of confounders (i.e., factors related to both the exposure and the outcome) and colliders (i.e., factors influenced by both the exposure and the outcome). While a multivariable regression analysis is often used to adjust for potential confounders, causal effects cannot be directly estimated from it. Furthermore, temporal relationships must be captured and assessed in observational studies before causal relationships can be established [83,84]. Hill's criteria (i.e., 1. strength, 2. consistency, 3. specificity, 4. temporality, 5. biological gradient, 6. plausibility, 7. coherence, 8. experiment, and 9. analogy between exposures and outcome) are often referenced as the standard definition of causality in epidemiology [85]. They have guided the development of many causal inference models, statistical tests, and machine learning tasks for the evaluation of causality.
In this section, we discuss four causal inference paradigms in the domain of pharmacovigilance: (1) causality assessment scales, (2) propensity score matching (PSM), (3) graph-based causal inference, and (4) instrumental variables (IVs). Our discussion focuses on how ML/DL has been integrated into these traditional causal inference methods. We also discuss current progress in pharmacovigilance that has adopted causal inference-machine learning integration. Table 2 shows the relevant papers we reviewed.
Causality Assessment Scales
Various methods are available to assess the causal relationship between a drug and an ADE, based on three main approaches: (1) the expert judgment-based World Health Organization-Uppsala Monitoring Centre system; (2) the algorithm-based Naranjo causality assessment method; and (3) the probabilistic Bayesian Adverse Reactions Diagnostic Instrument (BARDI) [120]. Although the World Health Organization-Uppsala Monitoring Centre system is relatively easy to implement, it is highly dependent on an individual expert's judgment and thus suffers from poor reproducibility. The Naranjo algorithm is also simple and has good reproducibility; its disadvantages include low sensitivity for 'uncertain' cases and therefore a low detection rate for certain ADEs, and it is not valid for children, critically ill patients, drug toxicities, or drug-drug interaction (DDI) detection. The Bayesian approach is regarded as the most reliable, but its complex and time-consuming nature limits its use in routine clinical practice [120].
We found that the relationships between machine learning and causality assessment scales are three-fold. (1) Causality assessment scales serve as outcome labels in machine learning models that predict the causality of extracted drug-ADE pairs. For example, in studies [86][87][88], researchers utilized the World Health Organization-Uppsala Monitoring Centre system to create gold-standard labels of causal drug-ADE pairs, which were later used for training supervised machine learning models to perform causal classification of the identified drug-ADE pairs. Likewise, Rawat et al. [90] constructed a multi-task joint model using unstructured text in EHRs, using physicians' annotations as the gold standard. These efforts demonstrated that machine learning algorithms have some ability to predict the value of a report from an SRS, or of content from social media, for causal inference. (2) Causality assessment scales serve as features in machine learning models that predict causality. A group of researchers from Roche developed a model called MONARCSi with nine features capturing important criteria from Naranjo's scoring system, Hill's criteria, and internal Roche safety practices [89]. Their model achieved moderate sensitivity and high specificity with high positive and negative predictive values. However, this approach cannot be fully automated, restricting its potential for future application; thus, automated tools for extracting features capturing important criteria from Naranjo's scoring system or Hill's criteria are desirable. (3) Machine learning methods were employed to extract Naranjo score features and improve the efficiency of causality assessment score calculations. As discussed above, the inability to automate the extraction of Naranjo score features restricted the adoption of the decision support system proposed by Roche. Recent work by Rawat et al. [90,91] offered solutions to this limitation. In [90], they formulated Naranjo questions as an end-to-end question-answering task and used a Bidirectional Long Short-Term Memory (BiLSTM) network to predict the scores for a subset of Naranjo questions. Later, in [91], they used Bidirectional Encoder Representations from Transformers (BERT) to extract relevant paragraphs for each Naranjo question and then used a logistic regression model to predict the Naranjo score for each drug-ADE pair. To sum up, with the availability of such automated feature extraction methods, causality assessment scales can be computed more efficiently and integrated with machine learning models for causal classification.
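As a minimal, hypothetical sketch of relationship (2) above, the code below trains a logistic regression on binary Naranjo-style features to score the causality of drug-ADE pairs. It is not MONARCSi or the models of Rawat et al.; the features, labels, and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical binary features loosely inspired by Naranjo-style questions
# (e.g., did the event follow drug administration, did it abate on
# dechallenge, did it reappear on rechallenge, are alternative causes absent).
X = np.array([
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
])
# Hypothetical gold-standard causality labels (1 = probable/definite ADE),
# e.g., derived from expert WHO-UMC assessments.
y = np.array([1, 0, 0, 1, 0, 1])

clf = LogisticRegression().fit(X, y)
# Predicted probability that a new drug-ADE pair is causal.
print(clf.predict_proba([[1, 1, 0, 0]])[0, 1])
```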
Table 1 Data sources for pharmacovigilance, with their analytical approaches, pharmacovigilance tasks, and advantages and biases
Propensity Score Matching
Matching has been widely used in observational or cohort studies for drug safety investigation [14,[121][122][123][124][125] by strategically subsampling the dataset to balance the confounder distribution between the treatment and control groups so that both groups share a similar probability of receiving treatment [126]. It allows observational studies to be designed similarly to randomized designs, with the outcome being independent of confounders [127]. Matching methods have evolved from "exact" matching to matching on propensity scores and on to algorithmic matching, where machine learning algorithms are used for the matching process [92]. Regardless of the type of matching, this approach is typically applied during data preprocessing or cohort construction. Matching involves two steps: (1) definition of a similarity metric (e.g., the propensity score) and (2) matching of controls to treated subjects based on the defined metric [128]. While some recent algorithmic matching techniques, such as Dynamic Almost-Exact Matching with Replacement (D-AEMR) [19] and DeepMatch [129], do not necessarily use a propensity score as the similarity metric, matching on a propensity score is still the most widely adopted method in observational studies. Therefore, we focus our discussion on PSM in the following paragraphs.
Table 2 Categorization of papers reviewed regarding data sources and machine learning methods used for four causal inference paradigms (RWD real-world data, SRS spontaneous reporting system, SVM support vector machine). Papers for "propensity score matching" and "instrumental variables" are not applied in the field of pharmacovigilance; papers for "graph-based causal inference" still lack a clear causal interpretation from a graph perspective.
Propensity score matching enables the estimation of the causal effect of treatments. However, the definition of similarity and the selection of covariates before matching may sometimes hinder the causal inference power of matching [130]. In other words, it can be hard to account for all possible confounders, and an inappropriate assumption of similarity is likely to undermine the matched analysis. Machine learning has inspired new methods for propensity score estimation that are hypothesis-free and thus enhance the causal inference ability of PSM. Traditional PSM mainly used logistic regression for propensity score estimation. A more recent study showed promising performance improvement by using tree-based algorithms such as Classification and Regression Trees (CART) and bagging algorithms such as Random Forest for propensity score estimation [92]. Contrary to statistical models that fit models with assumptions and estimate parameters from the data, machine learning models tend to learn the relationship between features and outcomes without an a priori model, i.e., hypothesis-free [131]. Additionally, machine learning models are also useful in addressing the "curse of dimensionality" when the number of covariates becomes too large, which has become very common in the era of "big data" [132]. For example, Zhu et al. were able to control the number of covariates, and thus balance the trade-off between the bias and variance of a propensity score estimator, by tuning the number of optimal trees in a tree-based boosting algorithm [20].
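A minimal sketch of machine learning-based propensity score matching is shown below, assuming scikit-learn and a synthetic cohort: a Random Forest replaces logistic regression for propensity estimation, followed by 1:1 nearest-neighbour matching on the estimated score. A real study would additionally check covariate balance, use calipers or matching with replacement as appropriate, and estimate treatment effects on the matched sample.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic cohort: covariates X and binary treatment t (drug exposure).
n = 1000
X = rng.normal(size=(n, 5))
t = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

# Step 1: estimate propensity scores with a machine learning model instead of
# the traditional logistic regression.
ps_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, t)
scores = ps_model.predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbour matching of controls to treated subjects on
# the estimated propensity score.
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(scores[control].reshape(-1, 1))
_, idx = nn.kneighbors(scores[treated].reshape(-1, 1))
matched_controls = control[idx.ravel()]

# The matched sample (treated + matched_controls) can then be used to compare
# ADE rates between exposed and unexposed patients.
print(len(treated), len(matched_controls))
```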
Integration of PSM and machine learning techniques has been found frequently in observational studies [94][95][96][100], including but not limited to treatment effect estimation and outcome evaluation [93][97][98][99], which all showed promising performance improvement compared with traditional PSM. Theoretical development of PSM-machine learning combinations is also booming through the development and use of simulated datasets [133][134][135][136]. However, such a combination has not yet been utilized or discussed in the domain of pharmacovigilance. Propensity score matching is important for pharmacovigilance studies [14,137]. As more data or covariates become available for pharmacovigilance, the combination of PSM and machine learning can handle large covariate sets and reduce bias and variance compared with traditional PSM. Therefore, we foresee that machine learning-integrated PSM will empower future studies in pharmacovigilance.
Graph-Based Causal Inference
The graph is a common data structure that consists of a finite set of vertices (concepts) and a set of edges that represent relationships (semantic or associative) between the vertices. Graph-based methods are mainstream in both exploratory machine learning and causal inference paradigms. Graph-based methods also offer theoretical and systematic representations of causality that do not require an a priori model [138][139][140]. They can be applied to analyze integrated data from various databases, e.g., knowledge bases, molecular (multi-omics) databases, and RWD databases, for causal signal detection.
In pharmacovigilance, because of the complex nature of relationships between drugs, diseases (indications, comorbidities, or adverse events), and individual characteristics (e.g., demographics, multi-omics), graph-based ML/DL methods demonstrate their strength in modeling these complicated topologies. Graph-based methods can be applied in two separate phases of pharmacovigilance: pre-marketing and post-marketing. The rationale behind pre-marketing ADE prediction is to identify potential ADEs from a biological mechanism perspective: chemical structure, DDIs, and protein-protein interactions (PPIs). Traditionally, researchers utilized chemical structures [13,57] or biological phenotypes [58,103,141] from graph knowledge bases to predict potential adverse effects of a drug candidate. More recently, Zhang et al. predicted potential adverse effects of a drug candidate using a knowledge graph embedding generated from DrugBank [142]. Dey et al. [102] developed an attention-based deep learning method to predict adverse drug effects from chemical structures using SIDER; the hidden attention scores were utilized to interpret and prioritize the associative relationships between the presence of drug substructures and ADEs. Zitnik et al. [104] applied graph convolutional neural networks to predict potential side effects induced by PPI networks [61] and DDI networks [60,62]. Researchers have also constructed knowledge graphs through literature mining [101]. Most of the papers using graph-based methods address pre-marketing ADE prediction because knowledge bases regarding biomarkers, drug targets, disease indications, and adverse effects are readily available.
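The sketch below illustrates the general shape of such graph-based prediction: a two-layer graph convolutional encoder with a dot-product decoder scoring candidate drug-ADE links, written with PyTorch Geometric (assumed to be installed). The toy graph, features, and labels are synthetic, and this is not a reimplementation of the cited models.

```python
import torch
from torch_geometric.nn import GCNConv


class LinkPredictor(torch.nn.Module):
    """Two-layer GCN encoder with a dot-product decoder for scoring node pairs."""

    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)

    def forward(self, x, edge_index, pairs):
        h = torch.relu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)
        # Score each candidate (drug, ADE) node pair with a dot product.
        return (h[pairs[0]] * h[pairs[1]]).sum(dim=-1)


# Toy graph: 4 nodes (say, 2 drugs and 2 ADEs), random 8-dim features, and two
# known undirected edges (0-2 and 1-3) listed in both directions.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 2, 1, 3],
                           [2, 0, 3, 1]])
# Candidate links to score: (0, 3) and (1, 2), with hypothetical labels.
pairs = torch.tensor([[0, 1],
                      [3, 2]])
labels = torch.tensor([1.0, 0.0])

model = LinkPredictor(in_dim=8, hidden_dim=16)
logits = model(x, edge_index, pairs)
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()  # one illustrative training step (optimizer omitted)
print(float(loss))
```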
As more clinical or observational databases become available, researchers have transitioned from using a single data source, for example, knowledge bases, towards combining RWD in their analyses. For example, Kwak et al. [63] predicted ADE signals via GNNs from a graph constructed by combining a knowledge base and EHR data. Several recent studies proposed using graphs generated from both knowledge bases and EHRs for safe medication recommendations [105,106,109]. In [106], graph embeddings were combined with a memory-network recommender system. In [105], drug-ADE pairs were identified through a link prediction task. In [109], an encoder-decoder attention-based model was proposed for sequential decision making on drug selection in a multi-morbidity polypharmacy situation. Additionally, the characteristics of RWD enabled researchers to incorporate the temporality and sparsity of the features into signal detection models [110,111].
Machine learning/deep learning frameworks demonstrated improved performance in structure learning compared with the baseline greedy search scoring strategy [18,143,144] for the identification of causal graph structures with the highest score or probability. In the meantime, causal inference methods were introduced to graph-based ML/DL models to improve the explainability and generalizability of those ML/DL models. For instance, Narendra et al. adopted counterfactual reasoning for causal structural learning [145]. Lin et al. utilized a loss function based on Granger causality to provide generative causal explanations for GNN models [17]. Rebane et al. evaluated the temporal relevancy of medical events to interpret medical code-level feature importance [107]. In a more recent paper, Rebane et al. incorporated the SHAP (SHapley Additive exPlanations) framework to provide more clinically appropriate global explanations in addition to medical code-level explanations captured by attention mechanisms [108].
While the advancement of ML/DL has enabled a plethora of graph-based data mining studies in pharmacovigilance, causality interpretation is still not explicitly discussed in most of those papers. We cannot naively equate link prediction with causal inference. Moreover, existing knowledge bases are not causal graphs; existing links may only be associative and carry varying levels of confidence in terms of causality. Among all the papers reviewed, only [17] had a clear causality evaluation. We attribute the lack of causality interpretation to the shortage of graph-based benchmark datasets with causal components in the domain of pharmacovigilance. Currently, most studies use SIDER [102,103] or datasets integrating multiple knowledge bases as the benchmark. In [112], for example, the authors used Pauwels's dataset [57], Mizutani's dataset [146], and Liu's dataset [58] as the benchmark datasets. The currently prevailing benchmark datasets lack a causal component, for example, a level of confidence for relationships. We believe a benchmark dataset with causal components and/or with integrated information from multiple sources could significantly benefit the development of causally explainable graph mining models.
Instrumental Variables
Estimation of causal relationships through an IV can adjust for both observed and unobserved confounders. This is a major advantage over methods such as stratification, matching, and multiple regression, which only allow adjustment for observed confounding variables. An IV is an additional variable, Z, that is used in a regression analysis to evaluate the causal effect of an independent variable X on a dependent variable Y (Fig. 2). The assumptions for Z to be a valid IV are that: (1) Z is correlated with the regressor X, (2) Z is uncorrelated with the error term U, and (3) Z is not a direct cause of the outcome variable Y. Therefore, Z only influences Y through its effect on X. However, IV-based methods also attract criticism. First, different instruments will identify different subgroups and thus yield different numerical treatment effects. Another criticism is that one cannot rule out "mild" violations of the assumptions. Finally, IV estimators are consistent but not unbiased.
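A minimal two-stage least squares (2SLS) sketch on synthetic data is shown below to illustrate how a valid instrument recovers a causal effect that naive regression misses; the data-generating process and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Synthetic data: U is an unobserved confounder, Z a valid instrument
# (correlated with exposure X, independent of U, no direct effect on Y).
U = rng.normal(size=n)
Z = rng.normal(size=n)
X = 0.8 * Z + U + rng.normal(size=n)          # drug exposure
Y = 1.5 * X + 2.0 * U + rng.normal(size=n)    # true causal effect of X on Y is 1.5

def ols(design, y):
    return np.linalg.lstsq(design, y, rcond=None)[0]

ones = np.ones(n)
# Naive OLS of Y on X is biased upward because U confounds X and Y.
naive = ols(np.column_stack([ones, X]), Y)[1]

# Two-stage least squares: (1) regress X on Z, (2) regress Y on the fitted X.
x_hat = np.column_stack([ones, Z]) @ ols(np.column_stack([ones, Z]), X)
iv_estimate = ols(np.column_stack([ones, x_hat]), Y)[1]

print(f"naive OLS: {naive:.2f}, 2SLS: {iv_estimate:.2f}")  # 2SLS should be near 1.5
```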
Several pharmacovigilance studies used an IV to investigate the adverse impact of certain medications. For example, Brookhart et al. [15] used physician preference for a cyclooxygenase-2 inhibitor over non-selective non-steroidal anti-inflammatory drugs as the IV to assess the adverse effect of cyclooxygenase-2 inhibitor use on gastrointestinal complications. Ramirez et al. [147] investigated the adverse effect of rosiglitazone on cardiovascular hospitalization and all-cause mortality using the facility proportion of patients taking rosiglitazone as the IV; the study found an increased risk of all-cause and cardiovascular mortality among patients taking rosiglitazone versus those who were not. Groenwold et al. [148] studied the effect of the influenza vaccine on mortality as reported in many observational studies. That study evaluated the usefulness of five IVs, including a history of gout, a history of orthopedic morbidity, a history of antacid medication use, and general practitioner-specific vaccination rates, in assessing the effect of influenza vaccination on mortality adverse events; these IVs were found not to meet the necessary criteria because of their association with the outcome. In the field of causal inference for pharmacovigilance, IV-based methods have been overshadowed by PSM and graph-based methods because of the difficulty of finding a valid and unbiased IV that can serve as a randomization factor.
Fig. 2 Graph representations of relationships between X, Y, Z, and U under instrumental variable assumptions
Recently, a few studies have explored using machine learning to improve the efficiency and fairness of IV learning from observational data. Hartford et al. [114] proposed the DeepIV framework, an approach that trains deep neural networks by leveraging IVs to minimize the counterfactual prediction error. DeepIV has two prediction stages: in the first stage, it performs treatment prediction; in the second stage, it calculates its loss by integrating over the conditional treatment distribution. The authors claimed that DeepIV estimates causal effects by adopting this adapted loss function, which helps to minimize counterfactual prediction errors. The proposed framework was also able to replicate a previous IV experiment with minimal feature engineering. Singh et al. [119] proposed a general framework called MLIV (machine-learned IVs) that allows IV learning through any machine learning method and causal inference using the IVs to be performed simultaneously. They showed that their method significantly improved causal inference performance in experiments on both simulated and real-world datasets. McCulloch et al. [16] proposed another framework for modeling the effects of IVs and other explanatory variables using Bayesian Additive Regression Trees (BARTs). Their results showed that when nonlinear relationships were present, the proposed method improved performance dramatically compared with linear specifications. While these new advancements in IV learning have not yet been adopted in pharmacovigilance studies, they create new potential when integrated with other causal inference study designs, for example, algorithmic matching [149], Mendelian randomization [113], and counterfactual prediction [118].
Issues with Machine Learning and Why Causality Matters
Machine learning/deep learning algorithms are good at identifying correlations but not causation. In many use cases, correlation suffices; however, this is not the case in pharmacovigilance or, generally speaking, the healthcare domain. Without an evaluation of causality, ML/DL algorithms suffer from a myriad of issues concerning generalizability, explainability, and fairness. The ML/DL research community has directed increasing attention to improving generalizability, explainability, and fairness in recent years. As discussed in the previous paragraphs, ML/DL has been integrated with traditional causal inference paradigms to enhance the performance of those paradigms. Conversely, fitting ML/DL into a causal inference paradigm can enhance the generalizability, explainability, and fairness of ML/DL models. Addressing these issues is critical to providing high-quality evidence for pharmacovigilance if machine learning is to be employed for signal detection.
Generalizability
Generalizability is the ability of a machine learning model trained on a sample dataset to perform on unseen data. Generalizability is important for the wide adoption of machine learning models. Recent work utilized cross-validation [150,151] or external validation [152,153] to examine the generalizability of the proposed machine learning models. More recently, anchor regression was proposed to deal with conditions in which the training and test data distributions differ by a linear shift [154]. Anchor regression makes use of external variables to modify the least-squares loss. If anchor regression and least-squares provide the same answer ('anchor stability'), the model can be considered invariant under certain distributional changes. Comparing different ML/DL methods using ensemble methods or robust feature selection can avoid overfitted models and thus secure model generalizability [155]. In recent work, we observed that the trend in pharmacovigilance is to employ more than one type of data source [64][65][66][67][68] and to compare or combine multiple analytical approaches [44,69,70]. We also observed that causal inference models were adopted for feature selection. For example, Rieckmann et al. presented the Causes of Outcome Learning approach, which fits all exposures in a causal model and then uses ML models to identify combinations of exposures responsible for an increased risk of a health outcome [156]. We foresee that data source integration, new analytical approaches (e.g., anchor regression to address the data shift issue), and causal feature selection will benefit the design of a generalizable ML/DL framework for pharmacovigilance.
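A minimal sketch of the anchor regression estimator referenced above is given below, assuming the standard formulation in which the least-squares residuals are re-weighted by a projection onto the anchor variables; the anchors, data, and penalty value are synthetic and for illustration only.

```python
import numpy as np

def anchor_regression(X, Y, A, gamma):
    """Anchor regression: least squares on data transformed by
    W = (I - P_A) + sqrt(gamma) * P_A, where P_A projects onto the column
    space of the anchor variables A. gamma = 1 recovers ordinary least squares."""
    P = A @ np.linalg.pinv(A)                 # projection onto span(A)
    W = np.eye(len(Y)) - P + np.sqrt(gamma) * P
    beta, *_ = np.linalg.lstsq(W @ X, W @ Y, rcond=None)
    return beta

rng = np.random.default_rng(2)
n = 500
A = rng.normal(size=(n, 1))                   # anchor, e.g. a data source or site indicator
X = np.column_stack([np.ones(n), A[:, 0] + rng.normal(size=n)])
Y = 2.0 * X[:, 1] + 0.5 * A[:, 0] + rng.normal(size=n)

print(anchor_regression(X, Y, A, gamma=1.0))  # ordinary least squares
print(anchor_regression(X, Y, A, gamma=10.0)) # more protection against shifts along A
```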
Explainability
Explainable AI (XAI) refers to ML/DL models whose results or analytical processes are understandable by humans, in contrast to "black box" designs where researchers cannot explain why a model arrives at a specific output [157]. This is especially important for domains such as healthcare that require an understanding of the causal relationships between features and outcomes for decision support. Several ML/DL algorithms are inherently "explainable" through feature importance, for example, Random Forest and logistic regression; however, a causality explanation does not equate to feature importance or regression coefficients. As in the case of [107,108], the authors utilized feature weights to interpret the contribution of each medical code to the predicted ADE outcome, but a causal explanation between medical codes and ADE incidence cannot be established this way. Similarly, we cannot naively equate link prediction with a causality explanation, although several existing graph-based XAI works were framed as a link prediction task, for example, prediction of a potential PPI, DDI, or drug-ADE link given a medication [13,101,102,104,112]. Therefore, integration of causality evaluation is much needed to improve the power of XAI models. For example, the following studies integrated three different causal inference approaches to enhance the explainability of drug-event relationships for ADE detection: [17] (Granger causality), [145] (counterfactual reasoning), and [158] (combination of a transformer-based component with a do-calculus causal inference paradigm). These three causal inference approaches have not been extensively used for pharmacovigilance tasks, so we did not discuss them in previous sections; however, future researchers might be able to integrate them with ML/DL models to enhance model explainability. Additionally, as discussed earlier in Sect. 3.3, a benchmark dataset (e.g., a PPI, DDI, or drug-ADE network) with causal relationships between graph features, for example, a level of confidence, could significantly benefit the development of XAI models for pharmacovigilance studies.
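To illustrate the distinction drawn above, the sketch below computes SHAP values for a tree-based ADE classifier on synthetic data (assuming the shap package is installed); the resulting attributions explain the model's predictions, not the causal effects of the features.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
# Synthetic patient features (e.g., age, dose, comorbidity flags) and a
# hypothetical binary ADE outcome.
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=300) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer gives per-feature additive contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Note: these are explanations of the *model*, not estimates of causal effects
# of the features on the ADE.
print(np.shape(shap_values))
```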
Fairness
Machine learning fairness is a recently established area that studies how certain biases (e.g., race, gender, disabilities, and sexual or political orientation) in the data and model affect model predictions for individuals. This issue has attracted more attention during the current pandemic, as health disparities came under public scrutiny [159]. Racial disparity is also a significant issue in ADE detection; as pointed out in a review paper, 27 of 40 pharmacovigilance studies reviewed demonstrated the presence of a racial or ethnic disparity [160]. Therefore, Du et al. [161] proposed adopting a kernel re-weighting mechanism to achieve global fairness of the learned model. Several ML/DL fairness studies have leveraged feature importance to understand which features contribute more or less to model disparity [162,163]. A recent study proposed decomposing the disparity into the sum of contributions from fairness-aware causal paths linking sensitive features and the predictions on a causal graph [159]. The same group of researchers also proposed a Federated Learning framework that balances algorithmic fairness and performance consistency across different data sources [164]. The work discussed above, however, was applied only to datasets and tasks in the general healthcare domain. We have not found any work on machine learning fairness in the pharmacovigilance domain, which points to a new direction worthy of exploration in the future. We anticipate that the new approaches introduced in [159,[161][162][163][164] can be extended to pharmacovigilance studies as well. Furthermore, while causal inference paradigms have not yet been utilized to address the machine learning fairness issue, we anticipate that the integration of causal inference paradigms with machine learning algorithms may also be a potential direction.
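As a minimal illustration of the kind of audit such work builds on, the sketch below compares ADE-detection recall across two groups defined by a hypothetical sensitive attribute; it is a simple equal-opportunity-style check, not the kernel re-weighting or causal-path decomposition methods cited above.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
group = rng.integers(0, 2, size=n)            # hypothetical sensitive attribute (2 groups)
y_true = rng.integers(0, 2, size=n)           # true ADE occurrence
y_pred = np.where(rng.random(n) < 0.9, y_true, 1 - y_true)  # imperfect detector

def recall(y_t, y_p):
    positives = y_t == 1
    return (y_p[positives] == 1).mean()

# Equal-opportunity style audit: compare ADE detection recall across groups.
r0 = recall(y_true[group == 0], y_pred[group == 0])
r1 = recall(y_true[group == 1], y_pred[group == 1])
print(f"recall group 0: {r0:.2f}, group 1: {r1:.2f}, gap: {abs(r0 - r1):.2f}")
```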
Current Challenges, Trends, and Future Directions
To summarize the discussion from the above sections, we found that missing data and data quality pose significant issues for the currently dominant pharmacovigilance data sources. Researchers have attempted to address these issues through (1) integration of multiple data sources, (2) development of analytical approaches to impute missing data and mitigate other data issues (e.g., unbalanced confounder distribution, biased samples), and (3) development of novel estimators that allow estimation from incomplete or biased data. New methodological advancements in machine learning, causal inference, and, especially, the integration of the two have accelerated progress in each of the three directions above. On the one hand, the adoption of machine learning has facilitated the efficient implementation of traditional causal inference paradigms. On the other hand, the adoption of causal inference paradigms has facilitated our understanding of, and thus helps address, current issues with machine learning models. High rates of under-reporting and missing covariate information in SRS have undermined the power of SRS for pharmacovigilance [165]. While regulatory approaches were previously proposed to improve reporting, current approaches to address the under-reporting issue come from two directions. 1. Treating the missingness pattern itself as informative: [168] revealed that when clinical measurements have a high missing rate, the number of times they were taken by one patient is ranked as more informative than their actual values. 2. Using machine learning to estimate under-reporting or to predict and impute under-reported cases: recent progress in machine learning has enabled the estimation of AE under-reporting rates for data quality management [170,171]. Traditionally, missing data imputation was conducted statistically via unconditional mean imputation, k-Nearest Neighbor imputation, multiple imputation, or regression-based imputation [172,173]. Here, we highlight only a few more recent studies incorporating machine learning approaches. Nestsiarovich et al. [174] proposed using supervised machine learning (classification) to impute self-harm cases that were significantly under-reported in EHRs; they demonstrated that using the combined coded and imputed cohort, the power of their analysis could be enhanced. Another work, by Sechidis et al. [175], presented solutions using the m-graph, a graphical representation of missingness that incorporates a prior belief about under-reporting; they demonstrated an approach to correct mutual information for under-reporting by examining independence properties observed through the m-graph. Their work reflects a recent interest in the field of machine learning in PU learning [176], i.e., learning from positive and unlabeled data. The assumption of PU learning is that each unlabeled data point could belong to either the positive or the negative class; therefore, potentially under-reported cases could be estimated from unlabeled data. Alternatively, the anchor variable framework may be adopted to reduce dependency on gold-standard labels for unlabeled cases [177][178][179]. These new directions in machine learning could provide potential solutions to alleviate the under-reporting issue.
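To make the PU-learning idea concrete, the sketch below applies the standard Elkan-Noto correction to a synthetic under-reporting scenario: only a fraction of true ADE cases are labeled, a classifier is fit to labeled-versus-unlabeled status, and its scores are rescaled by the estimated labeling frequency. The data and numbers are invented; this is not the m-graph approach of Sechidis et al.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 5000
X = rng.normal(size=(n, 3))
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)      # true (partly unobserved) ADE status

# Under-reporting: only 30% of true positives are labeled; the rest are unlabeled.
labeled = (y == 1) & (rng.random(n) < 0.3)
s = labeled.astype(int)                                  # observed label: 1 = reported ADE

# Elkan-Noto: train a classifier for P(s=1 | x), then estimate the labeling
# frequency c = P(s=1 | y=1) as the mean score over held-out labeled cases.
X_tr, X_val, s_tr, s_val = train_test_split(X, s, test_size=0.3, random_state=0)
g = LogisticRegression().fit(X_tr, s_tr)
c = g.predict_proba(X_val[s_val == 1])[:, 1].mean()

# Corrected probability of a true ADE: P(y=1 | x) = P(s=1 | x) / c (capped at 1).
p_true = np.minimum(g.predict_proba(X)[:, 1] / c, 1.0)
print(f"estimated labeling frequency c = {c:.2f}")
```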
In terms of machine learning for traditional causal inference paradigms, we observed that new advancements in PSM and IV learning through machine learning-causal inference integration have not yet been adopted in pharmacovigilance studies. However, theoretical advancements or successful adoptions in other domains demonstrated new potentials for future adoption of the integrated paradigm in the pharmacovigilance domain. For graph-based causal inference, while both graph databases and graph mining methods for pharmacovigilance are booming, causal interpretations from the graphs as well as the algorithm outputs are much needed, yet currently missing, for most of the studies. Even the currently prevailing benchmark datasets were mostly association-based. Relationships in knowledge bases may represent a certain level of causality but the level of confidence for a causal relationship was not represented explicitly. Therefore, we also recommend future researchers be very careful about the level of causality represented by graph edges when constructing graph databases.
Incorporating causal inference paradigms to address currently prominent machine learning issues in pharmacovigilance is also considered a promising future direction. It is especially worth exploring those causal study designs that are less utilized in pharmacovigilance tasks, for example, Granger causality, counterfactual reasoning, and do-calculus. In addition, there is a scarcity of work addressing the machine learning fairness issue through the incorporation of causal paradigms, which may thus be a new direction for future pharmacovigilance studies.
Finally, to examine the distribution and trend in this research area, we considered 19 publications to fall into the intersection of machine learning, causal inference, and pharmacovigilance [86-91, 101-112, 158]. The breakdown of the 19 papers by year and country is shown in Fig. 3. The earliest paper was published in 2014 and utilized knowledge bases to predict potential ADEs. We observed a trend in which older papers mostly used databases such as knowledge bases or social media to predict or monitor ADEs, while more recent papers utilized RWD, SRS, or a combination of multiple databases. North America was dominant in this research area, followed by Europe. This may be owing to the availability of datasets for analysis.
Fig. 3 Year and continent distribution of the 19 papers most relevant to the intersection of machine learning, causal inference, and pharmacovigilance
Conclusions
In this paper, we reviewed (1) data sources and tasks for pharmacovigilance, (2) traditional causal inference paradigms and the integration of machine learning into those paradigms, and (3) issues with machine learning and how causal designs could mitigate them. First, we found that most existing data sources and tasks for pharmacovigilance were not designed for causal inference; in the meantime, low data quality undermines the ability to evaluate causal relationships. As establishing a causal relationship is important in pharmacovigilance, research on enhancing data quality and data representation will be an imperative step towards high-quality pharmacovigilance studies. Second, we observed that pharmacovigilance has lagged in adopting machine learning-causal inference integrated models, which points to some missed opportunities; for example, machine learning-based PSM and IV learning can be further developed and refined for pharmacovigilance tasks. Finally, we recognized that attempts have been made to address currently prominent issues with correlation-based ML/DL models, especially through the incorporation of causal paradigms. Therefore, we anticipate that the pharmacovigilance domain can benefit from progress in the ML/DL field, especially through the integration of machine learning and the causal inference paradigm.
Declarations
Funding This article was funded by National Institutes of Health grants U01TR003528 and R01LM013337.
Conflict of interest
The authors declare that they have no competing interests.
Ethics approval Not applicable.
Consent for publication Not applicable.
Data availability The publications reviewed in this paper are all available online.
Code availability Not applicable.
Author contributions YZ and YL originated and planned the scope of the study. YY drafted Sect. 1, HW drafted Sect. 2.2, YL drafted Sect. 2.3, YD drafted Sect. 2.4, and YZ drafted Sects. 2.1 and 3-6. YZ and YL revised the manuscript till its final version. All of the authors have read and approved the final manuscript.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/. | 2022-05-17T13:50:24.530Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "57c779d159d7359097994dd070459154ac79d1ee",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "bc12a6af4c589ede061a9b97a4ba934abd1e2527",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54654188 | pes2o/s2orc | v3-fos-license | Relationship between Radioulnar Incongruity of Elbow Joints and the Type of Fragmented Processus Coronoideus Medialis
The aim of the study was to find the difference between actual and anticipated frequencies of individual types of FCP (fragmented coronoid process) in relation to the extent of radioulnar incongruity. We evaluated the radiographs of elbow joints (n = 135) of dogs (n = 77) with arthroscopically (n = 109) or arthrotomically (n = 26) proven fragmented processus coronoideus medialis ulnae. Radioulnar incongruity was classified as a congruent joint (0-0.5 mm), moderate incongruity (0.6-2 mm) and marked incongruity (> 2.1 mm). In elbow joints without radiologically identifiable radioulnar incongruity (0-0.5 mm) significantly higher occurrence of fissured PCM (processus coronoideus medialis) was found (p < 0.01). In elbow joints with pronounced radioulnar incongruity (> 2.1 mm) we found significantly higher occurrence of FCP with a dislocated fragment (p < 0.001). The results of this study suggest the possibility of using the assessment of radioulnar incongruity from radiographs of elbow joints in mediolateral projection for specifying the X-ray diagnosis of FCP with regard to the type of FCP lesion.
Fragmented coronoid process, elbow, dog, radioulnar incongruence, X-ray diagnostics
Fragmented processus coronoideus medialis (FCP) of the elbow joint is the most frequently occurring developmental disease of the elbow joint in dogs (Wind and Packard 1986; Boulay 1998; La Fond et al. 2002; Meyer-Lindenberg et al. 2002; Gemmill et al. 2006). As a possible cause of the FCP, temporary or permanent incongruity of the articular surfaces of the elbow joint is considered (Guthrie et al. 1992; Fitzpatrick and O'Riordan 2004). Radiography is the basic and most commonly used technique in FCP diagnostics and in the assessment of radioulnar incongruity (Samoy et al. 2006).
Radiography is considered an insufficiently sensitive method for diagnosing moderate radioulnar incongruity (Murphy et al. 1998; Mason et al. 2002). It is, however, relatively suitable for the detection of medium and marked radioulnar incongruity (Blond et al. 2005). Mediolateral projection (ML) of the elbow joint in the standing angle and craniocaudal projection (CrCd) of the elbow are suitable projections for the assessment of its articular surface congruity. Mediolateral projection with a 90° flexion in the elbow joint appears to be the most appropriate projection for the assessment of radioulnar incongruity (Murphy et al. 1998). Larger flexion in the elbow joint accentuates radioulnar incongruity (Murphy et al. 1998). Evaluation of elbow joint congruity is subjective and may be affected by incorrect positioning (Morgan et al. 2000). Radioulnar incongruity of the elbow joint is assessed from radiographs based on the mutual position of the subchondral bone of the radial head and the subchondral bone of the distal border of the trochlear notch of the ulna (Murphy et al. 1998). The articular surface of the radial head and the craniocaudal border of the incissura trochlearis ulnae under physiological conditions form a continuous arch. In serious cases radioulnar incongruity reaches 5-6 mm (Morgan et al. 2000).
According to the appearance of the PCM during arthroscopic examination, the FCP is classified into seven variants (Bardet 1997; Griffon 2006). To our knowledge, it is so far unknown whether the occurrence of individual variants of FCP is affected by the extent of radioulnar incongruity. The aim of the study was to find whether there is a relationship between the occurrence of individual types of FCP and the extent of radiologically detectable radioulnar incongruity. The null hypothesis assumed that there is no relationship between the occurrence of individual types of FCP and the extent of radiologically identifiable radioulnar incongruity. The alternative hypothesis assumed that there is a relationship between the occurrence of individual types of FCP and the extent of radiologically detectable radioulnar incongruity.
Materials and Methods
Inclusion criteria
A total of 135 elbow joints of dogs with FCP and with the necessary complete records of the type of FCP during arthroscopic or arthrotomic treatment were chosen from the medical records of the Department of Surgery and Orthopaedics of the Small Animal Clinic, Faculty of Veterinary Medicine, University of Veterinary and Pharmaceutical Sciences Brno in the period of 2001-2008. The study did not include elbows with concurrent FCP and OCD (osteochondritis dissecans) of the medial humeral condyle, elbows with concurrent FCP and UAP (ununited anconeal process), and elbows in which radiographs were not done on X-ray cassettes of the 18 × 24 cm format.
Radiography
In all elbow joints included in the study, mediolateral (ML) radiographs in the standing angle and oblique craniocaudal projection (Cr15°L-CdMO) were made. All radiographs were done with patients under deep intravenous sedation induced by a combination of medetomidine (10-20 μg/kg i.v., Domitor, Pfizer) and butorphanol (0.2 mg/kg i.v., Butomidor, Richter Pharma AG), or under general intravenous anaesthesia (medetomidine 10-20 μg/kg i.v., butorphanol 0.2 mg/kg i.v., propofol 1 ml/kg i.v., Propofol, Abbott). Radiographs were done on the X-ray machine Proteus XR/a without the use of a grid, with the cassette placed immediately under the elbow joint. Patients were positioned by employees of the Department of Diagnostic Imaging acquainted with the technique of correct positioning of the elbow joint. Radiographs of 105 elbows were made on X-ray films using an intensifying screen with a screen speed of 100, or on mammographic cassettes and films. Radiographs of 30 elbows were made in digital form in the DICOM format at a resolution of 1170 × 2370 px (CR, Capsula XL, Fuji). X-ray films were converted to digital form in the DICOM format at a resolution of 1170 × 2370 pixels using an X-ray film scanner (Diagnostic pro plus, Vidar System Corporation).
Radioulnar incongruence measurement
Radioulnar incongruence was assessed from mediolateral radiographs of elbow joints with FCP in the standing angle using the method according to Brunnberg et al. (1999). A line was traced using the DICOM viewer (JiveX, Visus-Transfer Technology GmbH) on the digital mediolateral radiograph of the elbow joint, connecting the cranioproximal and caudoproximal border of the articular surface of the radial head, demarcating the radial plateau. A parallel line was traced to this line, intersecting the apex of the processus coronoideus lateralis ulnae. The distance between these two lines was recorded in mm (of one digit) as radioulnar incongruity (Plate XIX, Fig. 1). For the purpose of this study the elbow joints with FCP were divided according to radioulnar incongruity into the following groups: radioulnar incongruity of 0-0.5 mm (congruent joint), radioulnar incongruity of 0.6-2 mm (moderate incongruity), and radioulnar incongruity of > 2.1 mm (marked incongruity). For the measurements, radiological magnification of the object was not taken into account, given the direct contact of the elbow joint with the X-ray cassette.
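For illustration, the sketch below reproduces the geometry of this measurement in code: the perpendicular distance from the apex of the lateral coronoid process to the radial plateau line equals the distance between the two parallel lines, and the result is then assigned to one of the three incongruity categories. The landmark coordinates are hypothetical and do not come from the study's radiographs.

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / math.hypot(bx - ax, by - ay)

def classify_incongruity(distance_mm):
    if distance_mm <= 0.5:
        return "congruent (0-0.5 mm)"
    if distance_mm <= 2.0:
        return "moderate incongruity (0.6-2 mm)"
    return "marked incongruity (> 2.1 mm)"

# Hypothetical landmark coordinates digitized from a mediolateral radiograph (in mm):
cranioproximal_radius = (10.0, 20.0)
caudoproximal_radius = (25.0, 21.0)
apex_lateral_coronoid = (18.0, 23.1)

d = point_line_distance(apex_lateral_coronoid, cranioproximal_radius, caudoproximal_radius)
print(f"{d:.1f} mm -> {classify_incongruity(round(d, 1))}")
```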
FCP classification
In each elbow, the FCP variant was recorded from the operative protocol. We used the FCP classification employed at the Department of Surgery and Orthopaedics of the Small Animal Clinic, University of Veterinary and Pharmaceutical Sciences Brno, based on a modification of the arthroscopic classification of FCP variants (Bardet 1997; Griffon 2006). We divided FCP by its appearance during arthroscopic or arthrotomic treatment into seven types: fragmented medial margin of the PCM, eroded lateral rim of the PCM, fissured PCM, non-dislocated fragment of the PCM, dislocated fragment of the PCM, chondromalacia of the PCM, and osteophytes on the PCM. All surgical procedures were performed and evaluated by experienced orthopaedic surgeons (arthroscopy AN, arthrotomy MD).
Statistical analysis
We determined the absolute and relative frequencies of the individual variants of FCP within the individual degrees of radioulnar incongruity. Furthermore, we compared the observed and expected frequencies of the individual FCP variants within the individual intervals of radioulnar incongruity. Fisher's exact test was used for the statistical analysis of the data. Categories with zero frequency were not statistically evaluated.
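A minimal sketch of such an analysis in Python, using SciPy's Fisher's exact test on 2 × 2 tables built per FCP type and incongruity group; the counts below are hypothetical placeholders rather than the study data, and collapsing the full table into 2 × 2 comparisons is one common way of applying the test to an r × c table, not necessarily the exact procedure used in the study:

```python
import numpy as np
from scipy.stats import fisher_exact

# Rows: FCP types; columns: incongruity groups (congruent, moderate, marked).
# Counts are hypothetical placeholders for illustration only.
counts = np.array([
    [10,  4,  1],   # fissured PCM
    [ 3,  8, 12],   # dislocated fragment
    [ 5,  6,  4],   # non-dislocated fragment
])

def cell_fisher(table, i, j):
    """Test whether cell (i, j) is over/under-represented via a 2x2 collapse."""
    a = table[i, j]
    b = table[i].sum() - a          # same FCP type, other groups
    c = table[:, j].sum() - a       # same group, other FCP types
    d = table.sum() - a - b - c     # everything else
    return fisher_exact([[a, b], [c, d]])

for i in range(counts.shape[0]):
    for j in range(counts.shape[1]):
        if counts[i, j] == 0:
            continue  # categories with zero frequency are not evaluated
        odds, p = cell_fisher(counts, i, j)
        print(f"type {i}, group {j}: odds ratio = {odds:.2f}, p = {p:.3f}")
```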
Results
The retrospective study evaluated a total of 135 elbow joints of 77 dogs with surgically treated fragmentation of the processus coronoideus medialis. One hundred and nine elbow joints were treated arthroscopically and 26 elbow joints were treated arthrotomically. The group comprised 63 males and 14 females. In elbow joints without radiologically detectable radioulnar incongruity (0-0.5 mm) we found a significantly higher occurrence of fissured PCM (p < 0.01). In elbow joints with marked radioulnar incongruity (> 2.1 mm) we found a significantly higher occurrence of FCP with a dislocated fragment (p < 0.001).
For the other types of FCP and the individual degrees of radioulnar incongruity, no significant differences were found between the observed and expected frequencies. The absolute and relative frequencies of the individual types of FCP in relation to radioulnar incongruity are presented in Table 1.
Discussion
In this study we evaluated the frequency of the individual types of FCP in relation to radioulnar incongruity. In elbow joints without radiologically detectable radioulnar incongruity we found a significantly higher occurrence of fissured PCM (p < 0.01). In elbow joints with radiologically diagnosed marked radioulnar incongruity (> 2.1 mm) we found a significantly higher occurrence of a dislocated fragment of FCP (p < 0.001).
Some authors consider radioulnar incongruity in the elbow joint to be the primary cause of FCP (Danielson et al. 2006). Ubbink et al. (1999) found radioulnar incongruity in Bernese mountain dogs in all cases of FCP and in 80% of dogs with osteoarthrosis of the elbow joint. The cause of radioulnar incongruity is probably temporary asynchronous growth of the ulna and radius, leading to PCM overload and subsequent fragmentation. Elbow joints with FCP and marked radioulnar incongruity are more often diagnosed in young dogs; less frequently they are described in older dogs (Morgan et al. 2000). The degree of radioulnar incongruity probably affects the severity of clinical symptoms. According to some authors (Wind and Packard 1986; Samoy et al. 2006), moderate incongruity need not be the cause of PCM fragmentation and the lameness associated with it, whereas in the case of marked incongruity a fragmented PCM and lameness are very common. The results of our study show that the degree of radioulnar incongruity may affect the type of FCP. Whether the cause of rather pronounced clinical symptoms is marked radioulnar incongruity in itself or a free fragment is presently the subject of further research. Assuming that the main cause of lameness in elbow joints with marked radioulnar incongruity is the free fragment, marked clinical improvement may be expected after removal of the free fragment of FCP.
The cause of the more frequent occurrence of fissured PCM in elbow joints without radiologically diagnosed radioulnar incongruity is unknown. One possible explanation is radioulnar incongruity only at the level of the PCM, which is practically impossible to diagnose radiologically. Two studies observed radioulnar incongruity at the PCM level using CT (Gemmill et al. 2005; Kramer et al. 2006); however, their authors came to completely opposite conclusions. Whereas Gemmill et al. (2005) concluded that radioulnar incongruity exists at the level of the cranial apex of the PCM but not at its base, Kramer et al. (2006) published contrasting results (incongruity at the base of the PCM). The higher frequency of fissured PCM in elbow joints without radiologically detectable radioulnar incongruity in our study rather supports the theory of incongruity at the level of the cranial apex of the PCM. A method for objective assessment of radioulnar incongruity has not so far been described (Samoy et al. 2006). Another possible explanation of the more frequent occurrence of fissured PCM in elbow joints with congruent articular surfaces rests in the insufficient sensitivity of radiography for detecting moderate radioulnar incongruity. Wind and Packard (1986), working with a small number of dogs, described radiologically detectable radioulnar incongruity as reliable starting from approximately 2 mm. Other studies consider the evaluation of radioulnar incongruity under 2.5 mm unreliable (Murphy et al. 1998; Mason et al. 2002). Wind and Packard (1986) mention that the assessment of radioulnar incongruity of the elbow joint is not affected by elbow joint positioning: slight supination and pronation of the extremity and centring the X-ray beam on the centre of the antebrachium do not lead to incorrect evaluation of either radioulnar or humeroulnar incongruity, and likewise slightly oblique imaging of the humeral condyles does not lead to misinterpretation of elbow joint congruity. In contrast, other studies (Murphy et al. 1998; Mason et al. 2002) report radiography to be an unreliable method for evaluating moderate radioulnar incongruity, especially because of incorrect positioning of the elbow, superposition of the bone structures, and assessment of a three-dimensional bone structure from a two-dimensional image. False positive radioulnar incongruity ensues from supination as well as pronation of the extremity (Murphy et al. 1998). Contrary to that, in an in vitro study, Blond et al. (2005) point out the high sensitivity of radiography in detecting radioulnar incongruity; as the most sensitive they mention the ML projection of the elbow joint with 90° flexion, with 100% sensitivity, followed by the ML projection in the standing angle (135°) with 80% sensitivity for radioulnar incongruity larger than 2 mm. With incongruity smaller than 2 mm, sensitivity was 60% for the ML flexion projection and 80% for the ML neutral projection.
Assessment of the degree of radioulnar incongruity of the elbow joint in this study is accompanied by a certain inaccuracy stemming from the imaging method, as conventional radiology does not allow direct imaging of articular cartilage. Native radiographs do not allow differentiating whether there is an actual step between the articular surfaces of the radius and ulna or whether the radioulnar step is compensated for by a thicker articular cartilage of the radial head (Holsworth et al. 2005). Likewise, X-ray examination of an extremity in an unweighted position does not allow accurate assessment of elbow joint congruity. A number of forces act upon a weighted elbow that may affect the mutual positions of the radius, ulna and humerus in the elbow joint. It has also not yet been studied how elbow joint congruity is affected by the lowered muscle tone during radiological examination under general anaesthesia. For this purpose, it would be necessary to assess elbow joint incongruity in a fully weighted extremity, e.g. from a radiograph taken with a horizontal beam. The elbow is a complex joint consisting of several articular surfaces on many levels. It is practically impossible to image both the radiohumeral and humeroulnar joints radiologically in such a way that the X-rays fall tangentially on both articular surfaces simultaneously, which further complicates a completely objective evaluation of a three-dimensional joint from a two-dimensional image. Another possible inaccuracy in the interpretation of radioulnar incongruity may arise when comparing the position of the proximal articular surface of the processus coronoideus medialis with the articular surface of the radial head. The articular surface of the processus coronoideus medialis has an oblique form and its medial margin lies below the level of the articular surface of the radial head. It is therefore practically impossible to determine from a two-dimensional image the position of the articular surface of the PCM in relation to the articular surface of the radial head.
It follows from the results of the study that the extent of the step between the radius and ulna is connected with the occurrence of certain types of FCP. In elbows with marked radioulnar incongruity, complete fragmentation of the PCM may be expected. In elbows without radiologically detectable radioulnar incongruity, a more frequent occurrence of fissured PCM may be expected. The results of our study suggest the possibility of using the assessment of radioulnar incongruity from mediolateral radiographs of the elbow to refine the radiological diagnosis of FCP, and of using it to establish the severity of the clinical finding, especially with regard to predicting the development of secondary degenerative changes in the joint as a result of the mentioned pathological processes.
Fig. 1. Radiographic measurement of radioulnar incongruity (method according to Brunnberg). Marked incongruity of the elbow joint. The arrow points to a free dislocated fragment.
Table 1 .
Absolute and relative frequencies of individual types of lesions of FCP in relation to the degree of radioulnar incongruity | 2018-12-07T17:24:16.073Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "fd0b86d4be2d643e4e50a88b7bd00faafbeeb97e",
"oa_license": "CCBY",
"oa_url": "https://actavet.vfu.cz/media/pdf/avb_2010079020307.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "fd0b86d4be2d643e4e50a88b7bd00faafbeeb97e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119393818 | pes2o/s2orc | v3-fos-license | Model predictions for azimuthal spin asymmetries for HERMES and COMPASS kinematics
We present the results for the single and double spin asymmetries in semi-inclusive deep inelastic scattering off the proton in a light-front quark-diquark model. The asymmetries generated by the T-even TMDs are discussed here. The model predictions are found to agree with the available data. We also present our model predictions for the Collins asymmetry for the future electron-ion collider experiments.
I. INTRODUCTION
Azimuthal spin asymmetries in semi-inclusive deep inelastic scattering (SIDIS) give access to the transverse momentum dependent parton distributions (TMDs). The chiral-odd TMD h 1 (x, p 2 ⊥ ) is accessed in the SSA A U T , requiring an unpolarized lepton and a transversely polarized target. The chiral-even TMD g ⊥ 1T (x, p 2 ⊥ ) describes the probability of finding a longitudinally polarized quark inside a transversely polarized proton and can be obtained from the double spin asymmetry (DSA) A LT involving a longitudinally polarized lepton and a transversely polarized proton.
Many phenomenological models have addressed the spin asymmetries. Most model calculations adopt a Gaussian ansatz for the TMDs and FFs and extract the corresponding distribution functions by fitting the asymmetry data. A simultaneous extraction of the Collins and transversity distributions was performed by Anselmino et al. [2-4] from the Collins asymmetry data of HERMES and COMPASS. The Sivers function has been extracted from Sivers asymmetry data in Refs. [5-7].
We calculate the Collins asymmetry as well as other single spin asymmetries, with the leading-twist TMDs computed in the light-front quark-diquark model (LFQDM) [8] and the fragmentation functions taken from phenomenological parametrizations [2,4,9]. We show the Collins asymmetry for the SIDIS process ℓN → ℓ′hX at µ 2 = 2.5 GeV 2 and compare it with the experimental data of COMPASS and HERMES for the π + and π − channels. One of the major challenges in comparing model results with experimental data is the scale evolution of the TMDs. To date, apart from the unpolarized TMDs, the scale evolution of the TMDs is not known.
Since the model is defined at an initial scale, the comparison of the model predictions with the data is incomplete without a proper scale evolution of the TMDs. Since the asymmetries are written as ratios of cross-sections, one may expect the scale evolution to partially cancel between numerator and denominator, so that the effect of evolution is not very large. For the Collins asymmetry we indeed observe that the effect of scale evolution partially cancels and the result does not show much scale dependence, but this is not true for all the other azimuthal asymmetries. For the asymmetries, we keep the polarized TMDs at the initial scale and apply the scale evolution only to the unpolarized TMD, which is known. The errors in the results are thus restricted to the polarized TMDs. We also compare the results when the polarized TMDs are evolved in different approximation schemes: some evolution ansatz may produce good agreement with the data for certain asymmetries but fail in other cases. Unless the proper QCD evolution of all the TMDs is known, it is not possible to favor one ansatz over the other.
A brief discussion of azimuthal asymmetries in SIDIS is given in Sec. II. The model calculation of the TMDs in the light-front quark-diquark model is discussed in Sec. III, followed by a brief account of the TMD evolution. The model calculation of the single spin asymmetries in SIDIS is discussed in Sec. IV, together with a comparison with the experimental data of HERMES and COMPASS. The model predictions for the double spin asymmetry data are presented in Sec. V.
II. AZIMUTHAL ASYMMETRIES IN SIDIS
In the QCD factorization scheme, the semi-inclusive deep inelastic scattering (SIDIS) cross-section for the one-photon-exchange process ℓN → ℓ′hX is written as a convolution, Eq. (1), in which the first factor represents the transverse momentum dependent parton distribution functions (TMDs), giving the probability of finding a struck quark of a particular polarization in the nucleon; the second factor represents the hard scattering, a point-like QED scattering mediated by a virtual photon; and the third factor contains the fragmentation functions (FFs), which describe the hadronization of the fragmenting quark. Such a scheme holds in the region of small P h⊥ and large Q, P 2 h⊥ ∼ Λ 2 QCD ≪ Q 2 . At large P h⊥ , quark-gluon corrections and higher-order pQCD corrections become important [10-12]. The TMD factorization theorem is not proven generically for all processes; however, a proof of TMD factorization has been presented for the SIDIS and Drell-Yan processes in [13,14] and later used in [15-18]. The kinematics of SIDIS is shown in Fig. 1 and the kinematic variables are defined in the γ * − N center-of-mass frame. In this frame, the struck quark and the diquark have equal and opposite transverse momenta and the produced hadron acquires a non-zero transverse momentum. The momentum of the incoming proton is P ≡ (P + , M 2 /P + , 0 ⊥ ), the virtual photon carries momentum q, and x B = Q 2 /(2P·q) is the Bjorken variable with Q 2 = −q 2 . The struck quark of momentum p ≡ (xP + , (p 2 + |p ⊥ | 2 )/(xP + ), p ⊥ ) interacts with the virtual photon, and the diquark carries the remaining momentum, with longitudinal component (1 − x)P + and transverse momentum −p ⊥ . The produced hadron carries momentum P h ≡ (P + h , P − h , P h⊥ ). We use the light-cone convention x ± = x 0 ± x 3 . The fractional energy transferred by the photon in the lab system is y, and the energy fraction carried by the produced hadron is z = P − h /k − . Here p ⊥ , k ⊥ and P h⊥ are the transverse momenta carried by the struck quark, the fragmenting quark and the produced hadron, respectively; the relation between them holds at O(p ⊥ /Q), and we consider one-photon exchange only. The transverse momentum of the produced hadron makes an azimuthal angle φ h with respect to the lepton plane, and the transverse spin S P of the proton has an azimuthal angle φ S .
FIG. 1: γ * − P center-of-mass frame: the produced hadron has a non-zero transverse momentum (P h⊥ ) in this frame and makes an azimuthal angle φ h ; the proton spin (S) has an azimuthal angle φ S . All kinematic variables are defined in the text.
In the general helicity decomposition, the polarized SIDIS cross-section is written in terms of structure functions, at kinematic order p ⊥ /Q, following Ref. [19]. Here S is the lepton polarization and S L/T P denotes the polarization of the proton, with the longitudinal (L) or transverse (T) index as a superscript. The first three terms (first line) contribute to the unpolarized cross-section and the remaining terms contribute for the different proton polarizations.
The weighted structure functions F are convolutions in which f ν (x, p ⊥ ) and D ν (z, k ⊥ ) represent the leading-twist TMDs and FFs, respectively. The convolution integral is solved assuming a Gaussian ansatz for the TMDs in several models as well as in phenomenological extractions [2,4,20].
The weighted structure functions contributing to the SSAs are written in terms of convolutions of TMDs and FFs following Ref. [19], and the structure functions contributing to the DSAs are given analogously. Here C stands for the convolution defined in Eq. (5); f 1 , h ⊥ 1L , h 1 , h ⊥ 1T , g 1L and g 1T are the leading-twist T-even TMDs, which are functions of x and p 2 ⊥ , and H ⊥ 1 is the Collins fragmentation function. The contribution of the above structure functions to the azimuthal spin asymmetries is discussed in the following sections.
In the SIDIS process, an asymmetry is observed experimentally in the angular distribution of the produced hadrons. The azimuthal asymmetries in the SIDIS process are defined as ratios of polarized cross-section differences to their sum; note that dσ ℓ(S ℓ )P (S P )→ℓhX is shorthand for the differential cross-section dσ/dx B dy dz d 2 P h⊥ dφ S of Eq. (4). Thus, using Eq. (4), the asymmetries can be expressed in terms of structure functions and then as convolutions of leading-twist TMDs and FFs. Since each structure function appears in the cross-section with a definite angular coefficient, the contribution of a single TMD can be extracted by introducing the corresponding weight factor (and integrating over φ h and φ S ) in the definition of the azimuthal asymmetry, where the function W(φ h , φ S ) is the weight factor that projects out the corresponding asymmetry.
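As an illustration of how such a weighted asymmetry is estimated in practice, a minimal sketch in Python; the event arrays and the overall normalization convention are assumptions for illustration only, and a real analysis would additionally include acceptance corrections and dilution factors:

```python
import numpy as np

def weighted_asymmetry(phi_h_up, phi_s_up, phi_h_dn, phi_s_dn, weight):
    """Estimate A^W from event samples taken with opposite target spins.

    `weight` is the projector W(phi_h, phi_S), e.g.
    lambda ph, ps: np.sin(ph + ps) for the Collins modulation.
    The factor 2 compensates <sin^2> = 1/2 over uniform azimuthal coverage.
    """
    w_up = weight(phi_h_up, phi_s_up).sum()
    w_dn = weight(phi_h_dn, phi_s_dn).sum()
    n_tot = len(phi_h_up) + len(phi_h_dn)
    return 2.0 * (w_up - w_dn) / n_tot

# Toy usage with uniformly generated angles (no built-in asymmetry -> result ~ 0):
rng = np.random.default_rng(0)
ph_u, ps_u = rng.uniform(0, 2 * np.pi, (2, 10000))
ph_d, ps_d = rng.uniform(0, 2 * np.pi, (2, 10000))
collins_weight = lambda ph, ps: np.sin(ph + ps)
print(weighted_asymmetry(ph_u, ps_u, ph_d, ps_d, collins_weight))
```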
For example, the Collins asymmetry can be extracted with the weight factor W(φ h , φ S ) = sin(φ h + φ S ) for a transversely polarized proton interacting with an unpolarized lepton beam. There are many more weighted asymmetries in the SIDIS process, some of which have been measured experimentally. Here we restrict ourselves to the asymmetries that receive contributions from the T-even leading-twist TMDs and fragmentation functions. A detailed calculation of the different SSAs and DSAs is given in Secs. IV and V.
III. MODEL CALCULATIONS
Before turning to the asymmetries, let us briefly discuss the model. Since different asymmetries receive contributions from different leading-twist TMDs and FFs, we give model predictions for the azimuthal spin asymmetries measured by the HERMES and COMPASS experiments by calculating the leading-twist TMDs in the recently proposed light-front quark-diquark model (LFQDM) [8]. In this model, the wave functions are constructed in the framework of the soft-wall AdS/QCD prediction. As mentioned before, we concentrate on the asymmetries related to the T-even TMDs at leading twist. The FFs D h/ν 1 (z, k 2 ⊥ ) and H ⊥ν 1 (z, k 2 ⊥ ) are taken as phenomenological input from Refs. [2,4,9]. The model calculation of the TMDs in the LFQDM is discussed briefly in the following subsection.
A. TMDs in LFQDM
In this subsection we briefly discuss the calculation of the leading-twist T-even TMDs in the recently proposed LFQDM [8]. In this model, the proton state is written as a two-particle bound state of a quark and a diquark with a spin-flavor SU(4) structure.
Here |u S 0⟩, |u A 0⟩ and |d A 1⟩ are two-particle states with isoscalar-scalar, isoscalar-axialvector and isovector-axialvector diquarks, respectively [10,21]. The states are written in a two-particle Fock-state expansion with J z = ±1/2 for both the scalar and the axial-vector diquarks [8]. The two-particle Fock-state wave functions are adopted from the soft-wall AdS/QCD prediction [22,23] and suitably modified. We use the AdS/QCD scale parameter κ = 0.4 GeV as determined in [24]. The parameters a ν i , b ν i and δ ν are fixed by fitting the Dirac and Pauli form factors. The quarks are assumed to be massless.
In the light-front formalism, the TMD correlator for SIDIS is defined at equal light-front time z + = 0 for the Dirac structures Γ = γ + , γ + γ 5 and iσ j+ γ 5 . Here x (x = p + /P + ) is the longitudinal momentum fraction carried by the struck quark of helicity λ; M is the proton mass, S T its transverse spin and λ N its helicity. At leading twist, the TMD correlator is connected with the corresponding TMDs for the different Dirac structures, and the transversity TMD h ν 1 (x, p ⊥ ) is obtained from the chiral-odd projection. The T-odd TMDs f ⊥ 1T and h ⊥ 1 vanish, as no gluon degrees of freedom are considered here; a one-gluon final-state interaction is needed to calculate the T-odd TMDs. The final-state interaction generates a phase term in the wave functions which gives rise to non-vanishing T-odd TMDs [25].
In this model, an explicit form of the wave functions is given in [8]. Using these in the correlator of Eq. (20) and comparing with the decompositions of Eqs. (21-23), the leading-twist T-even TMDs contributing to the SSAs are obtained explicitly [26], and the T-even TMDs contributing to the DSAs follow in the same way. The values of the model parameters a ν i , b ν i (i = 1, 2) and δ ν are given in [8] at the initial scale µ 0 = 0.8 GeV, with the AdS/QCD scale parameter κ = 0.4 GeV [24]. The pre-factors containing C j (j = S, V, V V ) and N k (j, k = S, 0, 1) are normalization constants which satisfy the quark counting rules for the unpolarized TMDs. The subscript A stands for V and V V for u and d quarks, respectively. Note that the normalization constant N d S = 0 for d quarks.
B. Fragmentation functions
We use a Gaussian ansatz for the fragmentation functions, as discussed in Refs. [2,4]. Here the hadron of momentum P h and energy fraction z = P − h /k − is produced from a fragmenting quark of momentum k. The values of the parameters and the collinear fragmentation function D h/ν 1 are taken from [4].
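For reference, the Gaussian ansatz can be coded in a few lines; the collinear part D1(z) and the Gaussian width are parametrization inputs, so the toy values below are placeholders rather than the fitted parameters of Refs. [2,4,9]:

```python
import numpy as np

def gaussian_ff(z, kT2, D1_collinear, mean_kT2=0.2):
    """Unpolarized FF under the Gaussian ansatz:
    D(z, kT^2) = D1(z) * exp(-kT^2 / <kT^2>) / (pi * <kT^2>).

    `D1_collinear` is a callable returning the collinear FF at the chosen scale;
    `mean_kT2` (GeV^2) is a placeholder Gaussian width, not a fitted value.
    """
    return D1_collinear(z) * np.exp(-kT2 / mean_kT2) / (np.pi * mean_kT2)

# Toy collinear FF, for illustration only (not a real parametrization):
toy_D1 = lambda z: 0.5 * (1.0 - z) ** 2 / z
print(gaussian_ff(0.4, 0.1, toy_D1))
```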
C. TMD evolutions
The Q 2 evolution of the unpolarized TMD and the unpolarized fragmentation functions is proposed in [15]. An extension of the unpolarized TMD evolution is presented in [16] and provides a framework for the scale evolution of spin-dependent distributions. The QCD evolution of the TMDs in coordinate (b ⊥ ) space is defined in [15,18], where F (x, b ⊥ ; µ 0 ) is the TMD at the initial scale µ 0 and the exponential factor contains the QCD evolution of the corresponding TMD. The function K(b ⊥ ; µ) is given at O(α s ) in [16,27]. We adopt a particular choice for the constant, C 1 = 2e −γ E [15,16], with the Euler constant γ E = 0.577 [27]. For SIDIS, the non-perturbative function g K (b T ) is parametrized [16,18,28] as g K (b T ) = (1/2) g 2 b 2 T with g 2 = 0.68 GeV 2 and b max = 0.5 GeV −1 ; this prescription overestimates the evolution for the Drell-Yan process, as discussed in [29]. Using Eqs. (36-38), the evolution equation (35) can be written in terms of the kernel R(µ, µ 0 , b T ); here we consider the LO evolution with the corresponding anomalous dimensions. In the kernel R(µ, µ 0 , b T ), the impact-parameter (b ⊥ ) dependence comes from the upper limit µ b of the µ integration. This evolution equation can be solved analytically by making an approximation on b ⊥ , as discussed in [18]: Eq. (38) indicates that µ b converges to a constant value in this framework. Under this approximation the kernel R(µ, µ 0 , b T ) reduces to R(µ, µ 0 ) and the evolution equation can be integrated analytically. We compare the evolution of f ν 1 produced by the two kernels R(µ, µ 0 , b T ) and R(µ, µ 0 ) at the scale µ 2 = 2.5 GeV 2 , taking a fixed value x = 0.1, and observe a negligible difference between the QCD evolution generated with the R(µ, µ 0 , b T ) kernel and with the reduced kernel R(µ, µ 0 ), as shown in Fig. 2.
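A minimal numerical sketch of the ingredients quoted above; the b* form of µ_b is an assumption based on the standard CSS prescription (the defining equation is not reproduced here), while the constants C1 = 2e^(−γE), g2 = 0.68 GeV² and b_max = 0.5 GeV⁻¹ are those stated in the text:

```python
import numpy as np

GAMMA_E = 0.5772156649
C1 = 2.0 * np.exp(-GAMMA_E)      # as quoted in the text
G2 = 0.68                        # GeV^2, non-perturbative parameter
B_MAX = 0.5                      # GeV^-1

def b_star(bT):
    """Assumed standard b* regulator: freezes bT at B_MAX for large bT."""
    return bT / np.sqrt(1.0 + (bT / B_MAX) ** 2)

def mu_b(bT):
    """Scale mu_b = C1 / b*(bT); converges to a constant C1/B_MAX at large bT."""
    return C1 / b_star(bT)

def g_K(bT):
    """Non-perturbative function g_K(bT) = g2 * bT^2 / 2 used for SIDIS."""
    return 0.5 * G2 * bT ** 2

def np_evolution_factor(bT, mu, mu0):
    """Non-perturbative part of the evolution factor between mu0 and mu,
    in the commonly assumed exponentiated form exp[-g_K(bT) ln(mu/mu0)];
    the perturbative Sudakov from the anomalous dimensions would multiply this."""
    return np.exp(-g_K(bT) * np.log(mu / mu0))

for bT in (0.5, 1.0, 2.0):
    print(bT, mu_b(bT), np_evolution_factor(bT, mu=np.sqrt(2.5), mu0=0.8))
```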
Therefore, we calculate the SSAs at the scale µ 2 = 2.5 GeV 2 by evolving the TMDs with the reduced QCD evolution and compare with the experimental data, with the FFs adopted from the phenomenological parametrization at the same scale µ 2 = 2.5 GeV 2 .
IV. SINGLE SPIN ASYMMETRIES IN LFQDM
The single spin asymmetry (SSA) is measured when the target is polarized with respect to the beam direction. In the SIDIS process, the SSA associated with an unpolarized lepton beam (U) and a transversely polarized proton target (T) is defined with ↑, ↓ superscripts on P denoting the transverse spin of the target proton. From Eq. (4) we can write the numerator of A U T as a sum of terms. The first term corresponds to the Sivers asymmetry, which receives contributions from the Sivers function (f ⊥ν 1T ) and the unpolarized FFs. The second term corresponds to the Collins asymmetry, which receives contributions from the transversity TMD (h ν 1 ) and the Collins fragmentation function (H ⊥h/ν 1 ).
The third term receives a contribution from the pretzelosity distribution (h ⊥ν 1T ). The fourth and fifth terms receive contributions from multiple TMDs and FFs. Among these five SSAs, only two involve T-even TMDs and will be discussed here.
From Eq. (4), the denominator can be written in terms of the unpolarized structure functions. We extract the Collins asymmetry by introducing the appropriate weight factor sin(φ h + φ S ) in Eq. (17) and writing it in terms of structure functions. The Collins asymmetry provides a correlation between the transverse polarization of the fragmenting quark in a transversely polarized proton and the transverse momentum of the final hadron. Since helicity is conserved in the hard process, the chiral-odd TMD h 1 (x, p ⊥ ) has to be convoluted with a chiral-odd FF, which is the Collins function. Unlike the Sivers function, which differs by a sign between the SIDIS and Drell-Yan processes, the Collins function is the same in both processes. In the SIDIS process we consider, a transversely polarized quark is scattered out of a transversely polarized proton with the probability provided by the transversity distribution h 1 (x, p ⊥ ) and fragments into a hadron with the probability given by the Collins function H ⊥ 1 (z, k ⊥ ). The transverse polarization of the initial proton is transferred to the final state by the hard scattering, which produces an azimuthal spin asymmetry of the final hadron about the "jet axis".
The azimuthal dependence of the structure function F sin(φ h +φ S ) U T , given in Eq. (9), can be written in terms of the azimuthal angle φ h q involved in the fragmentation process; φ h q is the azimuthal angle of the produced hadron with respect to the fragmenting-quark helicity frame and is defined at O(p 2 ⊥ /Q 2 ) as in [19]. The pre-factors in the denominator and numerator of Eq. (46) are the planar elementary hard cross-sections. The Collins asymmetry defined in Eq. (46) is a function of the variables x, z, P h⊥ and y.
The single spin asymmetry associated with a longitudinally polarized proton is defined with →, ← denoting the longitudinal spin of the proton along its momentum. From Eq. (4), the numerator of A U L can be written in terms of two structure functions, both of which receive contributions from the h ⊥ν 1L TMD and the Collins FFs; the associated asymmetries follow accordingly. Using the TMDs from Eqs. (25)-(27) and the FFs from Eqs. (31,32) in Eqs. (6)-(10), the structure functions can be evaluated in this model, and the explicit expressions of the single spin asymmetries (i)-(iv) in the LFQDM are obtained. The pre-factor C A stands for C V and C V V for u and d quarks, respectively.
A. Predictions for COMPASS and HERMES
All the above asymmetries are functions of x, z, P h⊥ , y and the scale µ, whereas the experimental measurements provide the variation of the integrated asymmetry with one variable at a time. Therefore one has to integrate the numerator and denominator separately over all the other variables except the one variable reported in the data. Also, to compare with the experimental data, the x, y, z dependent factors that would cancel in the unintegrated ratio must be kept unchanged in both the numerator and the denominator.
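In practice this amounts to integrating the numerator and denominator separately on a grid (or with a quadrature rule) over all variables except the one being plotted; a minimal sketch in which `numerator_density` and `denominator_density` are hypothetical callables standing in for the model expressions:

```python
import numpy as np

def integrated_asymmetry(numerator_density, denominator_density,
                         x_lim, z_lim, pT_lim, y_lim, n=40):
    """Ratio of the separately integrated numerator and denominator.

    Both callables take (x, z, pT, y) on broadcastable grids and return the
    corresponding differential densities; the kinematic limits are (lo, hi)
    tuples. A simple grid sum is used for illustration; the common volume
    element cancels in the ratio because both integrals use the same grid.
    """
    grids = [np.linspace(lo, hi, n) for lo, hi in (x_lim, z_lim, pT_lim, y_lim)]
    X, Z, PT, Y = np.meshgrid(*grids, indexing="ij")
    num = numerator_density(X, Z, PT, Y).sum()
    den = denominator_density(X, Z, PT, Y).sum()
    return num / den

# Toy densities (placeholders, not the model expressions):
num_toy = lambda x, z, pT, y: 0.05 * x * z * np.exp(-pT ** 2)
den_toy = lambda x, z, pT, y: np.ones_like(x)
print(integrated_asymmetry(num_toy, den_toy, (0.023, 0.4), (0.2, 0.7),
                           (0.0, 1.0), (0.1, 0.95)))
```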
An estimate of the integrated asymmetry can be obtained by integrating over all the variables x, z, P h⊥ and y within the corresponding kinematical limits, e.g. those of the HERMES experiment [30].
(Displaced figure caption: Collins asymmetry compared with HERMES data [30]. The upper and lower rows correspond to the π + and π − channels; the first, second and third columns show the variation of the asymmetry with x, z and P h⊥ . Red continuous lines (yellow error regions) represent the model result when f ν 1 is evolved with the QCD evolution [15,18] to the scale µ 2 = 2.5 GeV 2 ; blue dashed lines represent the model result when the TMDs are evolved with the parameter evolution approach [8]. In both cases h 1 remains at the initial scale and the FFs are taken from the parametrization [4,9] at µ 2 = 2.5 GeV 2 .)
The kinematical limits for the variables in the HERMES experiment apply to both the π + and π − channels. The experimental data are available only for particular values of the kinematical variables, so a direct comparison is not possible; nevertheless, the signs of the different asymmetries evaluated in the model are consistent with the data. The amplitudes of the asymmetries are calculated following the same strategy, i.e. f ν 1 is evolved with the QCD evolution to µ 2 = 2.5 GeV 2 while the polarized TMDs involved in the numerator remain at the initial scale [15,16] (see Sec. III C). A reduced form of the QCD evolution is proposed in [18] and adopted for the evolution of spin-dependent TMDs, e.g. the Sivers function; similarly one could adopt this QCD evolution for all the polarized TMDs and predict the asymmetries. To understand this qualitatively, we compare our result for the Collins asymmetry in three different schemes (shown in Fig. 3): (i) f ν 1 at µ 2 = 2.5 GeV 2 and h ν 1 at the initial scale, (ii) both f ν 1 and h ν 1 at µ 2 = 2.5 GeV 2 , and (iii) both f ν 1 and h ν 1 at the initial scale µ 2 0 . Interestingly, scheme (i) gives the best result among the three. The contribution of the evolution of h ν 1 is very small for the Collins asymmetry (scheme (ii)); note, however, that for other asymmetries, e.g. A sin(3φ h −φ S ) , scheme (ii) deviates substantially from the data. Therefore we evolve the unpolarized TMD f ν 1 , whose evolution is known and which contributes to the denominator of the asymmetries, while all the spin-dependent TMDs, whose evolutions are not well known and which enter the numerators of the asymmetries, are kept at the initial scale. Not only does this strategy give better agreement with the data, it also limits the uncertainty to the numerators of the asymmetries. A similar strategy is used in [20].
The Q 2 range of the HERMES bins is 1.3 GeV 2 < Q 2 < 6.2 GeV 2 and the average Q 2 value of the HERMES experiment is around 2.4 GeV 2 . We use the LO parametrization for D h/ν 1 and H ⊥ν 1 at the scale 2.5 GeV 2 [9,18], and therefore evolve f ν 1 to the same scale (µ 2 = 2.5 GeV 2 ) to give model predictions for the Collins asymmetry as well as for the other azimuthal asymmetries.
We perform the evolution of f ν 1 in two different approaches: the QCD evolution approach discussed in Sec. III C, and the parameter evolution approach of the LFQDM proposed in [8]. In the parameter evolution approach, the parameters of the LFQDM are allowed to evolve so as to generate the DGLAP evolution of the unpolarized PDFs, and the same evolution of the parameters is used to estimate the TMD evolution. The information of the DGLAP evolution is thus encoded in the parameters, and the TMDs are expected to follow a DGLAP-like evolution in this approach.
Our model predictions for the Collins asymmetry are shown in Fig. 4 and compared with the HERMES data for the kinematics 0.023 ≤ x ≤ 0.4, 0.2 ≤ z ≤ 0.7 and 0.1 ≤ y ≤ 0.95. The upper row is for the π + and the lower row for the π − production channel; the first, second and third columns show the x, z and P h⊥ variations of the Collins asymmetry, respectively. The red continuous lines represent the model prediction when f ν 1 is evolved with the QCD evolution of [15,18] (see Sec. III C), and the corresponding error is shown in yellow.
(Displaced figure caption: the yellow regions give the errors for the π + (upper row) and π − (lower row) channels; the first, second and third columns show the x, z and P h⊥ variations; the colors and symbols have the same meaning as in Fig. 4; data are measured by the HERMES collaboration [33].)
The error corridors come from the uncertainties in the parameters of the TMDs (initial-scale error) and of the FFs; the error coming from the LFQDM itself is small, and the large contributions come from the uncertainties in the FF parameters [4]. The model predicts the qualitative behavior of the asymmetries and agrees with the data within the error bars, and we expect that when the QCD evolution of all the TMDs and FFs is correctly incorporated, the agreement with the data will improve. The blue dashed lines represent the model prediction when the TMD f 1 is evolved with the parameter evolution approach [8]; the error corridor for the blue dashed lines is not shown to avoid cluttering the plot. Since a well-defined QCD evolution for the transversity distribution is not available, we evolve the unpolarized TMD only and restrict the uncertainty to the numerator of the Collins asymmetry. Using the same strategy in the parameter evolution approach, we observe excellent agreement with the experimental data (blue dashed lines). The model results for the Collins asymmetry in the π + and π − channels are positive and negative, respectively, as found in the experimental measurements. In this model, the integrated Collins asymmetries (in the HERMES kinematics) are 0.0236 and -0.0364 for the π + and π − channels, respectively (see Table I).
In Fig. 5, the model result for the Collins asymmetry is compared with the COMPASS data corresponding to the kinematics 0.003 ≤ x ≤ 0.7, 0.2 ≤ z ≤ 1.0 and 0.1 ≤ y ≤ 0.9. All colors and symbols have the same meaning as in Fig. 4. We observe that our model prediction for the Collins asymmetry is quite reasonable; as for HERMES, the agreement of the model prediction for the variation with P h⊥ is not as good. The parameter evolution approach (blue lines) again shows excellent agreement with the COMPASS data. In this model, the integrated asymmetries (in the COMPASS kinematics) are 0.0374 and -0.0534 in the π + and π − channels, respectively (see Table I).
The model prediction for the single spin asymmetry A sin(3φ h −φ S ) U T is shown in Fig. 6 and compared with HERMES data [32] (colors and symbols as in Fig. 4). This asymmetry involves the pretzelosity distribution and characterizes the p ⊥ dependence of the transverse quark polarization in a transversely polarized proton. The pretzelosity TMD is linked to the non-spherical shape of the proton and to quark orbital angular momentum. This asymmetry is suppressed by a factor of P 2 h⊥ /M 2 and is hence expected to be very small for small transverse momentum of the outgoing hadron, |P h⊥ | < M , where M is the proton mass (see Eq. (63)). The experimental results show that the asymmetries as functions of x, z or P h⊥ are nearly equal to zero, as shown in Fig. 6. Our model also predicts almost negligible asymmetries for both channels; as a result, the integrated asymmetries (Eq. (75)) are also very small and are found to be -0.0011 and 0.0015 for the π + and π − channels, respectively.
The model prediction for the corresponding SSA A U L is shown in Fig. 7 and compared with the HERMES data [33] for the π + and π − production channels. The colors and symbols are the same as in Fig. 4. This asymmetry receives a contribution from the h ⊥ν 1L (x, p 2 ⊥ ) TMD, see Eq. (65). The integrated asymmetries are 0.0336 and -0.0518 in the π + and π − channels, respectively.
Note that the parameter evolution is a model to reproduce the DGLAP evolution of the PDFs, but it is found to work well to reproduce the SSAs too. The TMDs are known not to follow the DGLAP evolution, and the same parameter evolution is not expected to reproduce their evolutions. But in the SSAs, which involve ratios of different TMDs and fragmentation functions, it seems to work fine which might be due to partial cancellations of the evolution effects. Proper QCD evolutions of all the TMDs and FFs are required for more accurate predictions of the asymmetries at the experimental scales.
B. Prediction for EIC
The upcoming Electron Ion Collider (EIC) [34] is designed to use several existing facilities to probe both DIS and SIDIS over a wide range of kinematics and beam polarizations, and is expected to provide much deeper insight into hadron structure. Here we present the model predictions for the Collins asymmetry for the EIC kinematics [35]: 0.05 < P h⊥ < 1, 0.01 < y < 0.95, at the center-of-mass energy √ s = 45 GeV. The predictions for the Collins asymmetry A sin(φ h +φ S ) U T at µ 2 = 100 GeV 2 are shown in Fig. 8. Note that the future EIC will explore much smaller values of x, as can be seen from the plots. The upper panel in Fig. 8 shows the results for the π + channel while the lower panel is for the π − channel; the asymmetries are predicted to be sizable in both channels.
V. DOUBLE SPIN ASYMMETRIES IN LFQDM
The double-spin asymmetry is observed when both the lepton beam and the target proton are polarized and only the proton polarization flips. The DSAs associated with a longitudinally polarized lepton beam are defined with the target proton either longitudinally polarized (S L ≡ →) or transversely polarized (S T ≡ ↑). For a longitudinally polarized proton, from Eq. (4), the numerator can be written in terms of the structure functions, of which the first two contribute to the double spin asymmetries; the DSAs for a longitudinally polarized proton and lepton beam are defined in terms of these structure functions. The double spin asymmetry with a longitudinally polarized lepton and a transversely polarized proton is defined analogously: from Eq. (4) the numerator is written in terms of structure functions, and the weighted DSAs for a transversely polarized proton follow.
(Displaced figure caption: results for the π + (upper row) and π − (lower row) channels compared with the preliminary HERMES data [37]; the first, second and third columns show the x, z and P h⊥ variations; the red continuous lines (yellow error regions) indicate the same as in Fig. 9.)
The structure functions in this model can be evaluated explicitly, and thus the explicit form of the double spin asymmetries in this model follows. All the DSAs are functions of x, z, P h⊥ and y at a scale µ. The DSAs A LL and A cos φ h LL receive contributions from the helicity TMD g ν 1L , while the other three DSAs receive contributions from the worm-gear TMD g ν 1T . The model prediction for the cos(φ h − φ S ) weighted double spin asymmetry for a longitudinally polarized lepton and a transversely polarized proton is shown in Fig. 9. The error band is very small in this case and is shown by the yellow region. Our results show reasonably good agreement with the HERMES data. This asymmetry is found to be slightly positive for both the π + and π − channels, as observed by the HERMES experiment [36,37]; a positive asymmetry for the π − channel is also found in the Hall-A results on a transversely polarized 3He target. In our model, the integrated asymmetries are very small, 0.0093 and 0.0032 for the π + and π − channels respectively (see Table II), and are consistent with the experimental data.
The model result for the DSA shown in Fig. 10 is compared with the HERMES measurement [37]; the colors and notations are the same as in Fig. 4. In the HERMES measurement, this asymmetry is found to be nearly equal to zero for both the π + and π − channels. Our model also shows an almost zero asymmetry for the x variation, whereas a slightly positive asymmetry is observed for the P h⊥ variation. Note that the model error is very small and is shown by the yellow region. The integrated asymmetries are given in Table II. In the SIDIS process, the integrated DSA (integrated over transverse momentum) A P LL (x, z, µ) measured by the HERMES collaboration is defined in terms of the helicity PDFs. The model result for the x variation of A P LL (x, z, µ) is shown in Fig. 11 and compared with the HERMES result [38] for the π + and π − channels. We take the bin-averaged value z = 0.46 of the HERMES experiment. All the distributions in Eq. (101) are taken at the scale µ 2 = 2.5 GeV 2 . Since the parameter evolution is consistent with the DGLAP evolution, the helicity PDF and the unpolarized PDF are evolved with the parameter evolution approach.
(Displaced figure caption: experimental data [38-40] are denoted in blue with experimental error bars; the red dash-dotted line represents the asymmetry when all the distributions (f 1 and g 1 ) are at the initial scale µ 0 .)
If no hadron is observed in the final state, the double spin asymmetry for the proton is given in terms of the PDFs only (with no contribution from the FFs). In this model, the variation of A P 1 with x is shown in Fig. 12 and compared with the experimental data [38-40]. The red dot-dashed line represents the asymmetry when both the PDFs f 1 (x) and g 1 (x) are at the initial scale µ 0 . The red data points represent the model results corresponding to the sets of x and µ values measured experimentally at EMC, E134 and HERMES [38-40]. Since the A 1 asymmetry involves the PDFs, we use the parameter evolution approach (which is consistent with the DGLAP evolution) for the scale evolution. Since the evolution of the PDFs is well known, the model predictions are, as expected, in good agreement with the data.
VI. RELATIONS
From Fig. 13 we can write a model-dependent inequality that can be considered a Soffer-bound-type relation for the asymmetries, providing an upper limit for the Collins asymmetry in the SIDIS process. Similarly, Fig. 17 provides an upper bound for A sin(3φ h −φs) U T (P h⊥ ). In this model, relations among the SSAs and DSAs can be written down; the corresponding ratios for the π + channel are found to be smaller than for the π − channel, as shown in Fig. 15. One possible reason for this result is that, apart from many other factors, the SSA/DSA ratios involve the ratio of the fragmentation functions H ⊥ 1 (k ⊥ )/D 1 (k ⊥ ), and this ratio for the u quark is smaller than that for the d quark (Fig. 16).
Since u → π + and d → π − are the favored fragmentations, this suggests that the SSA/DSA ratio for the π + channel should be smaller than for the π − channel. Note that the ratio of fragmentation functions accounts for about a factor of 1.5, whereas |A sin(φ h ) U L (P h⊥ )/A cos(φ h ) LL (P h⊥ )| for π − is about twice that for the π + channel, and the ratio |A
VII. CONTRIBUTION OF uu DIQUARK
The role of ss diquarks was recently emphasized [41] in studies of heavy-baryon spectroscopy. It is therefore instructive to explore the role of a diquark containing light identical quarks. To do so, we compared the results with and without uu diquarks (setting C V V = 0).
Although the results do not change significantly, some disagreement in the z dependence of the Collins asymmetry for π − mesons can be observed (Fig. 17).
VIII. SUMMARY AND CONCLUSION
Azimuthal spin asymmetries are very important for understanding the three-dimensional structure of the proton, and there are many experimental measurements as well as theoretical model predictions for these asymmetries. Here we have presented the results for both the single and the double spin asymmetries associated with the T-even TMDs in a light-front quark-diquark model of the proton for SIDIS processes in both the π + and π − channels, together with predictions for both channels for the EIC experiments.
The double spin asymmetry A p 1 in DIS depends on the PDFs rather than on the TMDs, and our model predictions for A p 1 show excellent agreement with the data. When the lepton beam is longitudinally polarized but the proton is transversely polarized, both asymmetries computed in the model are consistent with the experimental data and are found to be almost zero. In contrast, the DSA A LL , for which both the proton and the lepton beam are longitudinally polarized, is quite large for both the π + and π − channels, as also predicted by our model.
We have explored different relations among the SSAs and DSAs and found an inequality similar to Soffer bound for PDFs. It will be interesting to see if similar relations are also found in other models. | 2017-11-06T06:42:26.000Z | 2017-11-06T00:00:00.000 | {
"year": 2017,
"sha1": "dd93ddd7c8d0e1b47ec4a65ddb0e436ca4f13418",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1711.01746",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "dd93ddd7c8d0e1b47ec4a65ddb0e436ca4f13418",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
55193887 | pes2o/s2orc | v3-fos-license | A DFT based equilibrium study of a chemical mixture Tachyhydrite and their lower hydrates for long term heat storage
Chloride-based salt hydrates are promising materials for seasonal heat storage. However, hydrolysis, a side reaction, deteriorates their cycle stability. To improve the kinetics and durability, we have investigated the optimum operating conditions of a chemical mixture of CaCl2 and MgCl2 hydrates. In this study, we apply GGA-DFT to gain insight into the various hydrates of CaMg2Cl6. We have obtained the structural properties, atomic charges and vibrational frequencies of the CaMg2Cl6 hydrates. The entropic contribution and the enthalpy change are quantified from the ground-state energy and the harmonic frequencies. Subsequently, the change in the Gibbs free energy of thermolysis was obtained over a wide range of temperature and pressure. The equilibrium product concentrations of thermolysis can be used to design seasonal heat storage systems under different operating conditions.
Introduction
The variation of solar radiation with geography and on daily and seasonal time scales is the major concern for its large-scale utilization. The solution to this intermittency is solar energy storage. Solar energy can be stored in the form of chemical energy (in batteries) or in the form of thermal energy. Thermal energy can be stored as sensible heat, latent heat or in thermochemical form. The thermochemical form has the highest energy density compared to the other thermal forms and is therefore suitable for compact heat storage. Salt hydrates, carbonates and hydroxides are major classes of promising thermochemical materials (TCMs) for seasonal heat storage [1,2,3]. Salt hydrates store energy via a reversible physico-chemical reaction in which they undergo dehydration and hydration for the charging and discharging cycles, respectively. Chloride-based salt hydrates are one class of TCMs with good storage capacity and fast kinetics. Hydrolysis is an irreversible side reaction which appears in chloride-based hydrates (usually in MgCl2 hydrates); it produces corrosive HCl gas and affects the durability of chloride-based salt hydrates.
Mixtures of salt hydrates are used to improve the kinetics and hinder undesired side reactions. Hydrolysis can be hindered by mixing with other halides [4]. To decrease the dehydration temperature of Mg(OH)2 and Ca(OH)2, various salts such as chlorides, acetates, sulphates and nitrates have been used as dopants, and nitrates were found to be effective in decreasing the dehydration temperature [6]. The chemical dopant has been chosen based on either a scientific approach or a trial-and-error approach: in the scientific approach the dopant is chosen to have a structure similar to the base material, while in the second method the choice is based on chemical intuition [7]. Posern and Kaps observed that a mixture of 20% MgSO4 hydrate and 85% MgCl2 hydrate has a better hydration rate and a higher temperature lift [5]. Rammelberg et al. [20] examined many salt hydrate mixtures to overcome the operational challenges of TCMs such as agglomeration, slow kinetics and limited durability. They observed that the mixture of CaCl2 and MgCl2 hydrates has faster kinetics and better durability than the individual salt hydrates.
Tachyhydrite (CaMg2Cl6·12H2O) is a chemical mixture of CaCl2 and MgCl2 hydrates. It is a naturally occurring hygroscopic material, present in evaporite deposits and Cretaceous potash formations [9,10], and has shown a potential application in the chemical conversion of spodumene from the α to the β form [11]. The salt hydrates of CaCl2, MgCl2 and MgSO4 have been explored using DFT-GGA [16,17,18]. In the present study, we obtain the thermolysis of the various hydrates of CaMg2Cl6 over a wide range of temperature and pressure. The relative stability of the hydrates is analyzed by means of DFT calculations, and we obtain the Bader charges and vibrational frequencies for the stable configurations. Subsequently, the equilibrium product concentrations of the dehydration and hydrolysis reactions are obtained in various temperature and pressure regimes. These equilibrium curves can be used to predict the operating range and the onset temperature of HCl formation (hydrolysis) for the various hydrates of CaMg2Cl6, and can be used to design seasonal heat storage systems.
Methodology
In the present study, all the gaseous molecules of the CaMg2Cl6 hydrates are optimized with DFT using the Perdew-Wang exchange-correlation functional (PW91) [15] under the generalized gradient approximation (GGA) [13], as implemented in the Amsterdam Density Functional (ADF) program [12]. A spin-restricted Kohn-Sham method is used with a double-polarized triple-ζ basis set, keeping the maximum integration accuracy.
CaMg2Cl6 hydrates can undergo a cycle of hydration/dehydration reactions. The plausible dehydration reaction of the CaMg2Cl6 hydrates can be described, analogously to the CaCl2 and MgCl2 hydrates, as CaMg2Cl6·nH2O + heat ⇌ CaMg2Cl6·(n−1)H2O + H2O (1). Hydrolysis, an irreversible and undesirable side reaction that produces HCl, may compete with dehydration in the thermolysis of tachyhydrite. To investigate the equilibrium conditions of these reactions, it is essential to obtain the Gibbs free energy (G) of each component at a given pressure (p) and temperature (T). The Gibbs free energy of a molecule is defined as G = E_tot − T·S_tot + pV, where the partial contributions include the total energy (E_tot), the configurational entropic contribution (T·S_tot) and the pV term. These terms can be further partitioned as E_tot = E_gr + E_ZPE + E_trans + E_rot + E_vib and S_tot = S_trans + S_rot + S_vib, where E_rot, E_trans, E_vib, E_ZPE and E_gr are the rotational, translational, vibrational, zero-point and electronic ground-state energies, respectively, and S_trans, S_rot and S_vib are the translational, rotational and vibrational contributions to the entropy. The optimized geometry, the harmonic frequencies and the electronic ground-state energy are required to obtain the Gibbs free energy of a molecule under the ideal polyatomic gas assumption [21]. The ΔG of a chemical reaction can be expressed as ΔG = G_prod − G_reac, where G_prod and G_reac are the Gibbs free energies of the products and reactants, respectively. The equilibrium concentrations of products and reactants can be obtained by equating ΔG to zero. The physical states of the reactants and products are essential for calculating the ΔG of a reaction. Experimentally, CaMg2Cl6·12H2O remains in the solid phase, while H2O and HCl exist in the gaseous phase. The physical states of the lower hydrates remain unknown and are computationally challenging to treat in the solid phase. The Gibbs free energy of the crystalline phase of the MgCl2 hydrates was found to be lower than the gas-phase energy of the MgCl2 hydrates [16,19]; although these calculations were carried out for a different solid salt hydrate, the same should hold true for CaMg2Cl6·12H2O. The hydrolysis reaction usually occurs in the liquid-phase salt hydrate mixture [16]. As the solid phases of the lower hydrates of CaMg2Cl6 are experimentally unexplored, ΔG can be estimated from the Gibbs free energy of the gaseous phase (G_gas). The equilibrium thermodynamic study of the thermolysis of CaMg2Cl6 hydrates under the gas-phase assumption will be considered as a safety limit for these reactions in seasonal heat storage systems.
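To make the bookkeeping explicit, a minimal sketch of the harmonic (vibrational) contributions to G for an ideal polyatomic gas; the translational and rotational terms follow the standard ideal-gas expressions and are omitted here for brevity, and the frequencies and ground-state energy below are placeholders rather than the computed CaMg2Cl6 hydrate values:

```python
import numpy as np

K_B = 8.617333e-5          # Boltzmann constant, eV/K
CM1_TO_EV = 1.239842e-4    # h*c in eV*cm: converts wavenumbers (cm^-1) to eV

def vibrational_terms(freqs_cm, T):
    """Zero-point energy, thermal vibrational energy and vibrational entropy
    in the harmonic approximation. freqs_cm: real harmonic frequencies (cm^-1)."""
    e = CM1_TO_EV * np.asarray(freqs_cm)    # mode energies in eV
    x = e / (K_B * T)
    e_zpe = 0.5 * e.sum()
    e_vib = (e / np.expm1(x)).sum()
    s_vib = K_B * (x / np.expm1(x) - np.log(1.0 - np.exp(-x))).sum()
    return e_zpe, e_vib, s_vib

def gibbs_sketch(e_ground, freqs_cm, T):
    """G ~ E_gr + E_ZPE + E_vib - T*S_vib + k_B*T (ideal-gas pV per molecule).
    Translational and rotational energy/entropy terms must be added separately."""
    e_zpe, e_vib, s_vib = vibrational_terms(freqs_cm, T)
    return e_ground + e_zpe + e_vib - T * s_vib + K_B * T

# Placeholder ground-state energy (eV) and frequencies (cm^-1):
print(gibbs_sketch(-100.0, [50.0, 300.0, 1600.0, 3600.0], T=300.0))
```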
Structure of various hydrates
The initial geometry of tachyhydrite is taken from the known experimental structure [9]. The geometry optimization is carried out in the DFT formalism under the GGA approximation for the gaseous CaMg2Cl6·12H2O molecule. The optimized structure of CaMg2Cl6·12H2O is symmetric, as shown in Figure 1(a). Each Mg atom is hydrated by six H2O molecules and forms a distorted octahedron; these two distorted octahedra are connected via a bridging octahedron made of CaCl6. The optimized structure is similar to the experimental crystal [9]. The Bader atomic charges on the O atoms of the hydration water facing the CaCl6 bridge are 0.12 more negative, and their Mg-O bonds 0.09 Å shorter, than those of the non-facing O atoms (the H2O molecules at the extreme left and right). The Mg-Cl coordination lengths and the atomic charges on Ca, Mg and Cl are symmetric in both of these distorted octahedra. The Mg-O coordination lengths are 2.07 and 2.15 Å, while the experimental crystalline-phase Mg-O coordination lengths are 2.10 and 2.01 Å, showing good agreement between the experimental crystal structure and the DFT-optimized structure. The structures of the lower hydrates are obtained by successive removal of two H2O molecules and re-optimization of the structure in GGA-DFT. The Mg-Cl coordination length decreases continuously with the hydration number and reaches 2.3 Å for CaMg2Cl6·2H2O, as shown in Figure 1(b). The hydration strength of the Mg atom progressively increases as the hydration number decreases. The electropositive atomic charges on the Ca and Mg atoms decrease from 1.726 to 1.63 and from 1.549 to 1.530, respectively, as the hydration number varies from 12 to 1. The average negative charge on the Cl atoms decreases from -0.75 to -0.80 as the hydration number decreases from 12 to 2. The atomic charge distribution suggests that electrostatic interactions play a major role in the stability and hydration of these hydrates.
Enthalpy change in thermolysis
We have obtained the binding enthalpy (E_bind) of the various salt hydrates, the enthalpy change of dehydration per mole of released water (E_dehyd), and the enthalpy change of hydrolysis (ΔE_hydro) from the computed ground-state energies. The binding enthalpy of the salt hydrates increases monotonically with the hydration number, as shown in Table 1, indicating that the hydration process becomes energetically more favorable as the hydration number increases. The change in enthalpy of dehydration per mole of released water is largest for the mono-hydrate and keeps decreasing up to the octa-hydrate; it shows a sudden peak for the deca-hydrate, bringing it to the level of the hexa-hydrate, and decreases again for the dodeca-hydrate, becoming almost equal to that of the octa-hydrate. Intramolecular hydrogen bonding may be a plausible reason for the exceptional stability of the deca-hydrate. Another observation is that the enthalpy change of hydrolysis increases continuously with the hydration number, implying that hydrolysis is relatively more difficult for the higher hydrates.
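A minimal sketch of how such enthalpy changes can be tabulated from computed total energies; the sign conventions and the placeholder energies are assumptions for illustration, and since the hydrolysis products are not specified here only the binding and dehydration quantities are shown:

```python
def binding_enthalpy(E_hydrate_n, E_anhydrous, E_water, n):
    """E_bind(n): energy gained by binding n water molecules to the anhydrous
    salt, taken positive for favorable hydration (sign convention assumed)."""
    return -(E_hydrate_n - E_anhydrous - n * E_water)

def dehydration_enthalpy_per_water(E_hydrate_n, E_hydrate_m, E_water, n, m):
    """Enthalpy change per mole of released water for
    salt.nH2O -> salt.mH2O + (n - m) H2O, with n > m."""
    return (E_hydrate_m + (n - m) * E_water - E_hydrate_n) / (n - m)

# Placeholder ground-state energies (eV), not the computed values:
E = {0: -50.0, 2: -82.0, 4: -113.5, 12: -238.0}
E_H2O = -14.0
print(binding_enthalpy(E[12], E[0], E_H2O, 12))
print(dehydration_enthalpy_per_water(E[4], E[2], E_H2O, 4, 2))
```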
Atomistic thermodynamic equilibrium study
In the present study, we have obtained the equilibrium compositions (partial pressures) of the products formed during the thermolysis (dehydration/hydrolysis) of the various hydrates of CaMg2Cl6. We obtained the Gibbs free energy of each reactant and product from atomistic DFT calculations, and the equilibrium compositions are obtained by equating ΔG to zero. In heat storage systems, the partial pressure of water (p_H2O), the partial pressure of HCl (p_HCl) and the temperature (T) are the controlling variables, while the partial pressure of the salt hydrates is kept constant (1 atm). Thus, the partial pressure of the salt hydrate is kept constant in all the calculations.
Dehydration reaction of salt hydrates
Dehydration is an endothermic reaction in which hydrate molecules absorb thermal energy and disintegrate into a lower hydrate or the anhydrous form. To understand the effect of temperature on the dehydration reactions of the CaMg2Cl6 hydrates, the equilibrium products of the dehydration reactions are investigated in the temperature range of 100 to 1000 K. The equilibrium temperature-vapor pressure relation obtained from the dehydration reactions of the CaMg2Cl6 hydrates is shown in Figure 2.
For seasonal heat storage, the typical operating temperatures are about 273-500 K. We have represented the dehydration reactions falling in this range in green, and those outside this range in red. The dodeca- and octa-hydrates of CaMg2Cl6 dehydrate at very low temperature (< 200 K); this behavior is consistent with their low dehydration enthalpy change, as given in Table 1.
Figure 2. Equilibrium product concentrations for the dehydration reactions of CaMg2Cl6 hydrates at various temperatures and constant partial pressure of the hydrate, p o = 1 atm.
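A minimal sketch of how such an equilibrium curve can be generated: assuming ideal-gas behaviour and that only the water vapor partial pressure deviates from the 1 atm reference (the hydrate pressures being fixed, as stated above), setting ΔG = ΔG°(T) + RT ln(p_H2O/p°) = 0 gives the equilibrium vapor pressure. The callable `delta_g_standard` is a hypothetical stand-in built from the molecular Gibbs free energies of the previous section:

```python
import numpy as np

R = 8.314462618e-3   # gas constant, kJ/(mol K)

def equilibrium_p_h2o(delta_g_standard, T, p_ref=1.0):
    """Equilibrium water vapor pressure (atm) from dG = dG0(T) + R*T*ln(p/p0) = 0.

    delta_g_standard(T) must return the standard reaction Gibbs energy (kJ/mol)
    for releasing one mole of water vapor at the reference pressure p_ref.
    """
    return p_ref * np.exp(-delta_g_standard(T) / (R * T))

# Toy dG0(T) = dH - T*dS with placeholder dH and dS (not the computed values):
toy_dg = lambda T: 60.0 - T * 0.15        # kJ/mol
for T in (300.0, 350.0, 400.0):
    print(T, equilibrium_p_h2o(toy_dg, T))
```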
Hydrolysis reaction of salt hydrates
Hydrolysis is a side reaction which produces HCl and H2O. To understand the effect of temperature on hydrolysis, we have varied the concentration of each of the reaction products while keeping the concentration of the other product fixed. Firstly, the equilibrium temperature is varied from 300 to 800 K at constant HCl pressure (0.0001 atm). This fixed low HCl pressure (p_HCl = 0.0001 atm) corresponds to a low pressure gradient and represents a slow hydrolysis rate. Such low concentrations of HCl can be used as safety limits for a heat storage system. The onset temperature of HCl formation (p_H2O = 0.001 atm) is shown in Figure 3. Hydrolysis is an undesirable reaction in the typical operating range (273-500 K) of a seasonal heat storage application. We have represented the hydrolysis curves in green for those hydrates whose hydrolysis starts above 500 K, and the rest in red. The difference between Figure 2 and Figure 3 is that in the latter case hydrolysis is considered. Hydrolysis starts at a higher temperature than dehydration because the enthalpy change in hydrolysis is much higher than the enthalpy change in dehydration (see Table 1). The effect of temperature on hydrolysis under constant water vapor pressure (p_H2O = 0.01 atm) is shown in Figure 4. The slope of the hydrolysis curve decreases from the dodeca-hydrate to the mono-hydrate, as shown in Figure 4. The molar ratio of HCl to H2O decreases from the mono- to the dodeca-hydrate; therefore a given change in temperature affects the mono-hydrate the most.
Conclusions
We have carried out GGA-DFT calculations to obtain the optimized structures, Bader atomic charges and frequencies of various CaMg2Cl6 hydrates. The atomic charge distribution reveals that the stability and hydration strength of the CaMg2Cl6 hydrates are dominated by electrostatic interactions. The structural properties, ground-state energies and harmonic frequencies are used to quantify the Gibbs free energy of each reactant for given T and P. The atomistic thermodynamic approach is then used to quantify the equilibrium product concentrations of thermolysis at various temperature and pressure conditions for the various hydrates of CaMg2Cl6.
The effect of temperature on the dehydration of tachyhydrite is similar to the experimental dehydration of the CaCl2 and MgCl2 hydrates. The hydrolysis reactions are investigated under constant p_HCl and p_H2O. The onset temperatures of HCl formation (hydrolysis) are obtained for a safety limit of p_HCl (0.0001 atm) under different temperature and pressure conditions. Hydrolysis of the tetra-, di- and mono-hydrates starts above 500 K at this safety limit. Hydrolysis of these hydrates under constant water vapor pressure (p_H2O = 0.01 atm) also begins above 500 K. In the absence of experimental studies, it can be concluded from the present calculations that CaMg2Cl6·10H2O, CaMg2Cl6·6H2O, CaMg2Cl6·4H2O, and CaMg2Cl6·2H2O are potential candidates for long-term seasonal heat storage. It is expected that these hydrates can improve the hydrolysis resistance compared to MgCl2 hydrates and can therefore enhance the durability of the system.
Acknowledgments
This work is part of the Industrial Partnership Programme (IPP) 'Computational sciences for energy research' of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). This research programme is co-financed by Shell Global Solutions International B.V. | 2018-12-05T20:57:16.368Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "0b37588e87916af8ce2c4826906deeceb25e6029",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/745/3/032003/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "75fb862079c1b80f98426ff35eacefdb81874c95",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
235292602 | pes2o/s2orc | v3-fos-license | Sustainability of biodiesel B30, B40, and B50 in Indonesia with addition of emulsifier
Sustainable energy is one of the main challenges of the 21st century. Indonesia is a developing country and ranks fourth in the world by population. The total increase in average population growth between 2000 and 2025 is projected at 33.2%. Thus, the problem of energy deficits must be addressed by the Government of Indonesia to overcome the shortage of energy resources in the future. The Indonesian government's policy on biodiesel began in 2015 and continues to undergo renewal. Starting in September 2018, Indonesia set the B20 rule, and then in January 2020, it began to shift to B30. By the end of 2020, it was targeted to have moved to B50. The concept of sustainability focuses on two things, among them the combination of environmental and economic considerations. One of the crucial points in sustainable development related to the development of biodiesel B30, B40, and B50 is that economic growth needs to be harmonized with efforts to preserve the environment through long-term maintenance of the availability of biological resources, increased productivity of agricultural systems, stability of the human population, limitations on economic growth, and improvements to the quality of the environment and ecosystem. Biodiesel in Indonesia, fatty acid methyl ester (FAME), is synthesized from palm oil. Emissions from biodiesel made from vegetable oils still contain high levels of NOx. From the standpoint of environmental impact, sustainable biodiesel therefore needs additives/emulsifiers so that the quality and stability of the biodiesel increase. In that way, biodiesel can be shown to be environmentally friendly compared with fossil fuels.
1. Introduction
Energy is a fundamental need for human life and a key to the modernization of the existing sectors. Meeting sustainable energy needs is one of the main challenges of the 21st century. Indonesia is a developing country and the fourth most populous country in the world. The population and energy needs are increasing day by day. The total population of Indonesia rose from 205,132,000 in 2000 to 233,477,400 in 2010 and is projected to reach 273,219,200 in 2025 [1]. Thus, the total increase in average population growth between 2000 and 2025 is projected to be 33.2%. The Indonesian government must address this problem to overcome the lack of energy resources in the future [2]. The Indonesian government's biodiesel policy began in 2015 through the Ministry of Energy and Mineral Resources and continues to undergo renewal. In September 2018, the mandate was set to B20, and it then began to switch to B30 in 2020. By the end of 2020, it was targeted to have shifted to B50 (Balitbang ESDM). Emissions from fuel oil consumption produce 40.9% of CO2 emissions in Indonesia. Although quite varied, emissions from natural gas consumption and coal use have increased steadily since the early 1970s and accounted for 15% and 38% of Indonesia's total emissions, respectively. With more than 225 million people, Indonesia's per capita emission level of 0.48 metric tons of carbon is well below the global average but has grown fivefold since the late 1960s [3]. Currently, fossil fuel-based energy such as oil, coal, and natural gas is Indonesia's primary energy source. The primary energy mix in 2019 is shown in Figure 1. Petroleum is the single largest energy source (38.8%), followed by coal at 33.3%, natural gas at 19.7%, and new and renewable energy (EBT) at 8.5%. The projections for 2025 (Figure 2) of the energy mix are EBT utilization at 23% and petroleum at 25%, which means more energy conversion from fossil fuels to EBT. The presence of biodiesel B30, B40, and B50 must remain sustainable with regard to environmental, social, and economic aspects. Fuel consumption needs in Indonesia have increased relatively strongly from year to year. This consumption can be seen in Figure 3. Oil consumption in Indonesia, shown by the blue line, continued to increase from 1965 to 2017. In contrast, oil production has decreased, starting in 2000. In Figure 4, we can see that Indonesia experienced an oil deficit from 1981 to 2017. Based on these data, Indonesia must now switch gradually to alternative energy, one of which is biodiesel. The government began to implement the use of B20 in 2018. The use of B20, which had only been running for one year, was then increased to B30. Of course, this becomes a challenge for policymakers to continue to make improvements so that B30 is better prepared to be applied. Through the Ministry of Energy and Mineral Resources (ESDM), the government has set out the biodiesel plan for Indonesia in the Minister of Energy and Mineral Resources Regulation No. 12 of 2015, which was later renewed in 2018. These regulations make the application of B30 in Indonesia even more vital. The use of biodiesel has been mandated for the micro-business sector, fisheries businesses, agricultural businesses, transportation, and Public Service Obligation sectors. It also applies to Non-Public Service Obligation types, the industrial and commercial transportation sectors, and power plants.
Besides that, the application of biodiesel B30 is also supported by the availability of FAME (Fatty Acid Methyl Ester) supply. The availability of FAME supply for B30 can meet the needs in Indonesia. FAME's industrial processing capacity currently reaches 12 million kiloliters. While the need for 2016 is 6 million kiloliters and 1.5 million kiloliters of exports. The estimated increase in FAME consumption of 3 million kiloliters when B30 is applied is still sufficient. Moreover, there is a guarantee of supply of raw materials in crude Palm Oil (CPO), which currently produces 42 million tons per year (Indonesian Biofuel Producers Association 2020). Today, the realization of the application of B30 biodiesel in Indonesia is being campaigned since the end of 2019 and officially implemented starting in 2020, triggering the industry's pros and cons. Industry players feel burdened by this policy (Indonesian Biofuel Producers Association 2020) because B30 can cause various problems, including biodiesel is more wasteful because of incomplete combustion, the engine used in the production process requires extra care. This condition is burdensome for businesses because they will incur additional costs for maintaining their production machines. Also, the mixture of FAME with diesel causes water to be formed, causing incomplete combustion and the deposition/crust resulting from this combustion process [4,5]. This incomplete combustion can cause emissions produced to be higher [6,7].
Some business operators have also accepted the government's policy regarding the implementation of B30 in Indonesia this year. The industry supports the program of using 30 percent biodiesel blends on diesel or B30 by 2020, on condition that the government must first test several types of vehicles. According to the General Secretary of the Indonesian Automotive Industry Association (Gaikindo) (2020), he hopes that the government can make sure the fuel matches the vehicle engine and does not add to the burden of maintenance. Besides, the use of B30 fuel can encourage producers to use cleaner fuels and reduce fossil fuels. Gaikindo's support also includes vehicles' provision to be tested in as many as four passenger vehicles and three trucks. The selection of these vehicles is based on the most significant domestic diesel vehicle users (Directorate General of Land Transportation, 2019). The application of B30 will increase FAME production, which means it will also increase byproduct production from the process. This byproduct can provide opportunities for other industries because the production of FAME from palm oil can produce derivative products from byproducts such as glycerol. Glycerol is widely used as a raw material for cosmetics and pharmaceutical industries [8]. Based on the regulations that have been implemented by the government, the use of biodiesel B30 replacement needs to get support from all parties so that it can be appropriately realized.
2. Concept of Sustainability
According to Bautista [9], sustainability has three frameworks that must be met in the classification of sustainability standards. Hierarchically, a sustainability standard is divided into principles, criteria, indicators, and guidelines. According to Hambali [10], there are ten indicators of bioenergy sustainability in Indonesia: two indicators on environmental aspects, three indicators on social aspects, and five indicators on economic aspects. One of the sustainability indicators discussed in this paper is the environmental aspect, with indicators in the form of the air quality resulting from raw material production, biodiesel production processes, transportation, and usage. The reference parameters are biodiesel emissions of PM2.5, PM10, NOx, SO2, and other pollutants.
Mankoff [11] stressed that sustainability is a form of human interaction with the environment. Sustainability focuses on two things. First, sustainable development is more than just growth; it also requires reducing materialistic consumption, making it more efficient, and balancing the benefits. Second, it requires integration between environmental and economic considerations. One crucial point in sustainable development related to the development of biodiesel B30, B40, and B50 is that economic growth needs to be harmonized with efforts to preserve the environment through long-term maintenance of the availability of biological resources, increasing productivity of the agricultural system, stability of the human population, limitations on economic growth, and continuous improvement of the quality of the environment and ecosystem. In terms of biodiesel development in Indonesia, one of the factors relevant to the sustainability side is the emission of gases to the environment. This can strengthen the sustainability aspect of this policy; in addition, according to Hambali [10], sustainability can be seen from two aspects: environmental and social.
3. Biodiesel Emissions from Various Raw Materials
Biodiesel emissions are important to emphasize and are a concern because they are one of the parameters of the environmental aspect. Biodiesel in Indonesia is developed from oil palm as fatty acid methyl ester (FAME). However, many biomass sources in Indonesia can be developed into biodiesel raw materials, derived from Jatropha curcas oil, palm oil, used cooking oil, and other vegetable oils. Incomplete combustion of the fuel causes CO emissions; if there is enough oxygen, CO is oxidized to CO2 [12,13]. For emulsified biodiesel, CO emissions increase at high load and are reduced at low load. This is caused by the higher latent heat of evaporation, which leads to incomplete combustion. More CO is produced at low and medium loads.
The cooling layer that occurs due to the evaporation effect of ethanol and water can increase CO production [14]. However, at 100% load, CO emissions for emulsion blends are 50-70% higher than for diesel because of the lower air-fuel ratio [12]. CO emissions of emulsified fuels are higher than those of diesel fuel at full load due to the smaller contact area between the engine and the fuel and the larger fuel droplets. However, a significant reduction in CO emissions is found when nano additives such as carbon nanotubes (CNT) are added. The addition of nano additives leads to a better fuel distribution in the engine and better utilization of the emulsified fuel [15]. CO emissions often increase with increasing water concentration [16]. The CO emissions produced by 5% water-emulsified biodiesel are 6.5% and 8.5% lower than those of 10% and 15% water-emulsified biodiesel, respectively. A higher amount of water means a higher amount of OH radicals, which can increase carbon oxidation. However, it turns out that fuels with high water content have lower CO emissions than diesel fuels [16]. For emulsified fuels, CO2 emissions increase when CO decreases, and vice versa. A reduction in CO2 emissions at higher loads, caused by better combustion, was found in [17]. For comparison, the emissions of emulsified biodiesel from various raw materials can be seen in Table 1.
4. Addition of Nano Additives to Reduce Biodiesel Emissions
NOx gas emissions produced by biodiesel-fueled engines are still relatively high, so reducing NOx emissions from biodiesel remains a concern for researchers. To be sustainable, NOx gas emissions from biodiesel must be as low as possible. Many researchers have made attempts to reduce NOx emissions from biodiesel-fueled diesel engines. A previous study [22] tried water-emulsified biodiesel and conducted experiments on a research engine. The results show that increasing the water concentration in biodiesel can reduce NOx emissions due to the absorption of latent heat by water particles during the combustion process. Another study [16] tried the water emulsification method with methyl esters of palm oil and diesel blends. Other studies [23,24] tried the process of water emulsification with used cooking oil and castor oil (jatropha methyl ester). The water-emulsified biodiesel experiments showed a positive effect on engine performance, NOx, and smoke for all test fuels. However, unburned hydrocarbon (HC) and carbon monoxide (CO) emissions increase as a result of a longer ignition delay period (IDP). A more extended ignition delay during the combustion process also leads to rough engine performance [12]. A comparison of adding nano additives to biodiesel can be seen in Table 2. It is necessary to add nano additives to emulsified biodiesel to reduce the IDP, refine engine performance, and reduce NOx gas emissions. Several nano additives are metal-based, such as nano cerium, nano alumina, and nano zinc oxide. During the combustion process, the presence of nanoparticles can contribute to better thermal conductivity and a better fuel contact area ratio. In addition, nanoparticles can also react with water and carbon atoms, thereby increasing the oxidation of soot [30]. A study [12] used alumina nanoparticles as additives for water-emulsified diesel fuel and achieved a significant reduction in NOx emissions. One study [18] showed an improvement in the combustion process, performance, and emission levels of water-emulsified biodiesel fuel using carbon nanotubes (CNT) as additives. It was also shown [12] that zinc oxide nanoparticles can shorten the ignition delay and improve emission levels in engines fueled with emulsified biodiesel. Nanoparticles have the potential to store energy, which can cause high reactivity. The addition of nano additives to emulsified biodiesel fuel needs to be investigated in Indonesia because this strongly supports the sustainability of FAME-based biodiesel in Indonesia. In the future, research on the use of nano additives in fuels can also be viewed from the techno-economic aspect. The sustainability of biodiesel must be proven and convincing, lest a negative side of biodiesel reduces its value as a renewable fuel in the future. Biodiesel should then no longer be merely an alternative that can replace fossil fuels; rather, fossil fuels must be replaced because they are not sustainable.
5. Conclusions
One aspect of biodiesel sustainability in Indonesia can be achieved by adding additives to biodiesel. These additives can be in the form of emulsifiers or nano additives. The addition of such additives has been proven to reduce NOx gas emissions, which are still regarded as a weakness of biodiesel. The concept of sustainability, of which the environmental aspect is one part, must be considered and become the focus of all parties. High NOx gas emissions should not be allowed to undermine the green, environmentally friendly image of biodiesel as a replacement for fossil fuels. Biodiesel should then no longer be seen merely as an alternative fuel to replace fossil fuels; rather, fossil fuels must be replaced because they are not sustainable. The problems that exist in biodiesel must be gradually resolved so that, in the future, the application of B100 biodiesel will no longer raise problems, especially in the environmental aspect.
"year": 2021,
"sha1": "86fbf3662b956d72184ca0a9918f170177638256",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/749/1/012026",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "86fbf3662b956d72184ca0a9918f170177638256",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
225915430 | pes2o/s2orc | v3-fos-license | Leaf Shedding Phenology of Ficus Glauca, Terminalia Catappa, and Cassia Fistula
Phenology is a study that explores periodic tree lifecycle events and how those events are influenced by seasonal climatic variation. This study aimed to observe the leaf-shedding period of three commonly found deciduous tree species in the tropics: Ficus glauca, Terminalia catappa, and Cassia fistula; and to analyze the climatic driving factors to trees’ phenological phases. A field survey was conducted to observe the samples, each species consisted of five trees. The survey was conducted weekly from September 2016 to February 2017 in Bogor City and Regency. It was found that F. glauca shed its leaves more than once a year. The canopy coverage reached its lowest in February (65.7% coverage). Leaf shedding process in T. catappa reached its peak in January (83.7% coverage), meanwhile, C. fistula’s shedding period is suspected to happen before September because its canopy coverage kept increasing during the survey (69.7%–95.1% coverage). The climatic factor that significantly affected F. glauca was the previous month’s rainfall. When the rainfall in the previous month decreases, the leaf shedding increases. T. catappa and C. fistula were significantly affected by day length. For T. catappa, when day length decreases, the leaf shedding increases. Meanwhile, for C. fistula, when day length increases, it is shedding its leaves. Leaf phenology of deciduous trees in a tropical climate was affected by different climatic factors depending on their species.
Introduction
Plant phenology is an interdisciplinary field of research that focuses on the timing of phases in the plant's life cycle (phenophases), such as leaf-flushing, flowering, fruiting, and leaf-shedding [1]. Phenology is widely utilized in biology and ecology. Plant phenological events are influenced by various environmental factors, such as temperature, rainfall, day length, etc. [2].
In the landscape architecture field, phenology is very useful in improving the quality of the landscape because the visual changes in plants can provide special effects for those who see it.
Seasonal changes that occur in plants can affect a person's visual perception of the landscape [3]. Plants are an important element in the composition of green open space areas that create the aesthetic value of the landscape. This makes identifying the visual change processes of plants through phenological observation important in the landscape architecture field.
Research on plant phenology keeps increasing due to the increase in global climate change issues. Various studies show that climate change causes shifts in the phenological phase timing [4] [5]. The shifting of phenological phase timings in plants may cause disruption of migration and breeding timings in animals, as well as various asynchronous events between species [6]. Those events may cause an ecosystem imbalance. By studying phenology, researchers can become more sensitive to environmental changes and can reduce the risk of the damage.
In Indonesia, research on tree phenology continues to grow, but existing research tends to focus only on the flowering period without observing the leaf phenology. Leaves are a large part of the tree that shows the physiological condition of the tree [7]. Therefore, leaf phenology is needed to be studied further.
In this study, we observed the leaf shedding phenology of three tree species, which were Ficus glauca (Liebm.) Miq., Terminalia catappa (Linn.), and Cassia fistula (Linn.). These three species are deciduous tree species that are commonly found in tropical climate landscapes. F. glauca (local name: Bunut) is a tree of the fig family that has a wide canopy and hanging aerial roots. T. catappa (local name: Ketapang) is a tree with a spreading canopy and open branches that are often planted as shade or ornamental trees. C. fistula (local name: Trengguli) is a deciduous tree with yellow flowers that is often planted as an ornamental tree.
Limited knowledge of the leaf shedding phenology of these trees makes the timing unknown. Therefore, this study aimed to observe the leaf-shedding and leaf-flushing period in three deciduous tree species: Ficus glauca, Terminalia catappa, and Cassia fistula, and to analyze the role of climatic factors in the leaf phenological phases of the said species.
Study area
This study was conducted through a field survey to the samples consisting of three deciduous tree species, namely F. glauca, T. catappa, and C. fistula. Each species consisted of five tree samples. Tree samples were chosen deliberately by considering the condition of the tree. The trees should look relatively uniform, mature, and appear healthy. The proximity of each sample was also considered. Figure 1 shows the locations of the samples. The study area is the living locations of the sample trees. One research location in Bogor Regency was the parking lot of the Faculty of Animal Science, Institut Pertanian Bogor Dramaga (T. catappa's place of growth). The other three locations in Bogor City were Kebun Percobaan Balai Penelitian Tanaman Rempah dan Obat Cimanggu (C. fistula's place of growth), and two road greenbelts at Jalan Dr. Semeru and Jalan Ir. H. Juanda (F. glauca's place of growth). The leaf shedding phenological phase was observed for 6 months from September 2016 until February 2017.
Data collection and analysis method
The timing of leaf shedding phenology was estimated by observing the canopy coverage of the tree. When the canopy coverage decreases, it is assumed to indicate the leaf shedding phase; when the canopy coverage increases, it is assumed to indicate the leaf flushing phase. Canopy sample photos of the trees were taken once a week from a consistent distance using a Nikon Coolpix S4400 digital camera. The canopy coverage was estimated digitally following the photo grid analysis method [8], with a photo size of 3000 x 4000 px. To examine climatic factors that affect the leaf shedding phenology, we collected the following data: rainfall, rainy days, temperature, relative humidity (RH), day length (DL), solar radiation (SR), and wind velocity (WV). The data were obtained from the Meteorological, Climatological, and Geophysical Agency (BMKG). The influence of climatic factors on the percentage of tree canopy coverage was analyzed by linear regression in SPSS 24. The canopy coverage data were analyzed against three periods of climatic data: weekly, monthly, and previous monthly. This is to detect the possibility that some climatic factors do not affect leaf phenology directly; certain climate factors may take time to affect leaf phenology. Therefore, in the analysis, the previous month's climatic data were also considered. A sketch of the grid-based coverage estimate is given below.
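The exact grid-analysis workflow of [8] is not detailed here; the following is a minimal sketch of one plausible implementation, in which the photo is divided into a regular grid and each cell is counted as "canopy" when the fraction of dark (non-sky) pixels exceeds one half. The file name, grid size, and brightness threshold are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def canopy_coverage(photo_path, grid=(20, 15), sky_threshold=180):
    """Estimate canopy coverage (%) from an upward canopy photo.

    The image is converted to grayscale and split into grid cells; a cell
    counts as covered when more than half of its pixels are darker than
    'sky_threshold' (i.e. not bright sky)."""
    img = np.asarray(Image.open(photo_path).convert("L"), dtype=np.uint8)
    rows, cols = grid
    h, w = img.shape
    covered = 0
    for i in range(rows):
        for j in range(cols):
            cell = img[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
            if np.mean(cell < sky_threshold) > 0.5:
                covered += 1
    return 100.0 * covered / (rows * cols)

# Example usage (hypothetical file name):
# print(canopy_coverage("ficus_week12.jpg"))
```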
General climatic condition
The climatic condition during the observation period is shown in Table 1. The data were collected from the same station except for the rainfall data which had been collected from three different stations. The stations were Dramaga (Dr), Kebun Raya Bogor (KRB), and Cimanggu (Ci) station. Every station's data was used for different species due to different places of growth, Dramaga station for T. catappa, KRB for F. glauca, and Cimanggu for C. fistula. Each station showed a similar trend, the rainfall increased from August to September, then slowly decreased until November, and significantly decreased in December and January, then lastly significantly increased in February.
The average temperature (T) in the study area was relatively stable but it seemed to noticeably decrease in February. Relative humidity (RH) also seemed stable with an average of 84.4%, so is wind velocity with an average of 4.3 km/hr. Day length (DL) data is the percentage of the length of the day in 12 hours. If the sun shines for 12 hours in a day, then the day length would be 100%. Based on Table 1, day length tended to decrease towards the end of the observation period, but solar radiation (SR) tended to fluctuate throughout the period.
Ficus glauca
During the observation period, the lowest canopy coverage of 65.7% occurred in February, meanwhile, the highest of 89.5% occurred in October ( Figure 2). In this study, we estimated trees' leaf-shedding percentage through trees' canopy coverage. We assumed that when the canopy coverage is low, the leaf-shedding percentage is high. This assumption would make the lowest canopy coverage as the peak of the leaf-shedding phase.
If observed weekly, the lowest canopy coverage occurred in the 20 th week (3 rd week of January) with a coverage of 63.2% (Figure 3). Meanwhile, the highest canopy coverage occurred in the 11 th week (3 rd week of November) with a coverage of 93.0%. The lowest average canopy coverage in both weekly and monthly graphs is still quite far from 0%. But if the samples were observed individually, the lowest canopy coverage almost reached 0% as seen on the 3 rd , 4 th , 10 th , and 18 th week. These conditions are represented with minimum values (diamond symbols) in Figure 3. These findings show that the leaf shedding was asynchronous between F. glauca samples despite growing in a close and similar environment. The leaf shedding period in F. glauca individuals did not occur at the same time. This is suspected to happen because of the nature of some Ficus species. Ficus tend to show intrapopulation inter-tree asynchronous phenological phase, yet show strong intra-tree synchronous phenological phases [9]. There was an interesting finding in F. glauca's leaf shedding phase, the leaf flushing process occurred within one week after it shed its leaves. The photos of one leaf-shedding F. glauca sample in a span of a week are shown in Figure 4.
F. glauca shed their leaves more than once a year. During the observation period, there were individuals that shed their leaves twice. Each individual is suspected to shed their leaves 2-5 times a year without a fixed interval, similar to Ficus fistulosa [10].
A summary of the significant climatic factors that affect F. glauca's leaf shedding is shown in Table 2. The most significant factor is the rainfall in the previous month, including both the total rainfall and the average daily rainfall. When the rainfall in the previous month decreases, the canopy coverage in the next month decreases as well (leaf shedding increases). Low rainfall in the previous month causes soil humidity to decrease and drought stress to increase. This shows that F. glauca is not directly affected by rainfall, but by soil humidity instead [11]. Another significant climatic factor is wind velocity. If the wind velocity increases, the canopy coverage decreases (leaf shedding occurs). When the tree is exposed to drought, the petiole weakens [12] and the wind may cause the petiole to flutter [13]. Therefore, the leaf is more vulnerable to falling. A sketch of the type of lagged regression used to detect such previous-month effects is given below.
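The original analysis was performed in SPSS 24; the following is a minimal equivalent sketch in Python, regressing monthly canopy coverage on the previous month's rainfall. The numbers in the example arrays are placeholders, not the measured data of this study.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder monthly series (Sep-Feb): canopy coverage (%) and rainfall (mm).
coverage = np.array([86.0, 89.5, 88.0, 83.0, 70.0, 65.7])
rainfall = np.array([420.0, 390.0, 360.0, 150.0, 120.0, 400.0])

# Lag the predictor by one month: coverage in month t vs rainfall in month t-1.
y = coverage[1:]
x = sm.add_constant(rainfall[:-1])

model = sm.OLS(y, x).fit()
print(model.params)    # intercept and slope of previous-month rainfall
print(model.pvalues)   # significance of the lagged rainfall effect
```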
The interaction between temperature and relative humidity (RH) also significantly affected leaf shedding in F. glauca. The analysis results show that when temperature and RH increase, the canopy coverage increases (leaf shedding decreases). This might be related to the opening and closing of stomata. Stomatal opening is not directly affected by changes in relative humidity, but it is related to the humidity gradient [14]. When the temperature increases, the humidity gradient increases, or becomes steeper. A steeper humidity gradient means more stomatal closing during the day. The closing of stomata is the plant's effort to increase water efficiency and decrease the transpiration rate. This phenomenon of stomatal closing might be the cause of less leaf shedding when the temperature and relative humidity are high.
Terminalia catappa
In the weekly graph (Figure 6), the lowest canopy coverage occurred in the 19th week (2nd week of January) with a percentage of 76.8%, and the highest occurred in the 1st week (1st week of September) with a percentage of 98.2%. There was no significant difference between the weekly samples' minimum and maximum values. The highest difference is seen in the 19th week because there was an individual whose canopy coverage was relatively low. The frequency of leaf shedding of T. catappa within one year cannot be confirmed through this observation. According to Orwa et al. [15], leaf shedding usually occurs twice a year. In a subtropical location such as Florida, especially when there is a sudden rain in winter, T. catappa sheds its leaves synchronously so that the tree becomes leafless, and the leaf flushing period follows soon after. T. catappa might not become leafless in the study area because it differs from its natural habitat on ocean beaches, coastal plains, or near river mouths [15].
Based on the analysis result, climatic factors that significantly affected T. catappa's leaf shedding phenology are day length and the interaction between day length and solar radiation ( Table 3). The canopy coverage percentage of T. catappa and day length has a positive correlation so when day length increases, the canopy coverage also increases. When day length decreases, the canopy coverage decreases, or it is assumed as leaf shedding increases.
Some species' leaf shedding phase may be affected by the combination of aging leaves and the decrease of day length [16]. Another study found that in places with low fluctuation of day length, such as tropical countries, an increase in day length of 30 minutes or less may induce leaf flushing [17].
Cassia fistula
During the observation period, the canopy coverage kept increasing from the beginning of observation in September until January and then slowly decreasing in February (Figure 7). Due to this trend, the leaf shedding timing is still unknown, but it is suspected to occur before September. The weekly canopy coverage of C. fistula is shown in Figure 8. Judging from the minimum and maximum value, the inter-individual variation between C. fistula samples seemed to be quite low. In India, leaf flushing phase of C. fistula usually occurs from March to July and the flowering phase starts from April to July, or sometimes in October [15].
The climatic factors that significantly affect the leaf phenology of C. fistula are summarized in Table 4. Based on the analysis results, day length and the interaction between day length and solar radiation are significant factors. When day length increases, the canopy coverage decreases, and vice versa. In other words, when day length increases, C. fistula sheds its leaves. When day length increases, air temperature tends to increase. High temperature may cause soil humidity to decrease until the soil becomes dry, which drives the leaf shedding process. This is what might happen with C. fistula, so that it sheds its leaves when day length increases.
Leaf shedding phenology utilization in the landscape
Utilizing the trees leaf shedding phenology may be quite challenging since the timing of each tree might differ from each other. To create a uniform visual of the trees, we can try to modify the microclimate of the growth place. For F. glauca, since soil humidity is a significant factor, we should manage the water availability in the soil for each tree. Drought and flooding activity is a method to manage the water balance in the soil [18]. Drought method is to limit the amount of available water for the tree, meanwhile flooding method is to supply more water for the tree.
For T. catappa and C. fistula, which leaf phenology is most affected by day length, the duration of sun exposure and the amount of solar radiation need to be managed. One way to manage it is to alter the structure of the tree canopy by reducing the biomass of the tree with pruning, thinning, girdling, or defoliation [18]. If the canopy size is relatively uniform, the leaf phenology timing should occur in the near time.
Utilizing tropical deciduous trees in landscape designs should be done with care. Visually, the trees would look good if they are mass-planted in a row so the leaf shedding process would be able to catch people's attentions. The trees will enhance the landscape quality by adding a temporary attraction to the site. The landscape manager should maintain the microclimate by considering the species' significant climatic factors to adjust the leaf phenology timing.
Conclusion
The leaf shedding period of tropical deciduous trees is determined by different climatic factors depending on the species. F. glauca shed its leaves around February, and the leaf flushing phase occurred about one week afterwards. The climatic factors that mainly drove the leaf shedding were low rainfall in the previous month and high wind velocity at the time. For T. catappa, the peak leaf shedding period occurred in January, when the day length tended to decrease. On the contrary, C. fistula shed its leaves when the day length tended to increase, which is suspected to have occurred outside the observation period, before September.
To optimally utilize the leaf shedding phenology of the trees, microclimate modification may be needed. Drought and flooding methods could help in maintaining water soil availability for F. glauca. As for T. catappa and C. fistula, modifying the structure of the tree canopy is necessary to manage the sun exposure received by the tree. By creating the same living environment for every tree, the leaf phenological phase timing hopefully could be easily predicted. In the landscape, these trees are better planted in a row so the leaf phenology could catch people's attentions. Landscape managers should carefully manage the visual and the microclimate of the trees' environment. | 2020-06-11T09:03:26.380Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "f9320f5ba704d78b02a15d0353fb62ea3758701d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/501/1/012039",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ae0714041a6d495853d598a465e3f0f726b5cc2e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Physics"
]
} |
255898986 | pes2o/s2orc | v3-fos-license | Lateral resolution enhanced interference microscopy using virtual annular apertures
The lateral resolution in microscopic imaging generally depends on both, the wavelength of light and the numerical aperture of the microscope objective lens. To quantify the lateral resolution Ernst Abbe considered an optical grating illuminated by plane waves. In contrast, the Rayleigh criterion holds for two point sources or point scatterers separated by a lateral distance, which are supposed to emit spherical waves. A portion of each spherical wave is collected by the objective lens and results in an Airy disc corresponding to a diffraction limited intensity point spread function (PSF). If incoherent illumination is employed the intensity PSFs related to different scatterers on an object are added resulting in the well-known Rayleigh resolution criterion. In interference microscopy instead of the intensity the electric field scattered or diffracted by an object will be affected by the transfer function of the optical imaging system. For a reflective object the lateral resolution of an interference microscope can be again characterized by the Abbe limit if the object under investigation is a grating. However, if two irregularities on a flat surface are being imaged the resolution no longer obeys the Rayleigh criterion. Instead, it corresponds to an optical system with an annular aperture and thus surpasses the prediction given by the Rayleigh criterion. This holds true for both, amplitude as well as phase objects, as it will be elucidated in this study by theoretical considerations, simulation results and an experimental proof of principle.
Introduction
Extending the lateral resolution capabilities of optical microscopes is an essential subject of current research, as it directly enables the investigation of structure dimensions below the diffraction limit and thus broadens the range of application of microscopic two- and three-dimensional imaging. Various resolution enhancement and super-resolution techniques have been investigated in the context of conventional and confocal optical microscopy, with particular focus on biological objects [1,2]. These methods are mostly dedicated to fluorescence microscopy and thus require a special labelling of the specimen under investigation. Since we are primarily interested in characterizing nanostructured surfaces produced by engineering processes, fluorescence methods are not the subject of this contribution, as labelling is not desired in this context. Label-free methods for resolution enhancement are based on structured illumination microscopy [3] and microsphere-assisted microscopy [4], for example. In contrast, resolution enhancement techniques have so far rarely been applied in interference microscopy. Nevertheless, techniques such as microsphere assistance or immersion systems known from conventional microscopic imaging are being used in interference microscopy too [4][5][6][7][8][9].
However, due to the complex composition of the signals, additional options for resolution enhancement exist in interference microscopy. For example, it is well known that improved lateral resolution can be achieved by use of annular apertures located in both the illumination pupil and the pupil plane of the imaging path of a microscope [10][11][12]. If reflective phase objects or three-dimensional micro-topographies are to be measured, interference microscopy is often employed in order to gain depth information [13]. A so-called depth scan changes the distance between the microscope and the object, and thus recording a series of images results in a 3D image stack, which enables the numerical reconstruction of the surface topography. As a consequence, the transfer characteristics of a depth-scanning microscope can be best described by an optical 3D transfer function [14][15][16][17][18][19][20][21]. The two lateral spatial frequency coordinates of the 3D transfer function are related to the transversal spatial frequency axes of the microscopic image. The axial spatial frequency results from the interference signals occurring due to the depth scan. A familiar approach in this context is to define a so-called equivalent wavelength of the interferogram and to use conventional two-dimensional Fourier optics modeling [22]. However, we recently derived an analytic expression for the 3D transfer function of an interference microscope that holds for surface topographies characterized by specular reflection and diffraction [20,21]. It should be noted that the spectral composition of an interference signal depends not only on the spectral distribution of the light source and the spectral sensitivity of the camera but also on the pupil functions of the microscope and the predominant lateral spatial frequencies of the surface topography [23,24]. If, for simplicity, monochromatic light is used, the pupil function and the lateral spatial frequency distribution of the object's surface will be most dominant. Here, we presume constant intensity in the illuminating pupil plane and a constant apodization factor due to specular reflection, which leads to an unambiguous 3D transfer function [21]. Therefore, the lateral spatial frequency distribution of an interference image stack is given by the 3D spatial frequency representation of the complex reflection function of the measuring object. This spatial frequency distribution is weighted by the 3D transfer function. On the other hand, the axial spatial frequency corresponds to the frequency at which an interference signal of a single camera pixel is analyzed and thus can be varied by setting the parameters of the signal processing algorithm. We call the relevant parameter 'evaluation wavelength'. This is the wavelength which is used to calculate the phase value and the envelope position of a CSI (coherence scanning interferometry) measurement signal. With the evaluation wavelength λ_eval the corresponding axial spatial frequency value results in q_z,eval = 4π/λ_eval, i.e. it decreases from 2k_0 to 2k_0 √(1 − NA²) as λ_eval increases from λ_0 to λ_0/√(1 − NA²),
where λ 0 is the central wavelength of light, emitted by the light source and NA represents the numerical aperture of the system. In the following we will show that the 2D transfer function in the q x q y -plane corresponding to a certain choice of λ eval shows significant changes along the q z -axis. This is accompanied by a strong dependence of the lateral resolution capabilities on the evaluation wavelength. We demonstrate that the highest lateral resolution is reached for the maximum evaluation wavelength λ eval = λ 0 / √ 1 − NA 2 . For a grating the achievable lateral resolution δ A corresponds to the diffraction limited resolution given by the Abbe limit δ A = 0.5 λ/NA. Lateral resolution beyond the fundamental Abbe limit is sometimes referred to as super-resolution [25]. However, with respect to the smallest resolvable distance between two irregularities in microscopic imaging with incoherent illumination the Rayleigh resolution criterion δ R = 0.61 λ/NA becomes relevant [25]. In the following sections we show that lateral resolution well below the value given by the Rayleigh criterion can be reached employing interference microscopy and using what we call a virtual annular aperture.
The experimental setup used throughout this study is shown schematically in figure 1(a). An interference signal called correlogram is obtained during the depth-scan along the z-axis from a flat area of polished silicon. Figure 1(b) displays a correlogram employing a royal blue LED of 447 nm central wavelength and 20 nm wavelength bandwidth (FWHM) for illumination. The low number of visible interference fringes is due to the high NA of 0.9 of the objective lenses. The absolute value of the Fourier transformed correlogram plotted over the wavelength scale in figure 1(c) demonstrates that the spectrum of the correlogram is much broader than the spectrum of the light source and extends to wavelengths of more than 1000 nm. This phenomenon leads to the well-known NA-effect [26][27][28][29], which is usually considered by the equivalent wavelength instead of the central wavelength of the light source in CSI signal processing [22].
Three-dimensional transfer function
As mentioned above, three-dimensional transfer functions (3D TFs) represent the full transfer capabilities of microscope systems in the spatial frequency domain. 3D TFs can be obtained from the Ewald sphere representation of the wave vectors of incident plane waves and waves scattered from a point object [10]. The limitation of the angular distribution of these wave vectors by the numerical aperture of a microscope objective lens results in truncated spherical caps instead of full spheres as it was pointed out by McCutchen [30]. Thus, the corresponding Ewald spheres are sometimes called McCutchen spheres [18]. The 3D TF of a microscope results from a three-dimensional correlation of the spherical caps corresponding to the incident and scattered wave vectors [16,20]. According to our previous studies mentioned above, the shape of the 3D TF of a diffraction limited interference microscope with uniform monochromatic pupil illumination of wavelength λ 0 depends on the surface under investigation [20,21]. In contrast to surfaces characterized by single point scatterers for specularly reflective or diffractive continuous surfaces the normalized 3D TF results in: The vector q in the spatial frequency domain with transverse spatial frequency q ρ = √ q 2 x + q 2 y and axial spatial frequency q z represents the difference between the wave vector k s of the scattered light field and the wave vector k in of the incident wave: q is defined in terms of the wavenumber k 0 = 2π/λ 0 and the polar and azimuth angles θ in and ϕ in of the incident wave as well as the angles θ s and ϕ s of the scattered wave. Considering the distance q ρ from the q z axis of a system with the numerical aperture NA, the values q z,min , q z,0 and q z,max are given by: For q z > q z,max and q z < q z,min the 3D TF will cut off, i.e. there will be no more contribution to interference signals. Note that q z,0 and q z,max depend on the transverse spatial frequency q ρ . The 3D transfer function H(q ρ , q z , k 0 ) = H(q x , q y , q z ) for monochromatic light of wavelength λ 0 = 440 nm and NA = 0.9 is plotted in figure 2(a). The upper and lower meshes are related to the boundaries of H(q x , q y , q z ) at q z,max and q z,0 . The values of q z,max represent the outer sphere in figure 2(a), which corresponds to the backscattering directions. The lower mesh is given by q z,0 values belonging to the maximum angle of incidence θ max with respect to the optical axis. Finally the constant value q z,min represents the plane, were both, the angle of incidence and the scattering angle, equal θ max . The colors of the mesh correspond to the values of the function H(q x , q y , q z ). Figure 2(b) demonstrates that even for out-of-plane rays related to q z,min < q z < q z,0 , H(q x , q y , q z ) shows non-zero values. For q x = q y = 0, in agreement with [29,31] H(q x , q y , q z ) = H(q z ) is proportional to the axial spatial frequency q z . Since CSI signals are typically analyzed at a certain evaluation wavelength corresponding to a certain value of q z , figure 3 shows exemplarily four horizontal 2D cross sections of H(q x , q y ) at q z /2k 0 = 0.77, 0.67, 0.56, and 0.44. These cross sections are named 'partial transfer function H p (q x , q y )' in the following. The radius of the outer circular boundary of a partial transfer function represents the spatial frequency bandwidth of the interference microscope for this particular partial transfer function. 
The partial transfer functions shown in figure 3 correspond to evaluation wavelengths λ eval of 574, 658, 787, and 1004 nm, respectively. Their shape depending on the lateral spatial frequency q ρ determines details of the transfer characteristics.
Obviously, the lateral spatial frequency bandwidth, which corresponds to the radius of a given partial TF, increases as the evaluation wavelength increases. In CSI measurement the evaluation wavelength is typically adjusted such that it coincides with the central peak of the spectrum obtained from an interference signal [22,29]. This corresponds to the partial TF according to figure 3(a), where the evaluation wavelength λ eval = 574 nm is approximately 30% longer than the central wavelength λ 0 = 440 of the illuminating light. Hence, if we select a short evaluation wavelength, i.e. q z > q z,0 the corresponding partial transfer function will be a circular disc (see figure 3(a)) and its Fourier transform leads to an Airy disc in object space as it was found by Abdulhalim [32,33].
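As a consistency check of the quoted evaluation wavelengths, the sketch below converts the normalized axial spatial frequencies q_z/2k_0 of the four cross sections into evaluation wavelengths via λ_eval = 4π/q_z = λ_0/(q_z/2k_0) and also evaluates the limiting value √(1 − NA²); small deviations from the quoted 574, 658, 787, and 1004 nm arise from rounding of the q_z/2k_0 labels.

```python
import numpy as np

lam0 = 440e-9   # central wavelength (m)
NA = 0.9

# Normalized axial spatial frequencies q_z / (2 k_0) of the four cross sections
qz_over_2k0 = np.array([0.77, 0.67, 0.56, 0.44])

# Evaluation wavelength corresponding to each cross section
lam_eval = lam0 / qz_over_2k0
print(np.round(lam_eval * 1e9, 1))          # -> approx. [571.4 656.7 785.7 1000.0] nm

# Lower boundary q_z,min / (2 k_0) and the associated maximum evaluation wavelength
qz_min_norm = np.sqrt(1.0 - NA**2)
print(round(qz_min_norm, 3))                # -> approx. 0.436
print(round(lam0 / qz_min_norm * 1e9, 1))   # -> approx. 1009.4 nm
```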
In contrast, the highest lateral resolution is reached for the longest evaluation wavelength λ eval = 1004 nm, which corresponds to q z /2k 0 = q z,min /2k 0 = 0.44. Note that according to figure 3(d) the shape of this partial transfer function equals the partial transfer function for incident and scattered rays including a maximum angle θ max with respect to the optical axis. Thus, a similar result is to be expected for an annular aperture of maximum diameter k 0 NA. However, in case of interference microscopy the evaluation wavelength λ eval can be adjusted by software settings such that no physical annular aperture is necessary. Therefore, we use the term 'virtual annular aperture' in context with long evaluation wavelengths, where light rays contributing to the interference signal propagate under angles θ in ≈ θ s ≈ θ max with respect to the optical axis. In order to create a virtual CSI instrument the partial transfer function needs to be multiplied by the scattered light field in the spatial frequency domain. Based on the scalar Kirchhoff or physical optics approximation the scattered light field U s (q) can be obtained from the field U obj (x, y, q z ) directly above the surface of an object [22, 34,35]: where the area A of integration corresponds to the field of view of the microscope. Defining an appropriate field illumination function A(x, y) equation (6) can be written as a two-dimensional Fourier transform: Note, that U obj depends on the lateral coordinates x and y and on the axial spatial frequency q z . This is due to the fact, that in interference microscopy the object is often considered as a pure phase object such that U obj (x, y, q z ) results in: where U 0 is a constant field amplitude and s(x, y) is the surface height function, which modulates the phase of the reflected field directly above the object. The same object field results via Fourier transform with respect to the z coordinate from the so-called foil-model of a surface [21,36]. However, a reflective object may also cause a spatial amplitude modulation of the field instead of a phase modulation. If the angular dependence of the reflectivity is neglected this results in: where a(x, y) is the amplitude modulation function. Now, in q-space the intensity change ∆I (q) due to interference, which is transferred by the measuring instrument, results from frequency domain filtering of the Fourier representation of the light field U s (q) scattered from the object using the 3D transfer function: However, as mentioned above, in interference microscopy the resulting interferogram is usually analyzed at a certain evaluation wavelength λ z,eval = 4π/q z,eval . If this is considered in equation (10) the partial transfer function H p (q x , q y ) = H(q x , q y , q z,eval ) comes into play: Consequently, phase and amplitude of the complex interference intensity ∆Ĩ (x, y) in the object space directly follow from inverse 2D Fourier transform of ∆I ( q x , q y ) : If the object under investigation is a phase object the reconstructed surface height function s rec (x, y) results from the phase obtained from ∆Ĩ (x, y): If on the other hand the object is an amplitude object, its reconstructed amplitude distribution U 0 a rec (x, y) is given by: Note that U 0 a rec (x, y) is proportional to the amplitude reflection coefficient and thus may take negative values. 
In conventional microscopy the intensity distribution I(x, y) of an object can be equivalently obtained as the inverse Fourier transform of the product of the Fourier transformed object intensity Ĩ_obj(q_x, q_y) and the well-known modulation transfer function MTF(q_x, q_y), which holds for a diffraction-limited system with spatially incoherent illumination [37]. Under the assumption of constant pupil illumination the two-dimensional MTF(q_x, q_y) and the 3D TF H(q_x, q_y, q_z) are related to each other via the projection slice theorem [38], i.e. MTF(q_x, q_y) ∝ ∫ H(q_x, q_y, q_z) dq_z. Note that the intensities I(x, y) and I_rec(x, y) are proportional to the intensity reflectivity and are thus limited to positive values. The MTF(q_x, q_y) is also used to characterize the 2D transfer characteristics of interference microscopes [22,39,40]. However, this approach is just a rough approximation, since in contrast to 2D imaging, interference signals of an image stack are analyzed pixel by pixel with respect to phase and envelope. In the following we are interested in the question of how well two amplitude or phase irregularities, located a certain distance apart from each other, can be resolved by interference microscopy. For amplitude objects the results can be compared with the Rayleigh resolution limit, which follows from equation (15).
Simulation
In order to elucidate the consequences of the above mentioned transfer characteristics and to point out the differences in lateral resolution between conventional and interference microscopy this section introduces simulation results for both, phase objects as well as amplitude objects. In addition, the discrepancies in lateral resolution in case of gratings of certain period compared to two separated irregularities at a certain lateral distance d from each other will be discussed.
First, we define a test structure, which represents a single irregularity a i (x, y) of diameter d in the xy-plane: This single irregularity can be added periodically to form a 2D amplitude object: If the difference M max − M min is a large number a(x, y) is an amplitude grating in x-direction, and the same for the y-direction. Otherwise, if M min = 0, M max = 1 the object comprises two irregularities in a distance d in x-direction. Using equation (8) and introducing the surface height factor s 0 one can easily transform any amplitude object into a phase object, where: is the surface height function. In the following we assume that the field illumination function A(x, y) extends over an area with dimensions that are very large compared to the distance d between two neighboring surface irregularities. Hence, the periodic extension of A(x, y) due to the application of the discrete Fourier transform does not affect the final result and equation (7) can be seen as a two-dimensional Fourier transform of the object field U obj (x, y, q z ). Figure 4(a) shows a two-dimensional phase grating corresponding to the surface height function s(x, y) of d = 2 µm period and s 0 = 25 nm total height difference. Figure 4(b) is the reconstructed surface height function s rec (x, y), which results if the partial transfer function H p (q x , q y ) according to figure 3(a) is used. Figure 4(c) shows the difference between (a) and (b) and indicates a nearly perfect reconstruction, which is due to the rather long period of 2 µm. Note that the reconstructed surface height function shows maximum deviations below 0.1 nm, although the partial transfer function H p (q x , q y ) has a constant value of 0.766 over the whole transfer range of: This is due to the phase object, where both, the real and the imaginary part are multiplied by the same value, such that the phase angle remains unchanged. Figure 5(a) shows an amplitude grating a(x, y) with a period d of 170 nm in x and y direction, whereas in figure 5(b) a(x, y) represents an amplitude grating along the x-axis and a double slit in y-direction, i.e. two parallel gratings in x-direction separated by a distance d = 170 nm in y-direction. In figure 5(c), a(x, y) shows amplitude irregularities in a quadratic arrangement, i.e. a double slit with d = 170 nm in x-and y-direction.
Therefore, the angular grating frequency 2π/d = 36.96 µm⁻¹ is blocked by the partial transfer function, and in figure 5(g) the grating structure is no longer visible. According to figure 5(e), the Fourier transform of the two parallel gratings of figure 5(b) results in discrete vertical lines representing the diffraction orders of the grating, whereas the double slit arrangement along the y-direction leads to a cosinusoidal modulation along the q_y-axis, which still shows first-order contributions different from zero for |q_y| ⩽ 25.7 µm⁻¹. Consequently, the vertical double slit structure is resolved in figure 5(h), whereas the grating structure is not. Figure 5(i) shows that the quadratic arrangement of irregularities is resolved and thus confirms the argument above. However, note that the reflectivity changes of the reconstructed amplitude object are much smaller than the original values. This follows from the rather small values of the partial transfer function H_p(q_x, q_y) according to figure 3(d) [20]. Figure 6 shows the same effects but for phase objects with distances d of 170 nm between irregularities of height s_0 = 100 nm. Again, the double slit arrangement is resolved even if the diffraction orders due to the grating structure are filtered out by the partial transfer function. In this case of a phase object the amplitudes of the resolved surface irregularities are much smaller than the original height difference of 100 nm.
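The grating versus double-slit behaviour discussed above can be reproduced qualitatively with a few lines of Python. The sketch below is our own toy model: the inner cut-off, constant value and exact shape of the assumed annular partial transfer function, as well as the slit width, are assumptions, while the 170 nm spacing and the 25.7 µm⁻¹ pass-band edge are taken from the text.

```python
import numpy as np

# Grid: 1024 samples over a 20 µm field of view (purely illustrative values)
N, L = 1024, 20.0
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x)

def amplitude_object(d, m_max, width=0.05):
    """Sum of slit-like irregularities of width `width` µm repeated every
    d µm in x (uniform in y); m_max = 1 gives a double slit, a large
    m_max gives a grating."""
    a = np.zeros((N, N))
    for m in range(0, m_max + 1):
        a += (np.abs(X - m * d) < width / 2).astype(float)
    return a

grating = amplitude_object(d=0.17, m_max=40)     # 170 nm period grating
double_slit = amplitude_object(d=0.17, m_max=1)  # two slits 170 nm apart

# Assumed annular (band-pass) partial transfer function H_p(q): constant
# between an assumed lower edge and the 25.7 µm^-1 upper edge from the text.
q = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
QX, QY = np.meshgrid(q, q)
QR = np.hypot(QX, QY)
H_p = 0.05 * ((QR > 15.0) & (QR < 25.7)).astype(float)

def reconstruct(a):
    return np.real(np.fft.ifft2(np.fft.fft2(a) * H_p))

rec_grating, rec_slits = reconstruct(grating), reconstruct(double_slit)
# The 170 nm grating (2*pi/d ~ 37 µm^-1) falls outside the pass band and
# vanishes, while the double slit keeps lower-frequency content and survives.
```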
Finally, figure 7 shows results for two horizontally shifted irregularities separated by distances d = 170 nm in (a) and (d), d = 180 nm in (b) and (e), and d = 190 nm in (c) and (f). Comparison of the subfigures corresponding to the same distance value d reveals that there is no difference between the amplitude and phase resolution capabilities. Due to the normalization, according to the generalized Rayleigh criterion two irregularities are resolved if the value of the local minimum at x = 0 does not exceed 0.735 times the maximum value [25]. This criterion holds for distances d > 190 nm. Thus, compared to conventional imaging the lateral resolution is improved by 36%. The situation corresponding to d = 170 nm was shown in figures 5 and 6. Even in this case a local minimum is visible at x = 0.
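The dip-based test used here is easy to script. The following sketch (our own, with a Gaussian stand-in for the actual partial PSF, so its printed separations should not be read as the instrument's resolution) superposes two laterally shifted copies of a normalized profile and checks whether the local minimum at x = 0 stays at or below 0.735 of the maximum.

```python
import numpy as np

def rayleigh_resolved(psf, x, d, threshold=0.735):
    """Superpose two copies of psf(x) shifted by +/- d/2 and report whether
    the relative dip at x = 0 satisfies the generalized Rayleigh criterion."""
    profile = psf(x - d / 2) + psf(x + d / 2)
    dip = profile[np.argmin(np.abs(x))]        # value at x = 0
    return dip / profile.max() <= threshold

# Stand-in PSF: a Gaussian of 170 nm FWHM (an assumption, not the real partial PSF)
sigma = 0.170 / 2.355
psf = lambda x: np.exp(-x**2 / (2 * sigma**2))
x = np.linspace(-1.0, 1.0, 4001)               # positions in µm

for d in (0.17, 0.18, 0.19, 0.21):
    print(f"d = {d*1e3:.0f} nm resolved: {rayleigh_resolved(psf, x, d)}")
```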
A more general concept in this context treats the irregularities related to an object as point sources emitting spherical waves. With the point spread function (PSF) obtained from the MTF by an inverse 2D Fourier transform, this leads to the Rayleigh resolution criterion for a conventional brightfield microscope. Moreover, the lateral resolution of a conventional brightfield microscope can be improved by use of an annular aperture. As pointed out in a previous paper [20], this kind of resolution enhancement is closely related to the concept of partial transfer functions in interference microscopy. The normalized inverse 2D Fourier transform of a partial transfer function of rotational symmetry, i.e. H_P(q_x, q_y) = H_P(q_ρ), leads to what we call a partial PSF, with J_0(·) the zero-order Bessel function of the first kind. In order to achieve an optimum lateral resolution we take the partial transfer function at q_z = q_z,min. In figure 8(a) this partial PSF is compared to the Airy disc and the PSF for a narrow annular ring aperture of maximum diameter. Compared to the PSF for this annular aperture the partial PSF is slightly narrower, but shows stronger side lobes. This is due to the fact that the partial PSF directly corresponds to the inverse 2D Fourier transform of the partial transfer function, whereas the PSF of a brightfield microscope is related to intensity and thus to the absolute square of the inverse 2D Fourier transform of a circular ring. Figures 8(b) and (c) represent superpositions of two PSFs laterally separated by d = 192 nm (b) and 210 nm (c). For d = 192 nm the partial PSF h_P(x) fulfills the generalized Rayleigh criterion, i.e. the value at x = 0 is 0.735 times the maximum value [25], whereas for the PSF of the annular aperture a distance d = 210 nm is needed in order to fulfill the Rayleigh criterion. Note that the Rayleigh resolution obtained from the Airy disc is 298 nm for an NA of 0.9 and a wavelength λ_0 of 440 nm. For comparison, the Sparrow criterion [12] leads to a lateral resolution of 164 nm for the partial PSF, 172 nm for the annular PSF, and 230 nm for the PSF given by the Airy disc. These results confirm the theoretical assumption according to which the two-point resolution is superior to the Abbe resolution limit for the corresponding grating. Note that the experiments were conducted with a Linnik interferometer in a standard configuration. Only the evaluation wavelength for the lock-in phase calculation [42,43] was specifically adapted to values of 700 nm in figures 9(c) and (e) and 740 nm in figures 9(d) and (f).
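The construction of the partial PSF can be illustrated numerically. In the sketch below (our own, with an assumed flat annular pass band standing in for the real partial transfer function at q_z,min), the partial PSF follows from a Hankel-type transform with J_0 and is compared with the Airy intensity PSF for NA = 0.9 and λ_0 = 440 nm; only the outer cut-off 2π·2NA/λ_0 ≈ 25.7 µm⁻¹ is motivated by the text, and the inner edge is an assumption.

```python
import numpy as np
from scipy.special import j0, j1

lam, NA = 0.440, 0.9                      # wavelength in µm, numerical aperture
q_max = 2 * np.pi * 2 * NA / lam          # outer radial cut-off of the pass band
q_min = 0.6 * q_max                       # assumed inner edge of the annulus

def partial_psf(rho, q1=q_min, q2=q_max, n=2000):
    """Normalized inverse 2D Fourier transform of a rotationally symmetric,
    flat partial transfer function between q1 and q2 (Hankel-type sum)."""
    q = np.linspace(q1, q2, n)
    dq = q[1] - q[0]
    h = np.sum(j0(np.outer(rho, q)) * q, axis=1) * dq
    return h / h.max()

def airy_psf(rho):
    """Incoherent intensity PSF of a circular pupil (Airy pattern)."""
    v = 2 * np.pi * NA * rho / lam
    v = np.where(v == 0, 1e-12, v)
    return (2 * j1(v) / v) ** 2

rho = np.linspace(0, 1.0, 1001)           # radial coordinate in µm
h_partial, h_airy = partial_psf(rho), airy_psf(rho)
```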
Experimental results
Another experimental result is shown in figure 10. In this case, the sample was fabricated by a focused ion beam system. The individual structures consist of two craters 200 nm apart with a bar of less than 50 nm width in between. Figure 10 shows that the measured height values are smaller than the real surface heights. This effect has already been mentioned in earlier publications [23,24]. However, the physical origin of the side lobes is the sharp transition at the edges of the partial transfer function. Consequently, reducing the sharp edges of the transfer function by an appropriate apodization filtering in the spatial frequency domain may reduce the side lobes without significantly affecting the lateral resolution capabilities.
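A generic version of such an apodization can be sketched as follows (our own example; the pass band, taper width and raised-cosine shape are assumptions rather than the instrument's actual filter).

```python
import numpy as np

def raised_cosine_edges(q, q1, q2, taper):
    """Band-pass window between q1 and q2 whose sharp edges are smoothed
    by raised-cosine flanks of width `taper` (apodization)."""
    w = np.zeros_like(q)
    core = (q >= q1 + taper) & (q <= q2 - taper)
    rise = (q >= q1) & (q < q1 + taper)
    fall = (q > q2 - taper) & (q <= q2)
    w[core] = 1.0
    w[rise] = 0.5 * (1 - np.cos(np.pi * (q[rise] - q1) / taper))
    w[fall] = 0.5 * (1 - np.cos(np.pi * (q2 - q[fall]) / taper))
    return w

q = np.linspace(0, 30, 3000)                     # spatial frequency axis, µm^-1
H_sharp = ((q > 15) & (q < 25.7)).astype(float)  # sharp-edged pass band
H_apod = raised_cosine_edges(q, 15, 25.7, taper=2.0)
# Comparing the inverse transforms of H_sharp and H_apod shows weaker side
# lobes for the apodized filter at the cost of a slightly broader main lobe.
```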
The different evaluation wavelengths chosen in figures 9 and 10 elucidate the compromise between optimizing the lateral resolution and the signal-to-noise ratio of the interference signals. Note that the wavelength bandwidth of the light emitted by the blue LED is approximately 20 nm. Hence, the light is no longer monochromatic, which leads to a significant reduction of the values of H(q_x, q_y, q_z) at higher spatial frequencies q_x and q_z compared to the monochromatic case shown in figure 3 [21].
Although the experimental results obtained so far demonstrate the increase in resolution in a single lateral direction only, we expect the resolution enhancement to be isotropic, i.e. apart from aberrations of the optical system the lateral resolution enhancement is independent of the direction. This is due to the fact that the 3D TF shows rotational symmetry.
Conclusion
In this contribution the previously introduced three-dimensional transfer function of a CSI instrument is used to build a virtual interference microscope of high numerical aperture and to study its lateral resolution capabilities close to the resolution limit. Both amplitude and phase objects are examined. These objects are either optical gratings, two-point irregularities, or combinations of both. Hence, the corresponding criteria characterizing the lateral resolution capabilities are based either on the fundamental Abbe limit or on the Rayleigh criterion, which holds for two irregularities separated by a certain distance. It turns out that for grating structures the Abbe limit represents a fundamental limitation even in interference microscopy. On the other hand, if two points on an object are to be resolved, interference microscopy provides a significantly superior lateral resolution compared to conventional microscopy. This is due to the 3D transfer characteristics of an interference microscope, where the lateral resolution capabilities depend on the axial spatial frequency value, which is closely related to the wavelength at which an interference signal resulting from a so-called depth scan is analyzed. Furthermore, the electric field reflected from the object is the input quantity in interference microscopy, instead of the intensity as in conventional microscopy and other types of optical microscopes. The highest lateral resolution is achieved for the longest evaluation wavelength. In this case the lateral resolution of an interference microscope is approximately 36% better than the lateral resolution defined according to the Rayleigh criterion in conventional microscopic imaging with spatially incoherent illumination. This is a consequence of the fact that long evaluation wavelengths result from oblique angles of incidence. Thus, choosing a long evaluation wavelength in the signal analysis algorithm affects the lateral resolution in a similar manner as an annular aperture. At the lateral resolution limit the relevant partial transfer function of an interference microscope acts as a virtual annular aperture, since only oblique light rays contribute to the object reconstruction from a stack of interferograms. The results presented here give an advanced understanding of the physical mechanisms of interference microscopy. They can be applied to both phase and amplitude objects and demonstrate for the first time that the lateral resolution capabilities of an interference microscope surpass those of conventional brightfield microscopy.
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors. | 2023-01-17T19:32:49.668Z | 2023-01-11T00:00:00.000 | {
"year": 2023,
"sha1": "ea869c19cfb06ab48263572c8f857f250fd2d828",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/2515-7647/acb249/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "62afbd603c0f54de585dd5a4ec962f7ea48fd855",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
266928554 | pes2o/s2orc | v3-fos-license | Potential of Medicinal Plants in Pampang Village Samarinda As A Chemistry Learning Resource
This research aims to identify the potential of medicinal plants in Pampang Village, Samarinda, East Kalimantan, which can be used as a source of chemistry learning at the junior and senior high school levels. This research used a descriptive method with a qualitative approach. Data collection was carried out through observation, interviews, and documentation. The research subjects, namely 11 residents of Pampang Village, were determined using purposive sampling. Data analysis used interactive model analysis, and data validity was assessed using data triangulation techniques. The results of the research show that there are 20 medicinal plants widely used by the people of Pampang Village, and eight of these medicinal plants have the potential to be used as a source of chemistry learning. These medicinal plants were sambung nyawa, meniran, bajakah, betel, seri, turmeric, coconut, and tamarind eggplant. These plants relate to the topics of separating mixtures, reaction rates, functional groups, lipids, acid and base indicators, electrolyte and non-electrolyte solutions, acids and bases, chemical changes, and redox. Therefore, this can be used as a reference and consideration for chemistry teachers in the learning process to help students understand chemistry lessons, which are closely related to everyday life.
Introduction
Currently, many learning resources are used in the learning process, including books and the internet. The use of books and the internet in the learning process makes it very easy for students to get information; especially with the internet, information can be obtained in a short time and the search process is easy (Sasmita, 2020). However, this does not rule out the possibility that students still have difficulty understanding the lesson even though the teacher has conveyed it through these learning resources. One subject that is still very difficult for students to understand is natural science (science), which causes students' understanding of the learning concepts to be poor. One of the reasons science learning is considered difficult is the paradigm that students develop (Supriyadi & Nurvitasari, 2019). Science learning consists of several aspects: physics, biology, and chemistry (Wulandari, 2017). Chemistry is a natural science that studies the composition, structure, properties, and changes of matter (Suhendar, 2017). Chemistry itself is found in everyday life. Linking chemistry to everyday life in the learning process can help students create relationships and better understand learning material theoretically by applying science in everyday life (Simanjuntak et al., 2019).
East Kalimantan is one of Indonesia's provinces with cultural and biological diversity.The people of East Kalimantan greatly utilize this cultural and biological diversity in their
Results and Discussion
Pampang Village is a sub-district in Samarinda City, approximately 11 km from the city center. It is mostly inhabited by indigenous Dayak people, although several RTs (neighborhood units) are inhabited by people from other tribes. The people of Pampang Village still uphold their culture and their relationship with nature, as can be seen from the cultural performances usually held on holidays such as Saturdays and Sundays. Apart from that, many people still use various kinds of plants around them in their daily lives and grow crops in this village. The interviews with respondents yielded data on the plant types widely used by the people of Pampang Village. Plant data, community knowledge, and the chemical content of the plants are presented in Table 2. Cassava leaves are used as an anti-breast cancer agent.
The contents of cassava leaves include lysine, arginine, leucine, and isoleucine, which are essential amino acids. Cassava leaves also contain antioxidants and saponins. These ingredients can help the recovery of damaged cells and prevent premature aging (Muhammad et al., 2018).
Sungkai Leaves (Peronema canescens)
Sungkai leaves are widely used by the community to treat fever and malaria.
Sambung Nyawa Leaves (Gynura Procumbens)
Sambung nyawa leaves are used by the community to treat conditions such as high cholesterol.
The chemical contents of sambung nyawa leaves are saponins, tannins, flavonoids, and essential oils. These compounds can help reduce cholesterol in the blood (Oktaviani et al., 2019).
Meniran (Phyllanthus Urinaria)
Meniran is believed by the community to treat kidney stones and maintain endurance. Apart from that, the community also uses this plant to treat coughs and jaundice. The chemical compounds contained in the meniran plant are saponins, flavonoids, polyphenols, phyllanthin, hypophyllanthin, and potassium salts. These compounds increase the antioxidant activity of the plant (Tambunan et al., 2019).
Kumis Kucing
Kumis kucing is utilized by the community to treat various kinds of diseases such as hypertension and diabetes.
The chemical content of kumis kucing includes phenolic compounds such as isopimarane diterpenoids, flavonoids, benzochromene, and organic acid derivatives. These ingredients are diuretic and antioxidant, so they can help reduce hypertension and diabetes (Hasbi et al., 2019).
Soursop (Annona muricata)
The community believes that soursop aids digestion. Soursop is also believed to cure gout.
The chemical contents of soursop are calories, carbohydrates, protein, fat, calcium, iron, phosphorus, vitamin A, vitamin B1, and vitamin C. These compounds are antioxidants that can increase the body's endurance, slow down the aging process, and help prevent osteoporosis. The annonaceous acetogenins contained in soursop leaves can kill cancer cells (Elidar, 2017).
Galangal (Kaempferia galangal)
Many people use galangal for herbal medicine and consumption.
The chemical ingredients in galangal are essential oils, saponins, flavonoids, and polyphenols.These ingredients are beneficial for the body, one of which is to reduce sore throats (Mudaningrat & Nada, 2021).
Ginger (Zingiber officinale)
Ginger is used to treat coughs and to warm the body.
Bajakah (Spatholobus littoralis Hassk.)
The public uses it to treat various diseases, one of them being cancer.
The chemical content of bajakah includes phenolic compounds. These compounds have antioxidant activity, which is very necessary for curing cancer (Ayuchecaria et al., 2020).
Sweet potatoes (Ipomoea batatas)
Sweet potatoes can be used to control blood pressure. The chemical content of sweet potatoes includes flavonoids in the form of anthocyanins; with these compounds, sweet potatoes can be used as a healthy food to reduce cell damage in the body (Salim et al., 2017).
Benalu (Loranthus)
Benalu is used by the community to treat cancer and malignant tumors.
The chemical contents of benalu (mistletoe) are alkaloids, flavonoids, saponins, tannins, and steroids. Benalu has high antioxidant activity, so it is believed to help cure cancer and malignant tumors (Tarbiyah & Aceh, 2018).
Seri (Muntingia calabura)
Seri is usually consumed by the community as food.
The chemical constituents of seri are esters, alcohols, phenolic compounds, sesquiterpenoids, and furan derivatives. Apart from being consumed, this plant can be used to treat jaundice and gout (Handayani, 2020).
The chemical contents of putri malu are tannins, steroids, alkaloids (mimosine), triterpenes, flavonoids, glycosides, and C-glycosylflavones. The flavonoid compounds from putri malu leaves are anti-inflammatory, antioxidant, free-radical scavenging, and anti-allergic, while the phenolic compounds are hepatoprotective. These ingredients can help treat diabetic wounds (Lengkong et al., 2021).
Ciplukan leaves (Physalis)
Ciplukan leaves are used by the community to treat stress. Apart from that, this plant is believed to be able to treat diabetes.
Turmeric (Curcuma Longa)
People use turmeric to treat colds, heartburn, and sore throat.
The chemical ingredients found in turmeric are essential oils and curcuminoids. These compounds have antibiotic properties, so they can suppress or stop biochemical processes in an organism (Nobiola et al., 2020).
Coconut (Cocos nucifera)
Coconut can improve the body's endurance.
The chemical contents of coconut are auxin, cytokinin, and vitamins. These contents are antioxidant, so coconut can help increase the body's endurance (Nurman et al., 2017).
Moringa Leaves (Moringa oleifera)
People believe that Moringa leaves can be used in the health sector, namely to treat diabetes.
The chemical content of Moringa leaves is phytosterol, a source of beta carotene, vitamin C, iron, and potassium (Hamzah & Yusuf, 2019).
Curcuma (Curcuma zanthorrhiza)
Many people use this plant to treat the throat when it starts to feel uncomfortable.
Sour Eggplant (Solanum Ferox)
According to the community, sour eggplant can help maintain the body's immune system. Apart from that, this plant can be used to treat coughs, sore throats, asthma, fever, and vomiting.
The chemical contents of sour eggplant are flavonoids, soladin, and alkaloids. The soladin compound from Solanum species, a steroid extracted from the roots and leaves, functions as an anti-inflammatory antioxidant (Arief et al., 2019).
Table 3 presents the medicinal plants that can be used in chemistry learning, obtained by data reduction. This reduction is based on literature studies and the results of interviews conducted by the researchers with resource persons who are residents of Pampang Village.
Separation of mixtures
These plants are mostly processed by boiling them and then taking the boiled water. The filtration method is used in this process.
Meniran (Phyllanthus urinaria) Bajakah (Spatholobus littoralis hassk) Betel (Piper betle) Turmeric (Curcuma longa) Meniran (Phyllanthus urinaria)
Reaction rate: Meniran preparations are very diverse. Some preparations (capsules and the dried plant itself) can be used as an example for studying reaction rates by comparing how quickly the different dosage forms react in the body.
Functional groups
These plants contain chemical compounds whose functional groups can be studied.
Turmeric (Curcuma longa)
Acid and base indicators
Turmeric contains curcumin, which can act as a natural acid and base indicator.
Coconut (Cocos nucifera)
Electrolyte and non-electrolyte solutions: Coconut water contains ions and is therefore an electrolyte solution, so it can be used as a reference in the chemistry learning process.
Acid and Base
Sour eggplant has a characteristic sour taste so it can be used as an example in the topic of acids and bases.
Seri (Muntingia calabura)
Chemical changes: Seri fruit that is left in the open air will rot, which is a chemical change.
Oxidation-reduction reactions
Each of these medicinal plants contains antioxidants. Antioxidants are related to redox material because oxidation reactions occur when they enter the body, so the redox reactions that occur in these constituents can be studied. There are many chemistry topics at both the junior high school (SMP) and senior high school (SMA) levels that can be combined with the medicinal plants known to the people of Pampang Village. These topics are acids and bases, mixture separation, functional groups, acid and base indicators, electrolyte and non-electrolyte solutions, chemical changes, reaction rates, and redox reactions (Table 3). Material on the properties of acids and bases in solutions can be used in junior high school science learning through science investigation activities (Hidayat, 2023). Oxidation-reduction concept material in high school can be studied by utilizing natural materials in the environment (Tangio et al., 2023).
The first topics are acids and bases and acid-base indicators. Natural ingredients that can be used in this lesson are sour eggplant and turmeric. Tamarind (sour) eggplant has a characteristically sour fruit taste and is widely used as a traditional medicine and even as a flavoring in cooking.
This sour taste can be integrated with chemistry learning on acid and base material (Arief et al., 2019). Turmeric can then be used as a natural acid-base indicator. Turmeric contains curcumin, which provides a clear and fast color change: in an alkaline solution it turns brownish red, and in an acidic solution it is light yellow (Sundari, 2016).
The next topic is the separation of mixtures. Some examples of plants used in this lesson are sambung nyawa leaves, bajakah, seri, and turmeric. These natural ingredients are mostly used by boiling them so that the plant extracts come out. To drink the water, one of the mixture separation techniques taught in chemistry, filtration, can be used (Mashadi et al., 2018). Finally, the boiled water is separated from the plant material so it can be consumed. This can be taught to students to help them better understand the concept, because they can see or apply the method directly.
Functional groups can be studied using natural materials. Each natural material has constituents with functional groups. One example is turmeric, which contains curcumin (Nobiola et al., 2020). Other plant constituents are presented in Table 2. These constituents have functional groups that can be studied at school as examples of functional groups in everyday life. In chemistry lessons there are alkanes, alkenes, and alkynes, as well as other functional groups studied in high school. These natural materials can make it easier for students to absorb the chemistry material taught at school because there are examples in everyday life.
Next is the subject of chemical changes. Plants that can be used in this lesson include the seri. People usually consume seri fruit directly. If this fruit is left in the open air, it will rot. This rotting means that the fruit undergoes a chemical change: the properties of the fruit cannot be returned to their original state and new substances are produced (Iskandar & Kusmayanti, 2018). The people of Pampang Village also consume many coconuts in their daily lives. The electrolyte content of coconut water makes coconut an example of electrolyte and non-electrolyte solutions in everyday life (Rokana & Khusbana, 2018).
Next is macromolecular material, namely lipids. Many people use betel leaves for treatment. Apart from betel leaves, there is also seri fruit with a similar benefit, namely lowering cholesterol. According to research by Rangkuti et al. (2018) and Tulung et al. (2017), betel leaves and seri can help reduce cholesterol levels. Cholesterol is related to fat in the macromolecule material taught at school. Therefore, betel leaves and seri, which are used to reduce cholesterol levels in the body, can be a link between natural ingredients and fat or lipid material, so that students can find out examples of natural ingredients that affect fat in the body.
The following chemistry topic is the reaction rate. Reaction rates are also one of the chemistry subjects that can use the natural materials mentioned above, one of them being the meniran plant. Usually, people dry this plant and then consume it as tea; however, meniran has also been processed into a capsule dosage form. The application of this reaction rate material is that drugs in powder or tea dosage form can react more quickly than in capsule form due to their larger surface area (Fajriati et al., 2017). These medicinal plants contain antioxidants, which are substances that protect cells from damage caused by free radicals. Antioxidants are related to redox material because oxidation reactions occur when they enter the body. From the explanation above, the potential of Pampang Village's medicinal plants can be used as content and media for chemistry learning based on local wisdom. For example, sour eggplant and turmeric can be used as learning resources in junior high school, while betel leaves and seri, used in people's lives to lower cholesterol, can be used as a reference for learning chemistry. The compounds contained in betel leaves and seri can help reduce bad fats in the body.
"year": 2024,
"sha1": "11ac73659ea63586cab2a2cea87a420220b92427",
"oa_license": "CCBYSA",
"oa_url": "https://e-journal.undikma.ac.id/index.php/pedagogy/article/download/6172/5320",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "99e2aadffd2147ced2730198051d936c40864c62",
"s2fieldsofstudy": [
"Education",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": []
} |
191143943 | pes2o/s2orc | v3-fos-license | Effect of sub-effective dose of GABA agonists on attenuation of morphine tolerance in rats: Behavioral and electrophysiological studies
GABAergic drugs can change the analgesic effect of morphine. Wide dynamic range (WDR) neurons play an important role in pain transmission and their behavior may change in morphine tolerance. In this study, the behavior of WDR neurons in morphine-tolerant rats and in rats treated with GABA agonists was recorded to elucidate the effect of morphine and GABA agonists on WDR behavioral changes. Rats were divided into 4 groups: 1. control, 2. morphine tolerance (MT), 3. MT + muscimol, 4. MT + baclofen. To induce morphine tolerance, rats received morphine sulfate 10 mg/kg intraperitoneally for 8 days. In the treatment groups, GABA agonists were injected on days 1, 3, 5 and 8 before the injection of morphine. The formalin test was used to confirm that morphine tolerance had been induced. Extracellular single unit recording was used to record spinal WDR neurons. The results showed that chronic administration of morphine failed to attenuate formalin pain, whereas GABA agonists improved the analgesic effect of morphine.
Introduction
Morphine is an effective drug for improving acute and chronic pain syndromes. Especially in chronic pain, it can attenuate pain, improve patients' lives and increase life expectancy in people with chronic pain (Lilius et al., 2017). Although morphine is the most efficient drug in chronic pain, its chronic administration leads to the development of tolerance to its anti-nociceptive effect. Tolerance limits the use of opioids, and therefore co-administration of other drugs with chronic administration of morphine is considered (Mansouri et al., 2015). After several decades of study on morphine tolerance, it is still not understood very well. One of the most likely causes of tolerance is alterations in neurotransmitter levels (Mehrabadi and Karimiyan, 2018). There are several reports about alterations of the GABA neurotransmitter in opioid tolerance and dependence (Dobashi et al., 2010; Hull et al., 2013). Recently, behavioral and molecular studies showed that GABA agonists could be used in the treatment of opioid tolerance and improve the alteration of GABA levels in morphine tolerance and dependence (Vorma and Katila, 2011). Systemic administration of GABA agonists attenuated the withdrawal syndrome induced by naloxone (Belozertseva and Andreev, 2000). GABA has an important role in the pain modulatory pathway that is activated by morphine and affects the spinal cord (Lueptow et al., 2018). The main center of action of morphine is the dorsal horn of the spinal cord (Haugan et al., 2008). There is no investigation that clarifies the effect of morphine tolerance and the effect of GABA agonists on electrophysiological changes at the spinal cord level. In the present study, a sub-effective dose of GABA agonists that did not induce analgesia was determined in order to investigate the effect of GABA on morphine analgesia during chronic administration. The effect of morphine tolerance on the behavior of WDR neurons, one of the most important neuron types involved in pain transmission in the spinal cord, was then investigated. Moreover, the effect of different GABA agonists on the electrophysiological changes of WDR neurons was investigated to understand the possible effect of morphine tolerance and co-administration of GABA agonists with morphine at the spinal cord level.
Materials and methods
Animals
Sixty-four male Wistar rats weighing 200-250 g were used in this study (n=8 per group). They were housed four per cage under a 12 h light/dark cycle in a room with controlled temperature (22±1 °C). Food and water were available ad libitum.
Protocol
First, to find the sub-effective dose of the GABA agonists, three different doses (0.5, 0.75 and 1 mg/kg; i.p.) were tested as single injections in normal rats before the formalin test. Baclofen and muscimol (0.5 mg/kg, i.p.) had no analgesic effect in normal rats in the formalin test; thus, this dose was selected for the current study to evaluate the effect of the GABA agonists on morphine analgesia. Animals were divided into 4 groups: (i) a control group that received saline; (ii) a morphine tolerance group that received morphine once every day for 8 days; (iii) a morphine tolerance group injected with muscimol as a GABAA agonist (0.5 mg/kg; i.p.); and (iv) a morphine tolerance group injected with baclofen as a GABAB agonist (0.5 mg/kg; i.p.). There were eight rats in each group. The behavioral test (formalin test) was performed on the 8th day in the four groups. In the electrophysiological part, there were 4 groups following the same protocol as in the behavioral test, but on the 8th day, instead of the formalin test, the animals were anesthetized, the spinal cord was exposed and the behavior of WDR neurons was recorded.
Drug administration
To induce tolerance to analgesic effects, morphine hydrochloride (Temad, Iran) was chronically administered at a daily dose of 10 mg/kg, i.p., from days 1 to 8 (Hill et al., 2016).
To determine the effect of GABA agonists on the development of morphine tolerance, muscimol (Sigma-Aldrich, USA) was used as the GABAA agonist and baclofen (Zahravi CO, Iran) as the GABAB agonist; they were administered i.p. at a dose of 0.5 mg/kg on days 1, 3, 5 and 8, thirty minutes before the injection of morphine. The nociceptive (formalin) and electrophysiological tests were then performed on day 8. All drugs were dissolved in physiological saline.
Anti-nociceptive test
The rats were placed individually in an open Plexiglas chamber (bowl-like cage, 40×35 cm) with a mirror angled at 45° positioned behind it to allow an unobstructed view of the paws by the observer. The animals were habituated to the observation chamber for 30 min prior to the experimental sessions. Formalin (50 μl) was injected s.c. into the plantar surface of the rat hind paw (left or right, counterbalanced across each treatment group) using a 27-gauge needle. After injection, rats were immediately returned to the observation chamber and formalin-induced behaviors were recorded continuously for 60 min by a trained observer. Formalin injection produced characteristic behaviors consisting of flinching and licking/biting of the injected paw. These behaviors were quantified based on pain severity. The nociceptive responses were scored every 15 s as follows: 0 (the injected paw is placed on the floor), 1 (the injected paw rests lightly on the floor and little or no weight is placed on it), 2 (the injected paw is elevated and not in contact with any surface), and 3 (the injected paw is licked, bitten, or shaken) (Roca-Vinardell et al., 2018). The total nociceptive score was expressed as the percentage of the area under the curve (AUC) over 0-5 min for phase I and 15-60 min for phase II. One-way ANOVA was used to evaluate the significance of differences between groups.
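A minimal Python sketch of this scoring and AUC computation is given below (our own illustration with random placeholder scores; expressing the AUC as a percentage of the maximum possible area is an assumption, since the exact normalization is not specified here).

```python
import numpy as np

# Nociceptive scores (0-3) recorded every 15 s for 60 min -> 240 samples.
# Random placeholder values; real data would come from behavioral scoring.
scores = np.random.randint(0, 4, size=240)
t = np.arange(240) * 15.0 / 60.0          # time of each sample in minutes

def phase_score(scores, t, t_start, t_end):
    """Area under the score-time curve within a phase (rectangle rule),
    expressed as a percentage of the maximum possible area (score 3)."""
    mask = (t >= t_start) & (t < t_end)
    auc = np.sum(scores[mask]) * (15.0 / 60.0)
    auc_max = 3.0 * (t_end - t_start)
    return 100.0 * auc / auc_max

phase1 = phase_score(scores, t, 0, 5)     # phase I: 0-5 min
phase2 = phase_score(scores, t, 15, 60)   # phase II: 15-60 min
```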
Electrophysiology study
Extracellular single unit recording was performed (n=8) on day 8 after morphine tolerance was induced. Animals were anesthetized with 2.0-2.5% isoflurane (66% N2O and 33% O2). The rat was placed in a stereotaxic frame and a laminectomy was performed over the T13-L1 region of the spinal cord. A tungsten electrode (Friedrick Hear & CO., Bowdoinham, ME, USA) was lowered into the dorsal horn while receptive fields on the ipsilateral hind paw were stimulated. Extracellular single unit activity was recorded from neurons at depths of 500-1000 μm from the surface of the dorsal horn. Recorded signals were amplified by a data acquisition system (Science Beam CO., Tehran, Iran) and continuously captured on a Pentium 4 computer using the e-Probe 1-42 software (Science Beam CO., Tehran, Iran). The signals were filtered using a bandwidth of 300-3000 Hz. The stored digital spikes for each stimulus were counted in 1 ms bins using the e-Probe spike software (Science Beam CO., Tehran, Iran) to build post-stimulus time histograms. The responses of the different fibers were separated according to their latencies (Aβ-fibre 0-20 ms; Aδ-fibre 20-90 ms; and C-fibre 90-300 ms). Responses that occurred after the C-fibre latency band were characterized as post-discharge (300 to 800 ms). Wind-up was calculated as the total number of action potentials evoked by the 16 stimuli delivered at three times the C-fibre threshold, minus the input multiplied by 16; the input is the number of action potentials at C-fibre latency elicited by the first electrical stimulus. The WDR neurons were identified by the depth of the microelectrode and the characteristic response profiles of the neurons. After characterization of the neuron by means of natural stimuli, 16 electrical pulses (0.5 Hz, 2 ms wide) were applied via needles inserted into the center of the receptive field in the rat paw. This provided a constant, reproducible test stimulus for the experiment. Stimulation was applied at 3 times the threshold current for C-fiber activation and a post-stimulus histogram (PSTH) was built and displayed by the e-Probe software (Science Beam, Iran). From the PSTH, the C-fiber evoked response could be separated by latency and threshold from the Aβ and Aδ responses, post-discharge, input spikes and wind-up activity, and then quantified. One-way ANOVA was then used to evaluate the significance of differences between groups.
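The latency-window analysis described above can be sketched in a few lines of Python (our own illustration, not the e-Probe software); the spike-time lists, bin size and the exact window used for the wind-up calculation (C-fibre latency plus post-discharge, 90-800 ms) are assumptions based on our reading of the text.

```python
import numpy as np

BIN_MS = 1.0
WINDOWS = {"A_beta": (0, 20), "A_delta": (20, 90),
           "C": (90, 300), "post_discharge": (300, 800)}

def psth(spike_times_per_stim, n_bins=800):
    """Post-stimulus time histogram: spike times (ms after each stimulus)
    from all 16 stimuli are pooled into 1 ms bins."""
    counts = np.zeros(n_bins)
    for spikes in spike_times_per_stim:
        idx = (np.asarray(spikes) / BIN_MS).astype(int)
        idx = idx[(idx >= 0) & (idx < n_bins)]
        np.add.at(counts, idx, 1)
    return counts

def fibre_counts(spike_times_per_stim):
    """Total evoked spikes in each latency window across all stimuli."""
    out = {}
    for name, (lo, hi) in WINDOWS.items():
        out[name] = sum(np.sum((np.asarray(s) >= lo) & (np.asarray(s) < hi))
                        for s in spike_times_per_stim)
    return out

def wind_up(spike_times_per_stim, lo=90, hi=800):
    """Wind-up as described in the text: total spikes at C-fibre latency and
    beyond over the 16 stimuli minus 16 times the input (the response to the
    first stimulus in the same window)."""
    totals = [np.sum((np.asarray(s) >= lo) & (np.asarray(s) < hi))
              for s in spike_times_per_stim]
    return sum(totals) - 16 * totals[0]
```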
Behavioral study
1. Administration of formalin into the rats' hind paws induced biphasic pain behaviors (licking/biting/shaking) observed between 0-5 min (first phase) and 15-60 min (second phase), with no responses recorded between 5 and 15 min (silent phase). 2. Administration of morphine (10 mg/kg daily i.p. for 8 days) induced tolerance to the analgesic effects of morphine. There was no significant difference between the group that received chronic administration of morphine and the control group in either phase of the formalin test. However, a single dose of morphine in non-tolerant rats provided a strong analgesic effect against the nociceptive stimulus in both phases of the formalin test (Figure 1) (P<0.0001). 3. Single-dose administration of muscimol or baclofen (0.5 mg/kg) did not decrease formalin-induced pain in the control group (Figure 2), but repeated administration of muscimol or baclofen significantly reduced pain in both phases of the formalin test and delayed the development of tolerance to the analgesic effects of morphine, as measured by the formalin test in male tolerant rats (Figure 3) (P<0.001). Pretreatment with muscimol or baclofen 30 min before morphine injection on days 1, 3, 5 and 8 strongly influenced morphine tolerance over the 8 days, delayed the development of morphine tolerance, and no tolerance developed during the experiment. There were no significant differences in the antinociceptive effect of morphine between the morphine-tolerance group and the control group in both phases of the formalin test, but significant differences were found between the morphine-tolerance group and the non-tolerant one (n=6, mean±SEM; ** p<0.01, **** p<0.0001). Statistical analyses were performed using one-way ANOVA and Bonferroni correction tests. Figure 2. The effect of GABA agonists on morphine tolerance. There are significant differences between the baclofen and muscimol groups and the morphine tolerance and control groups (n=6, mean±SEM; ** p<0.01, **** p<0.0001). GABA agonists reduced pain in both phases of the formalin test (baclofen + morphine-tolerant group vs. morphine tolerance group, and muscimol + morphine-tolerant group vs. morphine tolerance group). Statistical analyses were performed using one-way ANOVA and Bonferroni correction tests.
Electrophysiology study
1. The electrophysiological studies of dorsal horn WDR neurons in morphine-tolerant male rats on day 8 showed that Aβ-fiber evoked responses were not significantly different from the control group, whereas Aδ- and C-fiber evoked responses to electrical stimuli applied to the receptive fields of WDR neurons in the hind paw were higher in the morphine-treated group in comparison to the control group (p<0.01). Moreover, a comparison of WDR neurons in the morphine tolerance and control groups showed significant increases in post-discharge (P<0.01), input spikes (P<0.05), and wind-up spikes (P<0.01) (Figure 4). 2. The effect of administration of GABA agonists in morphine-tolerant rats was analyzed. The results showed significant decreases in Aδ-fiber (P<0.01) and C-fiber (P<0.01) mediated transmission to WDR neurons, as well as in post-discharge (P<0.01), input spikes (P<0.05) and wind-up spikes (P<0.01), in comparison to the morphine tolerance group (Figure 5). Wind-up, input spikes and post-discharge, which reflect the excitability of WDR neurons, increased in comparison to the control group, showing that chronic administration of morphine increases the activation of WDR neurons. The GABA agonist groups significantly attenuated the induction of wind-up, input spikes and post-discharge, although they could not return these parameters to the baseline state. When baclofen or muscimol alone (0.5 mg/kg) was given as a bolus injection in non-tolerant rats, there was no significant change in Aδ- and C-fiber evoked responses compared to the control group, showing that GABA agonists at this dose have no analgesic effect in non-tolerant rats, whereas when they were used with chronic injection of morphine they decreased Aδ- and C-fiber evoked responses, increased the antinociceptive effect of morphine and prevented the development of morphine tolerance.
Figure 4 (panels: control group, morphine tolerance group). The PSTH of WDR neurons to electrical stimuli in the morphine tolerance and control groups. The responses evoked by the different fibres were quantified on the basis of latency measurements (Aβ-fibre, Aδ-fibre, and C-fibre). Results are presented as mean±SEM (n=10). The Aδ- and C-fibre transmission onto WDR neurons was increased in morphine-tolerant rats compared to the control group, as well as the PD, IS and wind-up of WDR neurons (*P<0.05, **P<0.01). Figure 5. GABA agonists attenuate hyperactivity of WDR neurons in morphine-tolerant rats. GABA agonists (0.5 mg/kg) inhibited Aδ- and C-fibre mediated transmission onto WDR neurons (n=10) compared to morphine-tolerant rats (n=10). GABA agonists also had a significant inhibitory effect on WDR neuronal post-discharge (PD) and inhibited wind-up, a potentiated response mediated by nociceptive C-fibre activity. GABA agonists had no effect on Aβ-evoked responses, since there was no difference in Aβ-evoked responses between the control and morphine tolerance groups (*P<0.05, **P<0.01).
Discussion
In the present study, it was shown that chronic administration of morphine (10 mg/kg i.p. for 8 days) caused tolerance to the antinociceptive effects of morphine during the formalin test. It has been reported that chronic intraperitoneal administration of morphine induces morphine tolerance in the tail flick and hot plate tests (Javan et al., 2003; Sepehri et al., 2010). The present data also indicated that acute administration of morphine (10 mg/kg, i.p.) in rats has an analgesic effect during the formalin test. Also, the results of the current study showed that i.p. administration of GABA agonists augments the chronic anti-nociceptive effects of morphine and blocks tolerance in the formalin test. GABA agonists suppressed the induction of wind-up of Aδ- and C-fiber evoked responses in single WDR spinal neurons and attenuated the augmented activation of WDR neurons in morphine-tolerant male rats after 8 days of morphine tolerance induction. Other studies confirm the present results. Using spinal cord extracellular single unit recording, another study showed that morphine tolerance increases C-fiber evoked activity and the induction of LTP (Haugan et al., 2008). It has been shown that GABA has an important role in the development of morphine tolerance (Zarrindast and Mousa-Ahmadi, 1999). The interaction between opioids and GABA is a very interesting subject and has been studied in different models of morphine tolerance and dependence (Bannister et al., 2011; Zeng et al., 2006). There are controversial results regarding GABA effects in different parts of the CNS in morphine tolerance. It is established that GABA agonists can augment the anti-nociceptive effect of morphine by reducing the dopamine neurotransmitter in the mesolimbic system (Zarrindast and Moghaddampour, 1991). In this study, it was shown that GABA agonists below the effective dose (0.5 mg/kg) can delay the induction of morphine tolerance by acting on the spinal cord through a reduction of the hyperactivity of WDR neurons in morphine tolerance. GABA also reduced the induction of wind-up in WDR neurons by attenuating C-fiber and Aδ-fiber activity, and the reduced input spikes and post-discharge activity showed that the activity of WDR neurons decreased. In behavioral and clinical studies, it was found that chronic use of morphine causes the phenomenon of hyperalgesia (Hay et al., 2009). Studies on opioid tolerance and opioid-induced hyperalgesia have identified neuroplasticity alterations in the CNS. The most important site of opioid action is the dorsal horn of the spinal cord. Interestingly, opioid-induced pain and tolerance to opioid analgesia seem to share the same mechanisms with abnormal pain after peripheral nerve injury (neuropathic pain) (Mao et al., 1995). Both states are associated with a reduction of the anti-nociceptive effect of opioids and may be reversed by spinal GABA agonists through a decrease in the induction of wind-up in morphine tolerance (Ji et al., 2003). Also, previous studies showed that in neuropathic pain models, WDR neurons show behavior similar to that observed in the morphine tolerance model used in this experiment. Based on the present results, this study emphasizes that one of the reasons for the development of hyperalgesia may be the hyperactivity of WDR neurons, which facilitates the induction of wind-up in morphine-tolerant rats.
This also indicates that in the model of morphine tolerance, neuroplastic changes occur in the spinal cord that can be similar to many models of neuropathic pain, and that long-term administration of morphine not only fails to reduce pain but also disrupts the pain pathway and consequently induces hyperalgesia. Studies have also indicated that both GABA agonists can attenuate hyperalgesia in neuropathic pain and in morphine tolerance in behavioral studies (Patel et al., 2001; Eaton et al., 1999; Cohen and Mao, 2014). In this study, it is shown for the first time that GABA agonists can decrease the hyperactivity of WDR neurons, demonstrating the role of GABA agonists at the spinal level in morphine tolerance, which has not been studied before. Both GABA agonists were used to show that activation of both receptor types in the spinal cord can help to reduce morphine tolerance. For a better understanding of the neuroplasticity changes in the morphine tolerance model, more molecular and electrophysiological studies on different types of morphine tolerance and in different sections and levels of the spinal cord are suggested. In conclusion, the results of this study indicate that administration of GABA agonists is an effective way of attenuating the development of morphine tolerance, and the underlying mechanism is a reduction of WDR neuron hyper-responsiveness.
Conclusion
This study provides a new way of preventing the development of morphine tolerance during long-term administration of morphine by using GABA agonists, and a general understanding of the development of morphine tolerance and of the effect of chronic use of morphine, alone and together with GABA agonists, on WDR neuron behavior in the spinal cord.
"year": 2019,
"sha1": "bf14b120da792e2637f9e12b68e75ebe6cd0a674",
"oa_license": null,
"oa_url": "http://www.ijabbr.com/article_35425_6f3f9cf896ff94bdd5b23d023dd3a554.pdf",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "a3a208cd92eecb193ee7f770823aece1da1a6d2d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234468455 | pes2o/s2orc | v3-fos-license | Effect of Nutrients on Growth, Flowering and Yield of African Marigold (Tagetes erecta L.) cv. Pusa Basanthi at Hadauti Region
The experiment was laid out in a Randomized Block Design with 10 treatments replicated three times: T1 - control, T2 - 1% urea spray, T3 - 2% urea spray, T4 - 3% urea spray, T5 - 0.2% MgSO4, T6 - 0.4% MgSO4, T7 - 0.6% MgSO4, T8 - 0.2% boron spray, T9 - 0.4% boron spray, and T10 - 0.6% boron spray, to evaluate the effect of nutrients on African marigold growth, flowering attributes and yield characters. Among all the treatments, urea (2%) resulted in the maximum plant height (55.56), plant spread (40.30), number of branches (25.53), earliest flower bud initiation (57.16), first flower opening (93.74), minimum flowering time (36.90), number of flowers per plant (53.63), length of flower stalk (7.60), flower weight (12.74), flower weight per plant (53.63), and flower yield per hectare (348.64) in this experiment, which can be used to increase soil fertility and crop production in sustainable agriculture.
INTRODUCTION
Marigold (Tagetes erecta Linn.) is one of India's most important commercial flower crops, belonging to the family Asteraceae (Compositae). The common name "marigold", originating from "Mary's Gold", is associated with Christian legends of the Virgin Mary. It originated in Mexico and Central and South America. The marigold is one of the most widely cultivated flowers in India because of its ease of cultivation, wide adaptability to varying soil and climatic conditions, long flowering period, and attractive flower colours of outstanding quality. Marigold can be cultivated in all seasons, i.e. rainy, winter and summer, and the major crops in eastern U.P. are the rainy and winter season crops, for which seedlings are transplanted in July-August and September-October, respectively, while the summer season crop is transplanted between February and March. Marigold is primarily cultivated in India, tropical Africa, Sri Lanka and Madagascar. India accounts for about 15 percent of the traditional flower area in the world (Jawahar, 2004).
Tamil Nadu, Karnataka, Andhra Pradesh, Maharashtra, West Bengal, Orissa, Delhi, U.P. and Uttarakhand are the major marigold-producing states. The marigold is one of the most popular flowers in our country and is widely used in various forms for religious and social functions. Flowers are sold on the market as loose flowers or as garlands. Nitrogen is an essential factor for plant growth and production: it is central to metabolic activity and energy transformation and is necessary for the metabolism of protein and other biochemical products such as nucleic acids, chlorophyll and protoplasm. Foliar application of magnesium sulphate is a means of improving the nutritional status of crops under deficiency conditions. Magnesium is a constituent of chlorophyll and polyribosomes and a carrier of P in plants, especially during seed formation in crops with high oil content; it promotes oil and fat formation, starch translocation and catalytic action, and foliar application of Mg has been shown to increase the chlorophyll concentration and vegetative yield of plants. One function of boron is the preservation of cell wall integrity by binding to pectic polysaccharides.
Boron is involved in plant processes such as sugar translocation and membrane permeability, leaf photosynthesis, leaf expansion and differentiation, cell wall biosynthesis, nitrogen fixation, and protein, amino acid and nitrate metabolism. It also has a strong effect on flower development, pollen germination, fertilisation, seed growth, and fruit abscission. Boron is an essential element found in the meristematic regions of the plant such as root tips, emerging leaves and buds. Keeping in view the role of these nutrients, the present investigation was conducted to assess the effect of urea, magnesium sulphate and boron sprays on marigold.
MATERIALS AND METHODS
The experiment was carried out at the School of Agriculture Sciences, Career Point University, Kota, Rajasthan, India. It was laid out in a Randomized Block Design with 10 treatments replicated three times: T1 - control, T2 - 1% urea spray, T3 - 2% urea spray, T4 - 3% urea spray, T5 - 0.2% MgSO4, T6 - 0.4% MgSO4, T7 - 0.6% MgSO4, T8 - 0.2% boron spray, T9 - 0.4% boron spray, and T10 - 0.6% boron spray. Thirty-day-old seedlings of African marigold (Tagetes erecta L.) cv. Pusa Basanthi Gainda were transplanted at a spacing of 40 x 40 cm. Observations on vegetative growth characters, viz. plant height, plant spread and number of branches; flowering characters, viz. days taken for first flower bud initiation, days taken for first flower opening, flowering duration, flower stalk length (cm), flower diameter (cm) and number of flowers per plant; and yield characters, viz. flower weight (g), flower yield per plant (g) and flower yield per hectare (q), were recorded on five randomly selected plants in each replication. The data were analysed by the method proposed by Fisher and Yates (1949).
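For readers who wish to reproduce this kind of analysis, a minimal Python sketch of a randomized block design ANOVA is shown below (our own illustration with simulated placeholder data; statsmodels is used here instead of the manual Fisher and Yates procedure).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative data frame: one row per plot, with the treatment (T1..T10),
# the replication (block), and an observed trait such as plant height.
data = pd.DataFrame({
    "treatment": [f"T{i}" for i in range(1, 11)] * 3,
    "block": ["R1"] * 10 + ["R2"] * 10 + ["R3"] * 10,
    "plant_height": np.random.normal(50, 3, 30),   # placeholder values
})

# Randomized block design ANOVA: treatment and block as categorical factors.
model = ols("plant_height ~ C(treatment) + C(block)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```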
RESULTS AND DISCUSSION
The results presented in Table 1 show that plant height was significantly influenced by the foliar application of nutrients. Plant height at 75 DAT was significantly the highest (55.56 cm) with T3 (2 per cent urea spray) and was found at par with boron (0.4 per cent) and boron (0.6 per cent).
The effect of urea on plant growth is attributed to nitrogen being central to metabolic activity and energy transformation and essential for the metabolism of protein and other biochemical products such as nucleic acids, chlorophyll and protoplasm, while boron is involved in plant processes such as sugar translocation and membrane permeability, leaf photosynthesis, cell elongation and division, cell wall biosynthesis and nitrate metabolism. Similar findings were also reported by Dashora et al. (2004) in China aster, Jat and Gupta (2007) in African marigold, and Kakade et al. (2009) and Reddy and Chaturvedi (2009) in gladiolus.
The significantly maximum plant spread (40.30 cm) was recorded with the foliar application of urea (2 per cent), which was found at par with urea (1 per cent), MgSO4 (0.4 per cent), MgSO4 (0.6 per cent), boron (0.4 per cent) and boron (0.6 per cent), while the lowest plant spread was recorded in the control. Urea caused elongation of the internodes, increasing plant height and thereby the total number of latent buds from which primary branches originated, resulting in greater plant spread. These findings are in close accordance with the results of Nagaich et al. (2003) in China aster and Kakade et al. (2009) in marigold. The maximum number of branches was recorded with urea (2 per cent), which was found to be at par with urea (1 per cent), urea (3 per cent), MgSO4 (0.2 per cent) and MgSO4 (0.4 per cent). Khan (2000) reported the maximum number of branches per plant in dahlia with Zn @ 4 per cent, boron @ 0.2 per cent and Mn @ 0.2 per cent treatments compared with the control, and Mathew et al. (2004) and Kumar et al. (2010) reported an increased number of main branches in marigold.
The earliest bud initiation and flowering were observed with the application of urea (2 per cent), followed by boron (0.6 per cent), which shortened the juvenile period, while the maximum days taken to bud initiation were noted with the application of MgSO4 (0.4 per cent). The role of nitrogen is to trigger the meristematic activity of plants: both cell division and cell enlargement are accelerated by an adequate nitrogen supply. Early flowering and maximum flower diameter with nitrogen application were recorded by Acharya et al. (2004) in Tagetes erecta, Singh et al. (2004) in gladiolus and Muthumanickam et al. (1999) in gerbera.
Nitrogen was found most effective in extending the flowering duration, particularly urea (2 per cent) followed by urea (3 per cent) and urea (1 per cent), which may be due to the advanced stage of flowering in marigold. The results are similar to those of Muktanjali et al. (2004) in gladiolus and Jauhari et al. (2005) in marigold. The maximum length of flower stalk was recorded with the 2 per cent foliar urea spray. Yadav et al. (2000) also observed increased pedicel length with nitrogen in African marigold. The data presented in Table 1 clearly show that the maximum number of flowers per plant was recorded with foliar application of urea (2 per cent) followed by MgSO4 (0.4 per cent). The increase in the number of flowers per plant may be attributable to the development of a large number of laterals at the early stage of growth, which had ample time to accumulate carbohydrates for proper differentiation of flower buds due to the improved reproductive ability of the plant. The result is in close conformity with the findings of Barman et al. (1993) and Kumar et al. (2009) in chrysanthemum. The maximum flower diameter was found with urea (2 per cent), followed by MgSO4 (0.4 per cent), while the minimum flower diameter was recorded in the control. It is evident from the data presented in Table 1 that the flower weight and flower yield per hectare were significantly the highest with foliar application of urea (2 per cent). An increase in flower weight with increasing levels of nitrogen was observed by Yadav et al. (2000), who obtained the maximum weight with the application of 180 ppm nitrogen in marigold. The results are consistent with the findings of Bhattacharjee et al. (1992) in rose and Kumar et al. (2010) in marigold.
"year": 2020,
"sha1": "1e8811d9b7aebc91dd8aeeddd8c638a2b235cd90",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18782/2582-2845.8575",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1e8811d9b7aebc91dd8aeeddd8c638a2b235cd90",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
257660710 | pes2o/s2orc | v3-fos-license | Work-family conflict and its related factors among emergency department physicians in China: A national cross-sectional study
Background Work-family conflict is common among emergency department physicians. Identifying the factors associated with work-family conflict is key to reducing its negative impact on mental health and work attitudes. However, the work-family conflict of Chinese emergency department physicians and the related factors have been scarcely studied. Objective This study aimed to investigate the current status and related factors of work-family conflict among Chinese emergency department physicians. Methods A national cross-sectional study was conducted among emergency department physicians in China from June 2018 to August 2018. A standard questionnaire was used to investigate the demographic characteristics, work-related factors, and work-family conflict of emergency department physicians. The generalized linear regression analysis was used to identify the related factors of work-family conflict. Results A total of 10,457 licensed emergency department physicians participated in the study. The average score of work-family conflict among the enrolled emergency department physicians was 19.27 ± 3.94, and the prevalence of high levels of work-family conflict was 69.19%. The multivariable regression analysis showed that emergency physicians who were female (linear regression coefficient, −0.25; SE, 0.08; P = 0.002), older than 40 years (linear regression coefficient,−0.53; SE, 0.14; P < 0.001), and earning more than 4,000 CNY per month (e.g., 4,001~6,000 vs. ≤4,000 CNY: linear regression coefficient, −0.17; SE, 0.09; P = 0.04) had lower work-family conflicts. However, emergency department physicians who were married (linear regression coefficient, 0.37; SE, 0.11; P < 0.001), highly educated (linear regression coefficient, 0.46; SE, 0.10; P < 0.001), had a high technical title (e.g., intermediate vs. junior technical title: linear regression coefficient, 0.61; SE, 0.09; P < 0.001), worked in a high-grade hospital (e.g., tertiary hospital vs. emergency center: linear regression coefficient, 0.38; SE, 0.11; P < 0.001), had a higher frequency of night shifts (e.g., 6~10 night shifts per month vs. 0~5 night shifts per month: linear regression coefficient, 0.43; SE, 0.10; P < 0.001), self-perceived shortage of physicians in the department (linear regression coefficient, 2.22; SE, 0.08; P < 0.001), and experienced verbal abuse (linear regression coefficient, 1.48; SE, 0.10; P < 0.001) and physical violence (linear regression coefficient, 0.84; SE, 0.08; P < 0.001) in the workplace had higher work-family conflict scores. Conclusion Most emergency department physicians in China experience a high-level work-family conflict. Hospital administrations are recommended to develop family-friendly workplace policies, establish a scientific shift system, and keep the number of emergency department physicians to meet the demand to reduce work-family conflict.
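As a rough illustration of the kind of model reported here, the sketch below (our own, with simulated data and hypothetical variable names, not the study's dataset or code) fits an ordinary least squares regression of a work-family conflict score on a few categorical and binary predictors using statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "sex": rng.choice(["female", "male"], n),
    "age_over_40": rng.integers(0, 2, n),
    "night_shifts": rng.choice(["0-5", "6-10", ">10"], n),
    "verbal_abuse": rng.integers(0, 2, n),
})
# Placeholder outcome: a noisy work-family conflict score.
df["wfc_score"] = 19 + 1.5 * df["verbal_abuse"] + rng.normal(0, 4, n)

# Linear (Gaussian) regression of the score on demographic and work factors,
# reported as coefficient, standard error and p-value per predictor level.
model = smf.ols("wfc_score ~ C(sex) + age_over_40 + C(night_shifts) + verbal_abuse",
                data=df).fit()
print(model.summary())
```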
Introduction
Work-family conflict is an inter-role conflict that results from the incompatibility of role pressures between work and family domains (1). According to scarcity theory, personal resources, such as time and energy, are limited. The devotion of more resources to work role will inevitably lead to a reduction in the devotion of resources to family role (2, 3). Emergency department physicians are the first line of defense in hospitals (4). In addition to work at a fast pace and with high intensity (5,6), they are required to respond to unforeseen medical situations around-the-clock (7,8), making them devote more resources to work role and prone to work-family conflict. The existing studies also reported that the work-family conflict among emergency department physicians was significantly higher than that of physicians in other departments (4,9).
The work-family conflict has a series of negative impacts on both physicians and hospitals. At the individual level, work-family conflict has been reported to be related to psychological distress (10). For example, work-family conflict was found to be associated with mental stress among German physicians (11) and anxiety symptoms among Chinese doctors (12). A prospective study in the United States found a significant relationship between work-family conflict and a higher prevalence of depressive symptoms among physicians (13). Furthermore, conflict between work and family is known to increase the risk of both acute and chronic physical health issues (14). At the hospital level, work-family conflict positively correlates with job burnout (15) and turnover intention (16), which can reduce physicians' productivity and increase hospital operating costs (17,18). Given these unfavorable outcomes, it is necessary to identify the related factors of work-family conflict among emergency department physicians.
However, most of the studies on physicians' work-family conflict have mainly focused on its negative consequences (11, 19–22), and few studies have explored the factors associated with work-family conflict (23). Moreover, there is a lack of research on the related factors of work-family conflict among emergency department physicians. In China, there is a severe shortage of emergency department physicians, making them more vulnerable to work-family conflict than in other countries. Therefore, we aimed to conduct a national survey in China to explore the current status and related factors of work-family conflict among emergency department physicians, so as to provide a scientific basis for hospital administrations to formulate interventions.
Ethics statement
The study was approved by the Research Ethics Committee of Hainan Medical University (approval number: HYLL-2018-035). All participants volunteered to take part in this survey, and all of their private information was kept confidential.
Participants and data collection
A nationwide cross-sectional study of emergency department physicians was conducted in China from July 2018 to August 2018 under the coordination of the Medical Administration Bureau of the National Health Commission. Data were collected through a widely used online survey platform, Questionnaire Star (website: https://www.wjx.cn). The link to the electronic questionnaire was posted on the emergency department physicians' work platform of the pre-hospital emergency facility configuration monitoring department. Emergency department physicians from 2,965 public hospitals that provided pre-hospital emergency care in 31 provinces could click the link. The survey link was re-posted to the work platform every 7 days during the survey period. All respondents were required to complete an informed consent form before answering the questionnaire. In addition, each questionnaire could only be submitted if all questions were answered, so there were no missing data for any variable. In this study, 15,288 emergency department physicians clicked the link of the electronic questionnaire, and 10,457 submitted it. The completion rate was 68.4%.
Measurements
The questionnaires covered demographic characteristics, work-related factors, and work-family conflict. Demographic characteristics included gender, age, educational level, and marital status. Work-related factors included technical title, monthly income, years of service, frequency of night shifts per month, and self-perceived shortage of physicians in the emergency department. The question "Do you think the number of physicians in the emergency department meets the demands of daily work?" was used to measure the perceived shortage of physicians in the emergency department. If the respondents answered that the number of physicians could meet daily needs, it represented no self-perceived shortage of physicians; otherwise, it represented a self-perceived shortage of physicians. Work-family conflict was measured by the 5-item Work-Family Conflict Scale developed by Netemeyer et al. (24). The items were rated on a 5-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree). Higher scores indicated higher levels of work-family conflict. Furthermore, the average scores of the five items were re-classified into three categories: those who scored <2.5 were classified into the "low work-family conflict" group, those who scored between 2.5 and 3.6 into the "medium work-family conflict" group, and those who scored more than 3.6 into the "high work-family conflict" group (4). In this study, Cronbach's α for the scale was 0.934.
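As an illustration, the cutoff rule described above can be written as a small classification function. This is a hypothetical sketch only; the function name and the example ratings are not taken from the study data.

```python
# Minimal sketch of the scoring rule described in the text: five 1-5 Likert
# items are averaged and the mean is mapped to a conflict category.
def classify_wfc(item_scores):
    """Classify a respondent from five Likert ratings (1-5)."""
    if len(item_scores) != 5 or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError("expected five ratings between 1 and 5")
    mean = sum(item_scores) / 5.0
    if mean < 2.5:
        return "low work-family conflict"
    if mean <= 3.6:
        return "medium work-family conflict"
    return "high work-family conflict"

print(classify_wfc([4, 4, 5, 3, 4]))  # -> high work-family conflict
```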
Statistical analysis
SPSS 25.0 for Windows was used to perform data analyses. In descriptive analyses, continuous variables were represented by mean and standard deviation (SD), while categorical variables were represented by frequency and percentage. The t-test and one-way ANOVA were performed to examine the differences in work-family conflict scores among groups with diverse characteristics. Spearman correlations were used to test multicollinearity among independent variables. We started with univariable analysis to screen for candidate variables associated with work-family conflict using a cutoff value of P < 0.1. A generalized linear regression model was used to identify the related factors of work-family conflict. All comparisons were two-tailed and the significance threshold was P < 0.05.
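A rough sketch of this two-step workflow (univariable screening at P < 0.1 followed by a generalized linear regression) is shown below using Python and statsmodels rather than SPSS; the file name and column names are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ed_physician_survey.csv")  # hypothetical data file

# Step 1: univariable screening, keeping predictors with any P < 0.1.
screened = []
for var in ["gender", "age_group", "marital_status", "night_shifts"]:
    fit = smf.glm(f"wfc_score ~ C({var})", data=df,
                  family=sm.families.Gaussian()).fit()
    if fit.pvalues.drop("Intercept").min() < 0.1:
        screened.append(var)

# Step 2: multivariable generalized linear regression on retained predictors.
formula = "wfc_score ~ " + (" + ".join(f"C({v})" for v in screened) or "1")
model = smf.glm(formula, data=df, family=sm.families.Gaussian()).fit()
print(model.summary())
```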
Results
The basic characteristics of the participants are shown in Table 1. Among 10,457 emergency department physicians, 72.98% were males. Nearly two-thirds of the participants were younger than 40 years old. Most of them were married, accounting for 84.42%. About five-sixths of participants obtained a bachelor's degree. Nearly half of the participants had junior technical titles, worked in secondary hospitals, engaged in emergency work for 6 years or more, and worked 6∼10 night shifts per month. Only 29% of physicians earned more than 6,000 CNY per month. Approximately 70% of physicians perceived a shortage of emergency department physicians. 81.81 and 27.63% of physicians experienced verbal abuse and physical violence in the workplace, respectively.
The average score of work-family conflict among the enrolled emergency department physicians was 19.27 (SD = 3.94). Moreover, 7,235 participants (69.19%) were in the "high workfamily conflict" group, 2,741 participants (26.21%) were in the "medium work-family conflict" group, and 481 participants (4.60%) were in the "low work-family conflict" group.
The univariate analysis results are shown in Table 1. There were significant differences in work-family conflict scores in gender, age, marital status, educational level, technical title, type of hospital, monthly income, years of service, frequency of night-shift per month, self-perceived shortage of physicians, verbal abuse and physical violence at workplace.
Discussion
This study investigated the work-family conflict and related factors of emergency department physicians in China. The results showed that ∼70% of emergency department physicians were in the high work-family conflict group, which is higher than that of French emergency department physicians (50.1%) (4). It may be attributed to the differences in the emergency department working environment in different countries. A previous report revealed that the average annual income of Chinese physicians was lower than that of developed countries (25), and our results indicated that participants with high monthly incomes had lower scores of work-family conflict.
Gender differences in work-family conflict have always been a concern around the world (13,26). This study revealed that male emergency department physicians had significantly higher work-family conflict scores than females. However, in Japan, females were reported to experience work-family conflict more easily (26). This may be caused by cultural differences among countries and regions. In traditional Chinese social culture, men, as the primary breadwinners, are asked to dedicate more time and energy to work (27). At the same time, they are allowed to take on less responsibility in the home (28). However, with the increase in dual-earner families, a new fathering ideal has emerged in recent years in which fathers are expected to be involved in child care and domestic responsibilities (29). Because men are expected not only to take responsibility for raising a family, but also to share care work with their partners at home, they are more likely to experience work-family conflict in China nowadays. Our findings showed that emergency department physicians over 40 years old had lower work-family conflict. This may be due to the fact that most participants in this age group were in a relative balance of work and family (30). They are more capable of dealing with the role conflict between the two fields. Besides, married emergency department physicians scored higher on work-family conflict than physicians who were single or of other marital status, which is consistent with the previous study (21). The probable reason may be that married physicians have to share more family responsibilities, such as parenting and doing housework (31). Hospital administrators should pay more attention to physicians aged under 40 years and married physicians on the issue of work-family conflict.
In terms of work-related factors, emergency department physicians with higher educational level and technical title had higher work-family conflict scores. As we all know, these physicians have accumulated more medical knowledge and professional skills, and they undertake heavier emergency tasks in department (32). Their work takes up a greater proportion of time and is prone to conflict with their family roles (33). Therefore, the work-family conflict of emergency department physicians with highly educated and higher professional titles also needs extra attention.
Regarding the hospital environment, the type of hospital was significantly associated with emergency department physicians' work-family conflict: the higher the level of the hospital, the more serious the work-family conflict faced by its physicians, with the exception of primary hospitals. It is reported that the number of hospital visits in China, in descending order, was tertiary hospitals (1,854.79 million), secondary hospitals (1,284.93 million), primary hospitals (224.64 million), and other hospitals (213.01 million) (34). Therefore, physicians in high-level hospitals are more likely to suffer from time conflict between their work and family roles. Moreover, physicians who experienced workplace violence, whether verbal abuse or physical violence, had higher scores on work-family conflict in this study. This may be because workplace violence can increase the psychological strain of emergency department physicians and negatively influence their family life with partners (35). It is recommended to develop friendly workplace policies for emergency department physicians, especially in tertiary hospitals.
This study also revealed that variables reflecting workload, such as the frequency of night shifts and self-perceived shortage of physicians in department, were significantly associated with workfamily conflict of emergency department physicians. Participants with a high frequency of night shifts were more likely to experience work-family conflict, which was consistent with previous studies (36,37). This is because more night shifts per month mean more time spent at work, which inevitably conflicts with family obligations. In addition, long-term irregular work schedules can affect physicians' moods, which in turn affects their family life (38, 39). In addition, respondents who perceived a shortage of emergency department physicians experienced a higher level of work-family conflict. The possible reason could be that a shortage of physicians leads to an increased workload for the physician on staff. As the work takes up more and more time and energy, it will interfere with the emergency department physicians' family life (23). Therefore, hospital administrators are suggested to establish a scientific shift system and keep the number of emergency department physicians to meet work demands.
Strengths and limitations
This is the first nationwide study to explore the current situation and related factors of work-family conflict among emergency department physicians in China. What's more, the work-related factors identified in this study are of importance in reducing work-family conflict among emergency department physicians. However, there are still some limitations. First, this was a cross-sectional study, which is limited in establishing a causal relationship between dependent and independent variables. Prospective studies are needed in further studies. Second, this study was conducted in China, and thus, the generalizability of our conclusion to other countries may be limited. Third, there are possibly more factors associated with work-family conflict among emergency physicians than explored in this study; therefore, we could not explore them all.
Conclusion
Most emergency department physicians experience high levels of work-family conflict in China. Hospital administrations should pay more attention to emergency department physicians who are male, younger than 40 years, married, highly educated, highly titled, working in a high-level hospital, earning <4,000 CNY per month, working a high number of night shifts, perceived understaffing, and experiencing verbal abuse and physical violence in the workplace. To reduce work-family conflict in the emergency department physicians, hospital administrators should develop family-friendly workplace policies, like job sharing, maternity or paternity leave, and parental leave, establish a scientific shift system, and keep the number of physicians to meet work demands.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Research Ethics Committee in Hainan Medical University. The patients/participants provided their written informed consent to participate in this study.
Author contributions
SJY, CJL, JWZ, and NJ were responsible for the conception, design, and writing of the manuscript. JLZ, YFW, and MGT were responsible for the acquisition of data and literature research. NJ, CJL, LL, and XZ were responsible for the analysis and interpretation of data. All authors read and approved the final manuscript. | 2023-03-22T15:11:33.170Z | 2023-03-20T00:00:00.000 | {
"year": 2023,
"sha1": "8e4ed9e73da6a4b114e2a4b6a7499b7f69c7f871",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2023.1092025/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "6f3fdcbf67237ba329e68b02697c992147c56d07",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119256836 | pes2o/s2orc | v3-fos-license | Detectability of Bell-CHSH nonlocality by qubit detectors with optimal local filters
We investigate the detection problem of quantum nonlocal correlation by two qubit detectors. The detectors with an initial product state interact with a massless scalar field in the vacuum state, and then the out-state of the detectors are correlated after the interaction. Under the perturbative treatment in the second order of the coupling, the detectors' state can be entangled but satisfies the Bell-CHSH inequality. It is known that the violation of the Bell-CHSH inequality for such an entangled state is obtained after a local filtering operation. In this paper, we construct the optimal filtering operation for the qubit detectors and derive the success probability of the filtering operation, which characterizes the reliability of revealing the Bell-CHSH nonlocality by the filtering operations. By applying the optimal filtering, it is shown that the detected Bell-CHSH nonlocality depends on the coherence of the detectors' state and the spontaneous emission of scalar particles from each detector. We also comment on a trade-off relation between the success probability and the size of the parameter region showing quantum correlation.
I. INTRODUCTION
In the quantum information theory, the entanglement is a crucial property which describes a nonlocal correlation in the quantum mechanics. Due to the nonlocal feature of quantum entanglement, we can perform various protocols, such as quantum teleportation, superdense coding, quantum error correction, and so on [1]. The Bell-CHSH inequality characterizes a nonlocal correlation of the entanglement [2]. Based on the mathematically rigorous argument, the Bell-CHSH inequality [3] is satisfied only for the local hidden variable theories and a quantum state which violates this inequality cannot be yielded in hidden variable models.
The two aspects of quantum correlations, quantum entanglement and the violation of Bell-CHSH inequality (the Bell-CHSH nonlocality), are not equivalent and they are non-trivially related to each other [4]. Also in the quantum field theory, the quantum correlations play important roles. The vacuum state in the quantum field theory shows entanglement between spatial regions, which induces the Unruh effect [5] and determines the structure of the wave function of the vacuum (in particular, it is described by the tensor network [6]). Reeh and Schlieder showed that an arbitrary state of quantum field can be approximated by acting some local operators on the vacuum state [7], and such a property implies that the vacuum state is entangled among separated spatial regions. Also, it was shown that the free vacuum state violates the Bell-CHSH inequality by considering correlations between two spatial separated regions [8]. The vacuum of quantum field displays quantum entanglement and the violation of the Bell-CHSH inequality, hence these correlations essentially characterize the (many-body) property of the quantum field itself.
In connection with the vacuum entanglement of the quantum field, the detection of such quantum correlations by local observers has been investigated. The local detection problem tells us how the quantum resources of the vacuum is available, and provides the model of a suitable experimental setting to detect the spacelike quantum correlation of the vacuum. The local observer is usually modeled by a harmonic oscillator [9][10][11] or a qubit system [12][13][14][15][16][17][18][19].
Reznik et. al [12] considered two qubit detectors initially not correlated. These detectors are coupled to a massless scalar field and do not interact directly with each other. Then it was shown that the entanglement can be detected but the violation of the Bell-CHSH inequality was found only after applying a local filter [20]. The local filtering operation is a kind of measurement process acted on each qubit by local observers Alice and Bob, which is constructed by post-selected (probabilistic) local operations and classical communication (LOCC). When we choose the operation properly, the Bell-CHSH nonlocality of the detectors' state can be enhanced. This method is also applied to the cosmological situation to reveal the quantum nonlocality in the early universe [21]. In the quantum information theory, the optimal construction of the local filter which gives the maximal violation of the Bell-CHSH inequality was provided by papers [22,23]. However, the optimal construction is not used in Ref. [12] and it is unclear whether its given filtering is optimal or not. Also, the local filtering operation is a probabilistic process and hence we should consider its probability to discuss the feasibility of the detection of the violation of the Bell-CHSH inequality.
In this paper, we investigate the detection problem of quantum nonlocality by two qubit detectors. The initial state of the detectors is usually assumed to be the uncorrelated ground state, however we also treat the excited state of the detectors. By such a generalization of the detectors' state, we clarify what is playing a crucial role to detect the quantum entanglement and the Bell-CHSH nonlocality. As an entanglement measure, we compute the negativity of the qubit detectors, which completely characterizes the entanglement for two qubits system [24]. Also we yield the optimal filtering operation for the two detectors by the construction method given in Ref. [23] to reveal the violation of the Bell-CHSH inequality.
We show that the local filtering constructed by the systematic method corresponds to that given previously in Ref. [12], and the explicit formula of the success probability of the filtering operation is derived. For the reliable detection of the Bell-CHSH nonlocality between a spacelike regions in the vacuum, we explore the better setting of the detectors with a high success probability of the optimal local filtering. Through the analysis of the entanglement and the Bell-CHSH nonlocality revealed by the optimal filter for three different initial states, we show that the detected quantum correlation is determined by the two effects; one is the coherence of the detectors' state and another is the spontaneous emission given by the local dynamics of each detector. In addition, it is shown that as the transition probability of the spontaneous emission grows, the quantum correlation between the detectors decreases and the success probability of the optimal filtering increases. Thus, there is a trade-off relation between the size of the parameter region indicating the quantum correlation and the success probability.
This paper is organized as follows. In Sec. II, we introduce the system composed of two qubit detectors and a massless scalar field. For the second order of the coupling, we solve the dynamics under initial product states of the detectors and the vacuum state of the massless scalar field. Then we obtain the reduced density matrix of the detectors, represented by an X state. In Sec. III, we calculate the negativity and the expectation value of the Bell operator for an X state. In Sec. IV, we explicitly construct the optimal filtering for an X state and derive the success probability of the filtering. In Sec. V, we discuss the quantum entanglement and the Bell-CHSH nonlocality of the detectors system and show that the quantum correlation is determined by the coherence and the spontaneous emission of scalar particles.
Sec. VI is devoted to summary and conclusion.
SCALAR FIELD
The vacuum state of a many-body system or a quantum field has nonlocal quantum (long-range) correlations. To investigate the detectability of the quantum correlation by local observers, we consider qubit detectors coupled to a massless scalar field. The free Hamiltonian of the total system is the sum of the qubit terms, set by the energy gap Ω and the Pauli operators σ^z_{A,B}, and the free Hamiltonian H_φ of the massless scalar field φ, where π := ∂_t φ is the conjugate momentum of the scalar field.
The interaction Hamiltonian couples each detector locally to the scalar field at its own position, where x_A and x_B denote the spatial positions of the two detectors; that is, the two detectors are at rest at their respective positions and interact locally with the scalar field. We assume that the switching function g(t) is a Gaussian function of width σ centered at t_0 with overall amplitude g_0, where g_0 is a coupling constant and σ is the time interval during which the interaction is switched on.
Roughly speaking, the detectors interact with the scalar field for |t − t_0| ≤ σ. The choice of the Gaussian switching is more appropriate for extracting the quantum correlations than a sudden switching function [16]. We assume that the initial state of the total system is a product state, where a, b = ±1 denote the eigenvalues of σ^z_{A,B} and |0⟩_φ is the vacuum state of the scalar field. We also use the notation |↑⟩ = |+1⟩, |↓⟩ = |−1⟩ to represent the states of the qubits. In the interaction picture, the out-state under the second order of the coupling is given by equation (5), where Ṽ is the interaction Hamiltonian in the interaction picture, T denotes the time ordering, and Φ^A_a and Φ^B_b are operators acting on the state of the scalar field. Each term in equation (5) can be interpreted using the diagrammatic representation described in Fig. 1. For example, the second term in equation (5) denotes that detector A interacts once with the scalar field, and then qubit A is flipped. By tracing out the state of the scalar field, the reduced density matrix of the two detectors after the interaction is derived, with components ρ_ij = ⟨i|ρ_AB|j⟩ (i, j = 1, …, 4); the diagonal components are expressed in terms of r = |x_A − x_B|, and the formula for ρ_44(a, b) is derived by the Wick theorem. Note that the non-diagonal components ρ_23(a, b) and ρ_14(a, b) depend on the Wightman function for the massless scalar field, with ε the UV cutoff parameter. The detectors with an initial product state can be entangled by the local interaction with the scalar field in equation (2). We can explicitly compute ρ_22(a), ρ_33(b), ρ_23(a, b) and ρ_14(a, b) in closed form in terms of Erfc[z], the complementary error function. The detailed derivation of (16) and (17) is presented in Appendix A. From the explicit formulas of the density matrix, the quantum correlation of the scalar field detected via the two detectors can be computed.
III. NEGATIVITY AND BELL-CHSH INEQUALITY FOR X STATE
As the state of the detectors depends on the two-point function for the scalar field, we expect that the initial product state of the detectors becomes correlated after the interaction. To evaluate the quantum correlation between the two detectors, we consider the negativity and the Bell-CHSH inequality. The negativity is defined through the eigenvalues λ_i of the partially transposed density matrix ρ^{T_A}_{AB}. If the negativity does not vanish, then the state is entangled; for two qubits the converse of this statement is also true. Thus, the negativity has a nonzero value if and only if the given state is entangled [26], and hence the negativity completely characterizes whether the state of the detectors is entangled or not. For an X state, the negativity is explicitly obtained in terms of two quantities N_1 and N_2, and the conditions N_1 > 0 or N_2 > 0 can be rewritten in a simple form. For a detailed understanding of the quantum nonlocal correlation, it is important to evaluate the Bell-CHSH inequality [3], given by the correlation functions for the qubits A and B. To compute the Bell-CHSH inequality, we introduce the Bell operator, where n, n′, m, m′ are unit vectors, and consider the maximum expectation value β of the Bell-CHSH operator. For separable states, β(ρ_AB) satisfies the Bell-CHSH inequality (26), which holds for local hidden variable theories and therefore for any separable state. For any physical state, β(ρ_AB) has the upper bound called the Tsirelson bound [27]. For an X state, the maximum value β(ρ_AB) can be calculated explicitly by the Horodecki theorem [25], which provides a method to obtain the explicit form of β from the singular values of the correlation matrix. Note that the Bell-CHSH inequality is satisfied for the state of the two-detector system given by (14)-(17) because of its perturbative treatment: the order of the coupling g_0 for the non-diagonal components ρ_23 and ρ_14 is O(g_0^2), and then β_1 and β_2 for small g_0 are evaluated with ρ_22 and ρ_33 also of O(g_0^2). Hence the maximum expectation value of the Bell operator β is smaller than unity and the Bell-CHSH inequality is always satisfied. On the other hand, it is possible for the detectors to have a nonzero negativity because the condition for the entangled state (23) does not depend on the strength of the coupling (both sides of the inequality (23) have the same order in the coupling). Figure 2 shows the contour plot of the negativity in (Ωr, Ωσ) space for the detectors' initial state |↓_A ↓_B⟩. The dashed line denotes the "null" curve r = σ, and we find that the negativity has a nonzero value in the spacelike region r > σ. As we have seen above, the state of the detectors is entangled and satisfies the Bell-CHSH inequality. Interestingly, it is known that the violation of the Bell-CHSH inequality (the Bell-CHSH nonlocality) for such a state can be revealed by a local filtering operation [20].
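As an illustration of the quantities used above, a short numerical sketch for a generic X state is given below. This is not the authors' code: the entries of the example state are invented, and the normalisation β = √(t₁ + t₂) of the Horodecki criterion (so that separable states give β ≤ 1 and the Tsirelson bound is √2) is an assumption chosen to match the convention of the text.

```python
import numpy as np

def x_state(r11, r22, r33, r44, r14, r23):
    """X-state density matrix in the ordered basis {|1>, |2>, |3>, |4>}."""
    return np.array([[r11, 0, 0, r14],
                     [0, r22, r23, 0],
                     [0, np.conj(r23), r33, 0],
                     [np.conj(r14), 0, 0, r44]], dtype=complex)

def negativity(rho):
    """Sum of |negative eigenvalues| of the partial transpose over qubit A."""
    rt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    ev = np.linalg.eigvalsh(rt)
    return float(-ev[ev < 0].sum())

def beta_chsh(rho):
    """Horodecki value sqrt(t1 + t2) built from the correlation matrix T."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = [sx, sy, sz]
    T = np.array([[np.real(np.trace(rho @ np.kron(si, sj)))
                   for sj in paulis] for si in paulis])
    t = np.sort(np.linalg.eigvalsh(T.T @ T))   # squared singular values of T
    return float(np.sqrt(t[-1] + t[-2]))

# Weakly entangled example satisfying |rho14| > sqrt(rho22 * rho33).
rho = x_state(0.96, 0.015, 0.015, 0.01, 0.02, 0.005)
print(negativity(rho), beta_chsh(rho))
```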
IV. LOCAL FILTERING OPERATION FOR X STATE
We introduce a local filtering operation for the two-qubit detector system. The local operation is defined by ρ_AB → (M_A ⊗ N_B) ρ_AB (M_A ⊗ N_B)† / p, where M_A, N_B are local operators (2 × 2 matrices) acting on each subsystem and p = Tr[(M_A ⊗ N_B) ρ_AB (M_A ⊗ N_B)†] represents the success probability of attaining the filtered state. These operators have inverse matrices and satisfy appropriate normalization conditions. The local filtering operation is regarded as a local measurement process on each qubit in which one outcome is selected after the operation (i.e., a probabilistic LOCC). Although the stochastic process with probability p is a local process, the Bell-CHSH nonlocality of the bipartite system can be enhanced.
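A numerical sketch of this filtering map and its success probability is given below. The diagonal filter M_A = diag(η_A, 1), N_B = diag(η_B, 1) follows the amplitude-damping-type form discussed later in this section, but the specific η values (and the reuse of the X state from the previous sketch) are illustrative assumptions only.

```python
import numpy as np

def local_filter(rho, M_A, N_B):
    """Apply (M_A x N_B) rho (M_A x N_B)^dagger / p and return (state, p)."""
    K = np.kron(M_A, N_B)
    unnorm = K @ rho @ K.conj().T
    p = float(np.real(np.trace(unnorm)))   # success probability of the filter
    return unnorm / p, p

eta_A, eta_B = 0.1, 0.1                    # illustrative filter strengths
M_A = np.diag([eta_A, 1.0]).astype(complex)
N_B = np.diag([eta_B, 1.0]).astype(complex)

rho_f, p = local_filter(rho, M_A, N_B)     # rho from the previous sketch
print("success probability p =", p)
print(np.round(rho_f, 4))
```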
A. Key theorems
There are two important theorems for revealing the Bell-CHSH nonlocality by the local filtering operation [22,23]. Theorem 1 [22]: By a local filtering operation, a two-qubit state ρ_AB can be uniquely transformed into a Bell diagonal state.
Theorem 2 [23]: If the optimized β(ρ_AB) over all local operations M_A and N_B is larger than unity, then the filtered state ρ_AB is a Bell diagonal state, i.e., a mixture of the Bell states |Bell_µ⟩. According to the above theorems, in order to reveal the Bell-CHSH nonlocality of the state we need a local operation transforming the given state to a Bell diagonal form, because the Bell diagonal form of the state is necessary for max β > 1. In general, it is complicated to construct a local operation which transforms a given state to a Bell diagonal state; however, we can easily obtain it for an X state. We note that a Bell diagonal state Σ_{µ=0}^{3} λ_µ |Bell_µ⟩_AB⟨Bell_µ| has the form of an X state whose components satisfy specific conditions. All we have to do is to transform a given X state to an X state satisfying these conditions by an appropriate filtering operation. We apply the local z rotation exp[−iθ σ^z_A/2 − iφ σ^z_B/2] to a given X state. The diagonal components are invariant, and the non-diagonal components acquire phases under this rotation. We can choose the parameters θ, φ so that ρ_14, ρ_23 are positive and satisfy ρ_23 = ρ*_23, ρ_14 = ρ*_14. Without loss of generality, we assume that the diagonal components satisfy ρ_11 ≥ ρ_22 ≥ ρ_33 ≥ ρ_44. From Theorem 1, we can uniquely transform the two-qubit system to a Bell diagonal form by a local filtering operation. Hence it is sufficient to find one filtering operation converting a given X state to a Bell diagonal state. For this purpose, we consider a local operation with filter parameters 0 < η²_A ≤ 1 and 0 < η²_B ≤ 1. This operation corresponds to the amplitude damping channel with a post-selection and was used in Ref. [12] to detect the Bell-CHSH nonlocality.
Under the local operations (38), the X state is transformed into a Bell diagonal state with the spectrum {λ_µ} given in Eq. (41). Eq. (40) provides the optimal values of the local filters for detection of the Bell-CHSH nonlocality, and the success probability p of the optimal filtering follows from equation (35). Hence, whenever the largest eigenvalue λ_µ exceeds 1/2 (the spectrum {λ_µ} is biased towards any one of the four Bell states), the Bell diagonal state is entangled. Further, we focus on the Bell-CHSH nonlocality of the Bell diagonal state. When the maximum value β is larger than 1 (that is, β_1 > 1 or β_2 > 1 in equation (28)), the eigenvalues satisfy a corresponding condition, where λ_0 ≥ λ_3 and λ_1 ≥ λ_2 are imposed by equation (41). If we assume λ_0 > 1/2, these conditions take the form of Eq. (45). To summarize, the typical region of the spectra satisfying the entanglement condition (43) and the (necessary) Bell-CHSH nonlocality conditions (45) is presented in Fig. 3. As shown there, the Bell diagonal state has the Bell-CHSH nonlocal correlation when {λ_µ} concentrates on one of the Bell basis states.
V. DETECTION OF BELL-CHSH NONLOCALITY WITH LOCAL FILTER
In this section, we examine the quantum entanglement and the Bell-CHSH nonlocality detected by the two qubit detectors with the initial conditions | ↓ A ↓ B , | ↑ A ↑ B and | ↓ A ↑ B . For the detection of the Bell-CHSH nonlocality, we apply the local filter to the qubit detectors' state given in Sec. IV and evaluate the success probability of the optimal filtering.
Then we clarify what properties determine the detection of the quantum correlation of the scalar field and the success probability of the filtering.
A. The initial condition | ↓ A ↓ B
We consider the initial condition of the detectors (a, b) = (−1, −1), corresponding to the state |↓_A ↓_B⟩. From equations (14), (15), (16) and (17), we derive the explicit components, with ρ_33(−1) = ρ_22(−1). The left panel of Fig. 4 shows the contour plot of the negativity for the filtered X state with the initial condition |↓_A ↓_B⟩. The coupling g_0 is fixed to 10^−2. The green dashed line denotes β = 1, and the region above this line represents β > 1, where the Bell-CHSH inequality is violated. In addition, we observe the existence of a region where the Bell-CHSH nonlocality is not found even if the optimal filtering acts on each detector.
In the right panel of Fig. 4, the value β for the optimal filtering and the success probability p of the filtering are presented. According to equation (42), the probability p is O(g_0^4). The value of p is around 10^−15 for Ωσ = 2.5 and Ωr = 3 in the right panel of Fig. 4, and for these parameters β is larger than 1. Hence, the probability p is much smaller than g_0^4, which means that the success probability of the Bell-CHSH nonlocality detection is very small.
Also we analyze how the quantum correlation of the scalar field is detected through the detectors. In Sec. IV, we give the simple form of the spectrum {λ_µ} obtained from the components of the X state (41). Figure 5 shows the behavior of those spectra with Ωσ = 2.5, and we observe that λ_0 is dominant compared to the others. The entanglement condition (23) shows that the coherence |ρ_14| plays the key role in this behavior.

B. The initial condition | ↑ A ↑ B

We consider the detection of the quantum correlation for the initial state |↑_A ↑_B⟩. The components ρ_22, ρ_33, ρ_23 and ρ_14 of the reduced density matrix are obtained with ρ_33(1) = ρ_22(1). These components are also given by replacing the frequency Ω with −Ω in the reduced density matrix for (a, b) = (−1, −1). This is because the total Hamiltonian is invariant under the unitary transformation σ^x_A σ^x_B combined with Ω → −Ω. Further, we find that |ρ_14(1, 1)| = |ρ_14(−1, −1)|, that is, the coherences for the two different initial conditions are equivalent. We have considered that the qubits A and B interact with the scalar field in the same manner and that the total system evolves under the second-order dynamics. For Ωσ ≫ 1 the difference is proportional to Ωσ, which corresponds to Fermi's golden rule. This implies that a detector in the initially excited state emits a scalar particle spontaneously. We note that the components ρ_22 and ρ_33 are the transition probabilities of detectors A and B, respectively (|1⟩_φ is a one-particle state of the scalar field). As the spontaneous emission is determined by the local dynamics, the detectors' entanglement is not generated by such an emission. Indeed, from equations (41) we note that the eigenvalue λ_0 can be rewritten so that the inequality |ρ_14(−1, −1)| − ρ_22(−1) ≥ |ρ_14(1, 1)| − ρ_22(1) implies the smallness of the eigenvalue λ_0. Therefore, it is difficult to reveal the spatial entanglement and the spatial Bell-CHSH nonlocality with the initial excited state |↑_A ↑_B⟩.
C. The initial condition | ↓ A ↑ B
We consider the detectors' initial condition |↓_A ↑_B⟩. The components ρ_22, ρ_33, ρ_23 and ρ_14 of the reduced density matrix are given accordingly. The left panel of Fig. 8 shows the contour plot of the negativity for this case. We observe that the region with nonzero negativity is smaller compared to the result obtained with the initial state |↓_A ↓_B⟩; however, unlike the results with |↑_A ↑_B⟩, there is a spacelike region which shows the quantum nonlocality. In the upper right panel of Fig. 8, the behavior of the eigenvalues of the Bell diagonal state is presented. As expected, the nonlocality detection is made possible by the exchange of the scalar particle.
Also we plot the ratio |ρ_14(1, −1)|/|ρ_14(−1, −1)| in the lower right panel of Fig. 8 and observe that the coherence |ρ_14(1, −1)| is larger than |ρ_14(−1, −1)|. This is because the detector with the initially excited state generates more real or virtual particles compared to the detector with the initial ground state |↓_A ↓_B⟩. We have clarified that the process of spontaneous emission from each detector reduces the negativity of the detectors' state. The reduced density matrix of the detectors with the initial condition |↓_A ↑_B⟩ has the component ρ_33(1), which is larger than ρ_33(−1). Thus, the coherence ρ_14(−1, 1) and the transition probability ρ_33(1) of the spontaneous emission non-trivially determine the eigenvalue λ_0 for this initial condition.
The left panel of Fig. 9 presents the violation of the Bell-CHSH inequality and the success probability of the optimal filtering. The qualitative behavior is similar to that with the initial condition |↓_A ↓_B⟩, but the success probability at the spacelike points is much larger for this setting. In the right panel of Fig. 9, the behavior of the ratio p(−1, 1)/p(−1, −1) is presented with fixed Ωσ = 2.5. We find that the probability p(−1, 1) is much larger than the probability p(−1, −1) for the initial condition |↓_A ↓_B⟩. Hence the detectors' initial condition |↓_A ↑_B⟩ is more suitable for detecting the spacelike Bell-CHSH nonlocality compared to the initial condition |↓_A ↓_B⟩.
In Fig. 8 and 9, the detected region of the Bell-CHSH nonlocality is small but the success probability is large. Hence the trade-off relation between the size of detectable parameter region of the Bell-CHSH nonlocality and the success probability is demonstrated for the detection problem with the qubit detectors.
VI. SUMMARY AND CONCLUSION
We investigated the detection of the quantum correlation of a massless scalar field by two qubit detectors. As an initial state, we considered a product state of the detectors and the vacuum state of the scalar field. Under the second order perturbation of the total system dynamics, the two detectors' state can be entangled by the two-point function of the scalar field. Also we focused on the violation of the Bell-CHSH inequality for the qubit detectors.
It is known that the Bell-CHSH nonlocality can be revealed only after the local filtering operation, which is a post-selected LOCC performed by each of the two local observers. In general, it is complicated to construct the optimal filtering operation for revealing the Bell-CHSH nonlocality; however, we can obtain the optimal filtering simply because the detectors' out-state considered here is an X state. The constructed filtering is given by a post-selection after passing through an amplitude damping channel, and the probability of the post-selection corresponds to the success probability of the optimal filtering. By examining the negativity and the violation of the Bell-CHSH inequality under the optimal filter, we found that the detection of nonlocal correlation strongly depends on the initial state of the detectors. When the detectors are initially in the ground state, the spacelike region in the parameter space showing the quantum nonlocality is larger compared to the region obtained with the initially excited states. This is because the excited detectors spontaneously emit scalar particles, and such local dynamics cannot generate the quantum correlation between the detectors.
Further we focused on the success probability of the optimal filtering for the Bell-CHSH nonlocality detection between spatial separated regions. When the transition probability ρ 22 or ρ 33 describing the spontaneous emission is large, the Bell-CHSH nonlocality is small but the success probability is large. Due to this trade-off relation, the reliable detection of the Bell-CHSH nonlocality becomes non-trivial and we found that the detection of the spacelike Bell-CHSH nonlocality with a high success probability of the optimal filtering is performed when the detectors' state is initially |↓ A ↑ B . This result gives the suitable model for the reliable detection of the spacelike Bell-CHSH nonlocality through the two qubit detectors.
The y integration in (A8) is equivalent to the complex integration shown in Fig. 10. Hence, ∫_0^∞ dy e^{iηy} / [(y − iε/σ)² − (r/σ)²] = (iπσ/r) e^{iη(r/σ + iε/σ)} θ(η) − i ∫_0^∞ dy e^{−ηy} / [(y − ε/σ)² + (r/σ)²], where the second and third terms are the integrations along the imaginary axis. For ε → 0 the sum of those terms is an odd function and therefore does not contribute to the η integration (note that the function of η in front of equation (A9) is an even function). Thus we obtain the final formula, where ρ_out = Tr_φ |Ψ_out⟩⟨Ψ_out| and we used the translation invariance of the vacuum state. | 2018-08-12T06:03:06.000Z | 2018-08-12T00:00:00.000 | {
"year": 2018,
"sha1": "a81b36693e18a41a89066e6837aec28d29fae720",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a81b36693e18a41a89066e6837aec28d29fae720",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
139249493 | pes2o/s2orc | v3-fos-license | Effect of microwave irradiation power for the morphological changes of ZnO nanoparticles
The present investigation focuses on the effect of microwave watt power on the synthesis of ZnO nanoparticles (NPs) by microwave irradiation using two different precursors. The prepared samples were characterized by XRD, SEM and EDXA to analyse the particle size, morphology and chemical composition of the ZnO NPs. SEM images were recorded for ZnO NPs synthesized at different watt powers using different precursors to analyse the morphological changes. The particle size was calculated by the Debye-Scherrer formula using the XRD pattern. In addition, the band gap was calculated by the UV spectroscopic technique. The results confirmed that by increasing the watt power from 240 to 420 W, a wurtzite (flower-like) structure is formed. By replacing zinc acetate with zinc sulphate at various watt powers, the morphology of the ZnO NPs changes from spiral- to tubular-shaped particles.
Introduction
Recently, metal oxide nanomaterials have become major potential materials for various engineering applications 1 . Among the metal oxides, ZnO nanoparticles have shown significant importance and excellent properties in both technical and fundamental applications. ZnO is a semiconductor material with a wide band gap energy (3.2-3.60 eV), which has attracted great attention in optoelectronics 2 , solar cells 3 , gas sensors 4 and catalysis 5 . Owing to the high specific surface area of ZnO NPs and their electrostatic behavior, they are mainly used in the field of biomedical applications 6 . In addition, the neutral hydroxyl group on the surface of the ZnO nanomaterial plays a key role in the charge behavior 7 . The ZnO NPs were synthesized using the microwave irradiation technique. This method has proved cost-effective 8 , reduces the particle size with a narrow particle distribution 9 , and increases the yield and purity compared to a conventional method. Currently, the microwave irradiation technique is often used for the rapid synthesis of micro- and nanostructures of different morphologies, such as nanorods, flower-like structures, flakes, hexagonal tubes and spherical tubes, by varying the reaction conditions. However, the effect of the microwave irradiation power and the influence of the precursor on the morphology and optical properties of ZnO NPs have rarely been reported 10-12 .
The present study focuses on the synthesis of ZnO nanoparticles with zinc acetate and zinc sulphate used as precursors by microwave irradiation in an aqueous medium, and on the influence of the microwave watt power and the precursors on the structural morphology and optical properties of the ZnO NPs.
Chemicals
Chemicals were purchased from different distributors such as Sigma Aldrich and Merck. Analytical reagent grade chemicals were used in the experiment without further purification. Milli-Q water was used throughout the analysis. The subsequently prepared sample was irradiated in a microwave oven for 20 minutes at different watt powers, i.e. 240, 360 and 420 W 8 . The obtained zinc oxide nanoparticles were centrifuged and washed with 1:1 water and ethanol. Thereafter, the sample was dried at 80 ℃ for about 5 hours in a hot air oven. The schematic representation of the synthesis of ZnO nanoparticles is shown in figure 1.
Characterisation of ZnO nanoparticles
The prepared ZnO NPs were confirmed by X-ray diffraction (RIGAKU SmartLab X-ray diffractometer operating at 40 kV); the structure and composition of the ZnO NPs were recorded by scanning electron microscopy, and energy dispersive X-ray analysis was carried out with a GEMINI ULTRA 55. Absorbance was recorded with an ELICO SL-159 UV-visible spectrophotometer.
Structural Characterisation
The XRD spectra of the ZnO nanoparticles synthesized with the two different precursors are shown in figure 2. All the obtained peaks are well matched with JCPDS card no. 03-065-3411 and JCPDS card no. 01-080-0075, with no impurity peak, for nanoparticles synthesized from the zinc acetate and zinc sulphate precursors, respectively. The most intense peaks are at the Bragg angles (2θ) = 31.71, 34.35 and 36.20 for ZnO NPs synthesized using zinc acetate as the precursor. Similarly, for ZnO NPs synthesized using zinc sulphate as the precursor, the most intense peaks are located at (2θ) = 31.75, 34.45 and 36.50. All these high-intensity peaks correspond to the Miller indices (1 0 0), (0 0 2) and (1 0 1), respectively. The intense and sharp peaks indicate that the synthesized ZnO NPs have high purity and crystalline nature. The obtained data confirmed that the synthesized NPs have a hexagonal wurtzite structure, and the average crystallite sizes (D) of the ZnO NPs were calculated using the Debye-Scherrer equation 12 , as follows. The average crystallite sizes of the ZnO NPs from the two different precursors were found to be 50 nm and 73 nm.
D = Kλ / (β cos θ)
where K is the Debye constant, λ is the wavelength of the X-ray source, β is the full width at half maximum of the diffraction peak, and θ is the Bragg angle of an intense peak. The EDXA spectra of the ZnO NPs are presented in figure 3 and are in good agreement with the XRD pattern. No impurity peaks or trace elements are observed, confirming that the synthesized ZnO NPs are of high purity. As the microwave watt power is increased from 240 to 420 W, the morphology of the ZnO NPs changes completely into the wurtzite structure. A similar trend is observed for the zinc sulphate precursor used for the synthesis of ZnO NPs, as shown in figures 4d, 4e and 4f: initially the nanoparticles show a spiral morphology, which then changes to a tubular structure as the watt power increases. Gusatti et al. 13 reported the influence of the precursor on the structural changes of ZnO NPs when the zinc acetate precursor was replaced by zinc nitrate with hydrazine hydrate used as the reducing agent. Figure 4 clearly indicates that the change in the morphology of the nanoparticles is strongly dependent on the precursor and the irradiation power.
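The crystallite-size estimate above can be reproduced with a few lines of code. This sketch assumes a Cu Kα source (λ = 0.15406 nm) and a shape factor K = 0.9, neither of which is stated explicitly in the text, and the peak position and FWHM used in the example are illustrative values only.

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K*lambda/(beta*cos(theta)), beta in degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)          # FWHM converted to radians
    return K * wavelength_nm / (beta * math.cos(theta))

# e.g. the (1 0 1) reflection near 2-theta = 36.20 deg with an assumed FWHM
print(round(scherrer_size(36.20, 0.17), 1), "nm")
```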
Optical characterisation
The absorption maxima of the ZnO NPs were observed at 365 and 371 nm, showing a blue shift related to the electronic transitions and the quantum confinement effect along with the band gap energy, as shown in figure 5. The band gap energy (Eg) of the synthesized ZnO NPs was calculated using the Tauc plot by the following equation.
E = hν
where h is the Planck constant and ν is the frequency (ν = C/λ). In the present study, the value was found to be in the range 3.322-3.602 eV. As reported in the literature 12 , the Eg value for ZnO was found to be 3.37 eV.
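The photon-energy conversion E = hν = hc/λ used above can be checked directly for the two absorption maxima quoted in the text; the snippet below only performs this conversion and is not a substitute for the Tauc-plot analysis.

```python
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

for lam_nm in (365.0, 371.0):
    E = h * c / (lam_nm * 1e-9) / eV   # photon energy at the absorption edge
    print(f"{lam_nm} nm -> {E:.2f} eV")
```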
Conclusions
ZnO NPs were successfully synthesized by the microwave irradiation method with two different zinc precursors. The XRD data confirm that the synthesized ZnO NPs are of high purity and crystalline nature. The result obtained from the EDXA analysis is in good agreement with the above results. The key observation was that the nanoparticles synthesized from zinc acetate were well organized and changed their morphology into a wurtzite structure as the watt power was changed from 240 to 420 W. However, the morphology of ZnO synthesized from zinc sulphate changes slightly from spiral to tubular due to the change in watt power. The band gap energy of the ZnO NPs was found to be 3.34-3.602 eV, which is higher than that of the bulk material. The main focus is to expand the study by using these nanoparticles and assessing their efficiency in various applications such as optoelectronics and mechanical fields. | 2019-04-30T13:09:11.937Z | 2019-12-07T00:00:00.000 | {
"year": 2019,
"sha1": "1da78b5bb92fe96fb721cf13064e94d6c042737f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/577/1/012120",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3b7ba0a44e9c0d4a05c2a7c9bd699245b1359ffd",
"s2fieldsofstudy": [
"Materials Science",
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
99776623 | pes2o/s2orc | v3-fos-license | Determination of the 121Te gamma emission probabilities associated with the production process of radiopharmaceutical NaI[123I]
123I is widely used in radiodiagnostic procedures in nuclear medicine. According to the Pharmacopoeia, care should be taken during its production process, since radionuclidic impurities may be generated. 121Te is an impurity that arises during 123I production, and determining its gamma emission probabilities (Pγ) is important in order to obtain more information about its decay. Activities were also obtained by absolute standardization using the sum-peak method, and these values were compared to those from the efficiency curve method.
Introduction
The presence of impurities that emit gamma rays can be identified through the technique of gamma spectrometry. Moreover, the determination of the nuclear decay parameters of this radioisotope provides greater insight into the production method and into the possible effects of the presence of this radionuclide when administered to the patient in the form of a radiopharmaceutical. The determination of the gamma emission probability (Pγ) is fundamental for this control, and is also an opportunity to confirm the efficiency of the spectroscopy system for the determination of radionuclidic impurities, enabling the measurement of both activity and Pγ. The sum-peak method was adopted here since it allows very low activity values to be obtained for radionuclides emitting gamma radiation, as is the case for 121Te. This method had already been used (Brinkman et al., 1977; Silva et al., 2006) for the calibration of 123I, providing good results with low uncertainty.
Sum-peak method
The spectrum of 121Te with its energy peaks can be observed in Figure 1. The gamma spectrometry system used here consisted of an HPGe coaxial detector with 20% relative efficiency and good resolution in the range from 3 to 300 keV. The sum-peak coincidence method is an absolute method used in calibration processes for radionuclides that have x or gamma emission lines. Therefore, the combinations that result in a sum peak can be of the type γ-γ, x-x or x-γ. Thus, the application of the method requires a well-calibrated spectrometry system, combined with the associated electronic units, as well as a data acquisition program integrated with a multichannel analyzer. According to the decay scheme, 121Te decays by electron capture to 121Sb. The absolute value of the activity is calculated using the following equation:

N0 = NT + N + (Nx · Nγ)/Nxγ     (1)

where N0 is the activity; NT is the total count; N is the count corresponding to the extrapolation to zero in the multichannel analyzer; Nx is the number of counts in the x photopeak; Nγ is the number of counts in the γ photopeak; and Nxγ is the number of coincidence counts due to the x and γ peaks. The estimate of uncertainty is given taking into account uncertainty components of type A and type B for an expanded uncertainty with k = 1 (de Almeida et al., 2007).

Figure 1. HPGe spectrum obtained by the spectrometry system with the respective energy peaks associated with the radionuclide 121Te.
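The sum-peak relation of equation (1) can be evaluated as in the short sketch below; the expression follows the standard Brinkman-type sum-peak relation implied by the variable definitions above (an assumption), and all count values are invented for illustration.

```python
def sum_peak_activity(n_total, n_zero_extrap, n_x, n_gamma, n_sum, live_time_s):
    """N0 = (NT + N) + Nx*Ngamma/Nxgamma, converted to decays per second."""
    n0 = (n_total + n_zero_extrap) + (n_x * n_gamma) / n_sum
    return n0 / live_time_s

# illustrative counts only
print(sum_peak_activity(n_total=5.2e5, n_zero_extrap=1.3e4,
                        n_x=2.1e5, n_gamma=9.5e4, n_sum=3.8e3,
                        live_time_s=3600.0))
```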
Spectral separation method
A spectral separation method was developed here to remove the contribution of 125I that appears in the sample due to the production mode of 123I. This other radionuclidic impurity has a characteristic energy represented by the peak at 35.5 keV, which interferes with the quantification of the 121Te x-rays due to its proximity and high intensity. Point sources were prepared from the master solution of 123I. The first step consisted in identifying all the energy peaks in the 30-40 keV region of the spectrum, which is stored in the internal memory of the automated multichannel analyzer. Then the gross count values for this region of the spectrum are obtained and the corresponding background radiation is subtracted. Next, the region of interest corresponding to the 35.5 keV peak is delimited in the spectrum and, with the appropriate command, the counts for just this peak are obtained, generating a peak area correction factor for this region. This factor is obtained by dividing the 35.5 keV peak area in the spectrum of the source to be calibrated (mixture) by the corresponding area in another spectrum of a source containing only 125I (pure), measured under the same conditions. This area factor is then used to subtract the contribution of the 125I contaminant. Due to a small discontinuity that appears at the beginning of the detector window, an extrapolation to zero is made in the multichannel analyzer to obtain the value of N, which is added to the total count NT so that, finally, the value of N0 is determined according to equation 1.
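A compact sketch of this area-correction procedure is given below. The function and all peak areas are hypothetical; only the logic (scale the pure-125I spectrum by the 35.5 keV area ratio and subtract it from the mixed spectrum) follows the description above.

```python
def te121_region_area(area_region_mix, area_355_mix, area_355_pure, area_region_pure):
    """Net counts attributable to 121Te x-rays in the 30-40 keV region."""
    factor = area_355_mix / area_355_pure        # peak area correction factor
    i125_part = factor * area_region_pure        # scaled 125I contribution
    return area_region_mix - i125_part

print(te121_region_area(area_region_mix=8.4e4, area_355_mix=3.1e4,
                        area_355_pure=6.2e4, area_region_pure=7.0e4))
```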
Source preparation
The master solution used for these measurements was provided in an ampoule by the producer (IEN/CNEN) and was eluted in the form of iodide (¹²³I) with a sodium hydroxide solution to obtain sodium iodide (NaI). Before the counts were started, a waiting period of a few half-lives of ¹²³I was allowed in order to reduce the intensity of the sources, so that their secondary energy lines did not hide the characteristic ¹²¹Te peaks observed in the spectrum. The masses were determined by the gravimetric method with differential weighing. Point sources were prepared with the help of a pycnometer by depositing drops of the radionuclide solution on a polystyrene film, 0.05 mm thick, set in an acrylic ring. The ring has an external diameter of 25 mm, an inner diameter of 4 mm and a thickness of 1 mm. Once dried, the sources were covered with the same polystyrene film (Bernardes et al., 2002).
Radionuclidic impurities assessment
The measurements were carried out using a system consisting of a planar HPGe detector with its standard electronics. This detector is known as "d4", and the measurements were carried out in two different positions: the first at a distance of 10 cm from the detector ("p2") and the second on top of the detector ("p0"). The HPGe spectrometry system was calibrated in efficiency using point standards of 60 Co, 152 Eu, 166m Ho and 226 Ra. The efficiency curve obtained for the energy range between 100 and 1000 keV can be seen in Figure 2, and the uncertainties of the activity values were around 2% (k = 1). No significant summing effects were observed in the preparation of the efficiency curve at "p2", mainly for the 152 Eu and 166m Ho standards. This allowed the calibration of the point sources, the quantification of the impurities and the determination of the gamma emission probability of 121 Te. The spectrum acquired in position "p0" was used with the absolute sum-peak method to obtain the activity of 121 Te. The radionuclidic impurities 121 Te and 125 I, both with half-lives longer than that of 123 I, identified in the spectrum were quantified in order to assess possible harm to the patient during the period of incorporation. As the main radionuclide decayed during the acquisition of multiple spectra, it was possible to verify the presence of the peaks associated with the impurities. Brazilian law follows the recommendation of the American Pharmacopoeia, which, for the production of 123 I, stipulates a limit of 15% for impurities in relation to the main radionuclide ( 123 I). In this study the presence of 125 I was an interfering factor, which could be properly evaluated with the aid of the spectral separation technique, allowing its content to be compared with the percentage indicated by the current recommendations.
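To compare the measured impurity activities with the 15% limit mentioned above, a decay-corrected activity ratio of the kind sketched below can be used. The half-life values are approximate nominal figures quoted only for illustration (they should be taken from a nuclear data library), and the activities and elapsed time are placeholders, not results from this work.

```python
import math

# Decay-correct activities to a common reference time and compare the impurity
# content with the 15% limit. Half-lives (hours) are approximate nominal
# values used for illustration only; activities (kBq) are placeholders.
HALF_LIFE_H = {"I-123": 13.2, "Te-121": 19.2 * 24.0, "I-125": 59.4 * 24.0}

def decay_correct(activity, nuclide, elapsed_hours):
    lam = math.log(2.0) / HALF_LIFE_H[nuclide]
    return activity * math.exp(-lam * elapsed_hours)

a_i123 = decay_correct(50000.0, "I-123", 48.0)   # main radionuclide
a_te121 = decay_correct(3.0, "Te-121", 48.0)     # impurity
a_i125 = decay_correct(1.5, "I-125", 48.0)       # impurity

impurity_pct = 100.0 * (a_te121 + a_i125) / a_i123
print(f"Impurity content: {impurity_pct:.2f}% (limit: 15%)")
```

Because the impurities have longer half-lives than 123 I, the same ratio grows with time, which is why the impurity peaks became more evident as the main radionuclide decayed during the acquisition of multiple spectra.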
Gamma-ray emission probabilities measurement
In order to associate the main peaks of the spectrum with the radionuclides, the energy-channel relation must be obtained by calibrating the spectrometer in energy. The total absorption efficiency curve is then determined as a function of energy in order to calculate the radionuclide activity from the net areas under each peak of interest. The expression for the activity is:

A = CPS(CORRECTED) / (εγ · Pγ)

where CPS(CORRECTED) is the corrected count rate of the photopeak; εγ is the photopeak efficiency for the specific energy; and Pγ is the emission probability for the specific energy. However, as the source activity was obtained directly by the sum-peak method, Pγ was calculated for the two main energies of 121 Te, 507 and 573 keV, by means of the following expression, taking into account corrections for decay, background and position:

Pγ = CPS(CORRECTED) / (A · εγ)

where CPS(CORRECTED) is the corrected count rate of the photopeak; A is the absolute activity measured by the sum-peak method; and εγ is the photopeak efficiency for the specific energy. The precision with which Pγ is determined depends on the precision achieved for the efficiency curve and on the choice of the energies of interest of the standards: 60 Co, 152 Eu, 166m Ho and 226 Ra. The peak area evaluation method took into account the integration of the channels that define the region of each peak, after background subtraction. The spectra were analyzed with the Maestro II code. The samples and standards were measured at least three times. All measurements were made with point sources in the "p2" position using the same geometry.
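A minimal numerical sketch of the Pγ calculation, with a simple quadrature combination of relative uncertainties, is given below. All input values are hypothetical, and the corrections for decay, background and geometry are assumed to have already been applied to the count rate.

```python
import math

def emission_probability(cps_corrected, activity, efficiency):
    """P_gamma = CPS(corrected) / (A * eps_gamma)."""
    return cps_corrected / (activity * efficiency)

def relative_uncertainty(*rel_components):
    """Combine relative uncertainty components in quadrature (k = 1)."""
    return math.sqrt(sum(c ** 2 for c in rel_components))

# Hypothetical inputs for the 573 keV line of 121Te
cps = 2.95            # corrected count rate of the photopeak (s^-1)
activity = 3.72e3     # absolute activity from the sum-peak method (Bq)
eff = 1.0e-3          # photopeak efficiency at 573 keV ("p2" geometry)

p_gamma = emission_probability(cps, activity, eff)
u_rel = relative_uncertainty(0.004, 0.005, 0.010)   # CPS, A and efficiency terms
print(f"P_gamma(573 keV) = {p_gamma:.3f} +/- {p_gamma * u_rel:.3f}")
```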
Results and Discussion
The experimental results for the activity of 121 Te are presented in Table 1, accompanied by their respective uncertainties, both for the efficiency curve method (position p2) and for the sum-peak method (position p0).
Positions: p2 = 10 cm (source-detector distance); p0 = 0 cm (top of detector). The result for source 175S14 obtained by the efficiency curve method suffered some interference, and it was not possible to measure the activity of this source again. The values obtained with the two methods are compatible, and this enabled the absolute standardization of 121 Te. As can be seen in Table 2, the low uncertainties associated with the sum-peak method should also be noted, indicating values that are more reliable. The presence of the 125 I impurity posed several problems for the absolute calibration of 121 Te. Table 3 shows the results for the gamma emission probabilities compared with the values of the Physikalisch-Technische Bundesanstalt (PTB) and the Laboratoire National Henri Becquerel (Nucleide-LARA/LNHB). In general, the values evaluated for the emission probabilities were consistent, and the uncertainties obtained are slightly below the published values. Finally, it was possible to develop a methodology for the analysis of these impurities that took into account the interfering contribution of 125 I during data acquisition. The proper calibration of the photon spectrometry system, as well as the successful implementation of the sum-peak method, made it possible to obtain the emission probability values with associated uncertainties of up to 1.2% for 507 keV and 0.5% for 573 keV. The Pγ value for 507 keV is 5.6% lower than that of the reference publications, while for 573 keV the Pγ value agrees with the reference to within 2%. However, the uncertainties obtained here are lower than those of the literature data.
Conclusions
The absolute standardization of 121 Te developed here presented a new approach for the treatment of radionuclidic impurities. The activities obtained with the efficiency curve method were satisfactory and allowed the comparison of the two methods adopted here. The sum-peak method made it possible to obtain activity values with uncertainties below 0.5%. At the same time, it made it possible to obtain good values both for the activity and for precise nuclear parameters, such as the gamma emission probabilities for the two main gamma energies of the radionuclide, as well as to quantify its impurities. | 2019-04-08T13:06:53.037Z | 2016-07-01T00:00:00.000 | {
"year": 2016,
"sha1": "a2ea35725f6096980fc4328ee7039bde36f5ec02",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/733/1/012097",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fbb51a5ddf89893d5e0d5f69ff9b59d297e31ec4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
1219413 | pes2o/s2orc | v3-fos-license | Glycosaminoglycan Sulphation Affects the Seeded Misfolding of a Mutant Prion Protein
Background The accumulation of protease resistant conformers of the prion protein (PrPres) is a key pathological feature of prion diseases. Polyanions, including RNA and glycosaminoglycans, have been identified as factors that contribute to the propagation, transmission and pathogenesis of prion disease. Recent studies have suggested that the contribution of these cofactors to prion propagation may be species specific. Methodology/Principal Findings In this study a cell-free assay was used to investigate the molecular basis of polyanion stimulated PrPres formation using brain tissue or cell line derived murine PrP. Enzymatic depletion of endogenous nucleic acids or heparan sulphate (HS) from the PrPC substrate was found to specifically prevent PrPres formation seeded by mouse derived PrPSc. Modification of the negative charge afforded by the sulphation of glycosaminoglycans increased the ability of a familial PrP mutant to act as a substrate for PrPres formation, while having no effect on PrPres formed by wildtype PrP. This difference may be due to the observed differences in the binding of wild type and mutant PrP for glycosaminoglycans. Conclusions/Significance Cofactor requirements for PrPres formation are host species and prion strain specific and affected by disease associated mutations of the prion protein. This may explain both species and strain dependent propagation characteristics and provide insights into the underlying mechanisms of familial prion disease. It further highlights the challenge of designing effective therapeutics against a disease which affects a range of mammalian species, caused by a range of aetiologies and prion strains.
Introduction
Transmissible spongiform encephalopathies (TSE) or prion diseases are a group of invariably fatal neurodegenerative disorders associated with misfolded conformers (PrP Sc ) of the normal cellular prion protein (PrP C ). In animals the disease occurs naturally as scrapie in sheep, bovine spongiform encephalopathy (BSE) in cattle and chronic wasting disease (CWD) in deer and elk. In humans the disease occurs in sporadic, familial and acquired forms with phenotypes including Creutzfeldt-Jakob Disease, Gerstmann-Sträussler-Scheinker syndrome (GSS) and Fatal Familial Insomnia [1]. The transmissible nature of prion disease has been attributed to the template directed misfolding of PrP C , which is supported by the absolute requirement of PrP C expression for disease transmission and pathogenesis [2]. The protein only hypothesis proposes that PrP Sc is the principal component of this infectious agent or template [3]. However, it is not clear whether PrP Sc is the only component of the infectious and/or pathogenic entity.
Cell-free models of template directed PrP C misfolding (or conversion to PrP Sc ) have demonstrated that PrP Sc can induce a conformational change in PrP C , rendering it protease resistant (referred to as PrP res ) [4,5,6] and infectious under prescribed conditions [7]. Previously, the efficiency of this process using partially purified constituents has been low, often requiring a large excess of PrP Sc , which has been proposed to reflect the need for a catalytic co-factor in the process [8,9]. This view is further supported by the low levels of infectivity produced by folding recombinant PrP into a protease resistant form, although this may also reflect the absence of post-translational modification of the recombinant protein and the nature of the transgenic mouse model used in the bioassay [9,10].
The reported ability of polyanions to stimulate the misfolding of partially purified mammalian or recombinant PrP C and generate infectivity in the absence of an initiating PrP Sc seed provides compelling evidence for the role of a cofactor for the acquisition of prion infectivity [11,12]. Negatively charged macromolecules or polyanions, including nucleic acids [11,12,13,14,15,16,17,18,19,20], phospholipids [21,22,23,24] and glycosaminoglycans (GAGs) have been implicated as facilitating cofactors in the conversion of PrP C to PrP Sc and thereby in the transmission and pathogenesis of prion disease. Mechanistically, GAGs have been proposed to act as scaffolds to support the misfolding of PrP C [25]. Further, GAGs have been reported to act as receptors for PrP Sc on the cell surface [26,27], affect PrP C trafficking [28,29,30] and are also found in PrP Sc associated plaques [31,32]. Treatments, which modify the GAG content of prion infected cells, or treatment of infected cells with GAGs (or GAG mimetics) have been shown to clear prion infection [28,33]. Pentosan polysulphate (PPS), a heparan sulphate mimetic, can prolong incubation time in prion infected mice [34] and is currently being used on a compassionate basis in variant CJD [35,36]. Significantly, unlike RNA, GAGs are found at the cell surface and along the endosomal pathway where PrP Sc formation has been proposed to occur [25].
Whilst the ability of polyanions to stimulate PrP res formation in cell-free assays and from recombinant PrP appears to be species independent [14,37,38,39], PrP res formation following the specific depletion of polyanions from the PrP C substrate appears to be host species specific [40]. Using a cell-free model to investigate reaction conditions and cofactors affecting the susceptibility of a murine PrP C substrate to seeded PrP res formation, we report here that PrP res formation is significantly and specifically inhibited by the degradation of endogenous nucleic acids or heparan sulphate. We further show that treatment to modify the degree of GAG sulphation has a differential effect on the ability of wild-type PrP and PrP encoding a mutation associated with familial prion disease to act as a substrate for conversion to PrP res . This may be attributed to the differing ability of wild-type and mutant PrP C to bind to GAGs, suggesting that cellular cofactors differentially modulate sporadic and familial forms of prion disease and implicates subtle changes in the GAG repertoire in the pathogenesis of prion disease.
Results
Heparan sulphate and electrostatic involvement in cell free PrP res formation
The Conversion Activity Assay (CAA) generates PrP res from a PrP C substrate derived from an uninfected brain homogenate (UBH) seeded with a prion infected brain homogenate (IBH). Using the M1000 mouse adapted prion strain [41] as the IBH seed, PrP res formation occurs in a time ( Figure 1A) and PrP C dependent manner with PrP res generated from the balb/c (WT) but not Prnp −/− (KO) mouse brain homogenates ( Figure 1B). While the PrP C contained within the WT UBH was efficiently converted, there is evidence of further limiting, non-PrP factors in the process as only a small proportion of the available PrP C substrate (24 ± 9%, n = 8) is converted in the reaction using UBH derived from PrP C over expressing Tga20 mice [42]. That an increase in PrP C does not significantly increase conversion efficiency suggests that factors other than PrP C in the UBH may limit the output of the assay ( Figure 1B).
Electrostatic forces mediate many biological interactions [43,44,45] and have been reported to affect the folding and stability of PrP [46]. To investigate whether electrostatic forces play a role in the cell-free formation of PrP res the CAA was performed in buffers of increasing ionic strength (Figure 2A). Using a similar assay, the ability of IBH derived PrP Sc to drive the amplification of PrP res has been shown to decrease in the absence of NaCl [47]. However, the interaction was also significantly reduced in high ionic strength buffers (≥300 mM; p < 0.01, one-way ANOVA analysis relative to 125 mM NaCl), consistent with a physiologically relevant interaction and implicating electrostatic interactions in the seeded formation of PrP res .

Figure 1. Conversion activity of brain derived PrP C in the CAA seeded with infected brain homogenates. (A) UBH from balb/c (WT) mice were subjected to the CAA in the presence of IBH for differing periods of time (0-24 hours). (B) The CAA was performed for 16 hours using IBH added to DPBS, or UBH from KO, WT or PrP over expressing Tga20 (TG) mice. DPBS represents total (DPBS −, without PK treatment) and protease resistant (DPBS +, with PK treatment) PrP present in the IBH used to seed the CAA. Relative PrP C expression (without PK treatment) is shown at the right of the panel for KO, WT and TG mice. Conversion activity was determined as the fold increase in immunoreactive signal of WT relative to KO reactions after overnight (or as indicated) incubation at 37°C and treatment with PK (100 µg/ml, 1 hr at 37°C). Blots developed with 03R19. Molecular weights (kDa) are shown. Western blots are representative of replicated experiments; quantification is based on at least three experiments, mean and SEM are shown. *p < 0.05, **p < 0.01, ***p < 0.001 using one-way analysis of variance (ANOVA) with Tukey's multiple comparison test (GraphPad, Prism). doi:10.1371/journal.pone.0012351.g001
Electrostatic interactions may exist between polyanionic molecules, such as sulphated GAG (sGAG) species and the polybasic regions of PrP [48,49,50]. The contribution of sGAG to PrP res formation using the CAA described here was investigated by specific depletion of the endogenous sGAG content of the UBH used as the PrP C substrate in the CAA ( Figure 2B). Following optimisation of the conditions required for efficient sGAG digestion, the presence of sulphated species in the UBH was decreased ( Figure 2C) and a reduction of polysaccharide chains shown by decreased absorbance of purified GAGs separated using an anion exchange column ( Figure 2D). The capacity of the UBH to act as a conversion substrate in the CAA was specifically and significantly (p < 0.001) reduced following heparinase III treatment to preferentially degrade heparan sulphate but not other sulphated GAG species, including heparin and chondroitin sulphate species.
Treatment to deplete GAGs from the substrate did not reduce the amount of available PrP C substrate (data not shown).
It has been previously reported that the conversion activity of 263K, a hamster adapted sheep scrapie strain, is decreased by enzymatic treatment to reduce the nucleic acid content [14], and it has recently been suggested that nucleic acids do not contribute to the conversion activity of mouse adapted prion strains [40]. To determine if this is true of all mouse prion strains the CAA was performed using the M1000 strain of mouse adapted human prions. The effect of the concentration of MgCl 2 , a divalent cation required for the efficient activity of the nucleic acid digesting enzyme Benzonase, was first investigated to ensure that the effect of the treatment was enzyme specific ( Figure 3A). It was found that concentrations of MgCl 2 required for optimal activity of Benzonase (1-2 mM) did not significantly affect conversion activity, while concentrations at or over 5 mM significantly decreased conversion activity. Benzonase treatment of the mouse derived UBH significantly decreased conversion activity relative to the buffer control, whereas pre-treatment of the IBH seed of the CAA had no effect ( Figure 3B). This suggests that nucleic acids present in the UBH substrate, but not the IBH seed, can act as catalysts or scaffolds for PrP res formation.

Figure 2. Conversion activity of brain derived PrP C in the CAA seeded with infected brain homogenates is sensitive to ionic strength and inhibited by the specific depletion of heparan sulphate. (A) The CAA was performed using IBH diluted in UBH prepared from WT and KO mice in Tris-HCl pH 7.4 and the indicated concentrations of NaCl. ** Indicates a significant reduction in conversion activity relative to 125 mM NaCl. (B) The CAA was performed using IBH diluted in UBH prepared from WT mice in 125 mM NaCl/Tris-HCl pH 7.4 after treatment with Heparinase I (H), Heparinase III (HS), Chondroitinase ABC (Ch), their corresponding buffers (underlined) or without treatment (Con). Conversion activity was determined as the fold increase in immunoreactive signal of WT relative to KO reactions after overnight incubation at 37°C and treatment with PK (100 µg/ml, 1 hr at 37°C). Quantification (A, B) is based on at least three experiments, mean and SEM are shown. **p < 0.01, ***p < 0.001 using one-way analysis of variance (ANOVA) with Tukey's multiple comparison test (GraphPad, Prism). (C) The amount of sGAG purified from UBH treated with Heparinase I (H), Heparinase III (HS) and Chondroitinase ABC (Ch) or untreated (Con) was determined by Blyscan analysis and normalised to the amount of sGAG recovered from buffer controls (not shown). (D) The absorbance (254 nm) of sGAG eluted from a Q-Sepharose HiTrap anion exchange column in increasing concentrations of NaCl (0-1 M). GAGs were purified from control (%), Heparinase I treated (e), Heparinase III treated (#) or Chondroitinase ABC treated (+) brain homogenates. Quantification (C, D) is based on an analysis performed in duplicate. doi:10.1371/journal.pone.0012351.g002
Familial prion disease mutations affect sGAG binding and conversion activity of PrP C in the CAA
Mutations associated with familial prion disease located in the C-terminal region of PrP (121-231) do not reduce the stability of PrP [51]. However, the proline to leucine mutation at residue 101 of full length mouse PrP (P102L in the human PrP sequence) has been reported to alter the alpha-helical content of full length PrP [52], and this and other familial mutations have been reported to increase the GAG binding capacity of PrP [53]. Expression of endogenous levels of the 101L mutation is not sufficient to induce spontaneous disease in knock-in mice, although the mutation does alter the susceptibility of mice to prion infection [54]. To investigate how GAGs may affect the susceptibility of the 101L mutation to undergo seeded misfolding we developed a CAA model using mouse PrP C exogenously expressed in RK13 cells as the substrate (Figure 4). RK13 cells do not express detectable levels of PrP, but become susceptible to infection by mouse adapted prion strains through exogenous expression of mouse PrP [55,56,57]. When lysates of mouse PrP C expressing cells were used as the substrate in the CAA a significant increase in PrP res was detected relative to RK-13 cells that had been transfected with the empty expression vector (puroRK). This exogenous expression system also enabled the investigation of the conversion activity of moPrP harbouring a P101L mutation ( Figure 4A). The conversion activity of reactions containing mutant 101L-moPrP was significantly greater than that of wild-type 101P-moPrP, despite detection of lower 101L-moPrP levels ( Figure 4B).
To investigate whether the association of sGAG with PrP C affects the conversion process, wild-type 101P-moPrP and mutant 101L-moPrP cells were treated with chlorate, a general inhibitor of GAG sulphation [58], and PrP C formed in the presence of modified GAG sulphation was used as the substrate in the CAA. The conversion activity of wildtype 101P-moPrP was not significantly affected by chlorate treatment of the cells ( Figure 4A,C). In contrast the conversion activity of mutant 101L-moPrP was significantly increased following chlorate treatment ( Figure 4A,D).
To understand the different response of 101P and 101L moPrP to chlorate treatment we investigated their relative GAG binding capacities. The heparin binding capacity of 101L-moPrP was significantly greater (p < 0.001, two-way ANOVA) than that of 101P-moPrP ( Figure 5A, C). An N-terminally truncated form of PrP does not appreciably bind to sGAG [48], consistent with a GAG binding site in the N-terminal region of PrP [49,50]. Mutations associated with familial prion disease have been shown to reveal a cryptic GAG binding site downstream of residue 90, which may enable the C2 fragment of PrP to bind to GAGs [53]. PNGaseF treatment revealed a truncated fragment of 101L-moPrP bound to heparin whereas the same fragment present in the 101P-moPrP expressing cells was not detected ( Figure 5B). This fragment was detected using antibody 03R19 but not with the N-terminal antibody 03R17, indicating that it lacked N-terminal residues 23-30 (data not shown).
Discussion
Large negatively charged macromolecules (GAGs, nucleic acids and phospholipids) have been implicated in the pathogenesis of prion diseases. Nucleic acids, and in particular RNA, have been identified as potential co-factors in the formation of the disease associated isoform of the prion protein in hamster [11,14] but not mouse models of prion disease [40]. The current study, using a mouse adapted human prion strain, provides further insight into the prion strain and species specific requirements for prion propagation. We report that depletion of either nucleic acids or sGAG, in particular heparan sulphate, prevents the cell free formation of PrP res from murine PrP C seeded with mouse derived PrP Sc . Changes to GAG sulphation through chlorate treatment increased the ability of PrP C encoding the P101L mutation linked with familial prion disease to form PrP res , which may be related to the ability of this molecule to associate with under sulphated GAG species.
Recent reports have suggested that the cofactors required for efficient hamster PrP res formation may be species specific, as the depletion of RNA from a murine derived substrate did not affect PrP res formation [14,40]. In contrast, prion infectivity can be generated from either hamster PrP or recombinant murine PrP, in the absence of a PrP Sc seed, by the addition of RNA, albeit in the presence of lipids [11,12]. Thus it would appear that RNA stimulation of de novo PrP res formation is species independent. Furthermore, the data presented here, showing that depletion of nucleic acids from the PrP C substrate of a mouse adapted model of human prion disease prevented PrP res formation, indicate that the requirement for endogenous RNA may be prion strain dependent. Moreover, the use of a prion strain originally derived from a patient with GSS in the current study, rather than scrapie or bovine spongiform encephalopathy strains, raises the possibility that human prion strains have different cofactor requirements to animal prion strains. We are investigating this possibility further using a mouse adapted prion strain developed from a T2MM sporadic prion strain [56].
A further species or strain specific effect is raised by the significant and specific effect of heparan sulphate depletion on conversion activity shown here. Unlike earlier reports, in which heparinase III treatment did not affect the conversion activity of a hamster PrP C substrate seeded with 263K hamster adapted scrapie brain homogenate [14], we show that specific depletion of endogenous heparan sulphate inhibits the conversion activity of a mouse PrP C substrate when seeded with the M1000 mouse adapted human prion strain. The curing effect of heparinase III, but not heparinase I or chondroitinase ABC, treatment of prion infected N2a cells has been previously reported and proposed to relate either to the relative GAG content of N2a cells or to the cleavage specificity of the enzymes [30]. In particular it was suggested that prion propagation might require under sulphated GAGs that are the target of heparinase III, or relate to the length of the stub that survives enzymatic treatment. Our results are consistent with a role for specific GAG sulphation patterns in the conversion process.

Figure 4 (legend, continued). Conversion activity was determined as the fold increase in immunoreactive signal relative to puroRK reactions after overnight incubation at 37°C and treatment with PK (100 µg/ml, 1 hour at 37°C). Blots developed with 03R19. Molecular weight (kDa) is shown. Western blots are representative of replicated experiments; quantification is based on at least three experiments, mean and SEM are shown. *p < 0.05, two-tailed t-test of indicated pairs. In (C) and (D) the CAA was performed using KO and WT mouse brain homogenates (with quantitation shown as brain in A) and cell lysate derived from puroRK (N), 101P (P) and 101L (L) moPrP expressing cell lines. The truncated fragment (r) was not consistently observed in either wildtype or mutant cell lines and was not included in the analysis. doi:10.1371/journal.pone.0012351.g004
In the model studied here either depletion of nucleic acids or heparan sulphate led to the complete abolition of conversion activity. The apparent requirement for two cofactors in PrP res formation is not unexpected as the formation of infectivity from recombinant murine PrP required both RNA and lipids and the presence of lipids in infectivity derived from RNA stimulated mammalian PrP could not be excluded [11,12]. Glycosaminoglycans and nucleic acids are both large negatively charged molecules based on an underlying carbohydrate backbone. We considered whether the high concentrations of Benzonase used to digest the nucleic acids as described here and in other reports (1000 times the manufacturers' recommended levels for general digestion of nucleic acids) may have non-specifically affected the GAG content of the substrate. However, in preliminary experiments to investigate this possibility we found no definitive change in the sulphated GAG content of homogenates following Benzonase treatment. Therefore for this strain at least both RNA and heparan sulphate are absolutely required for PrP res formation.
Point mutations (including the P102L human PrP mutation, equivalent to the P101L mouse PrP mutation studied here) and octapeptide repeat expansions associated with familial prion disease increase the association of recombinant PrP with sGAGs [53]. Using PrP C expressed in a mammalian cell line capable of supporting prion infection [55,56,57] it was shown that the affinity of 101L-moPrP for heparin was significantly increased relative to wild type moPrP. This confirms that the increased affinity of recombinant PrP encoding familial mutations for heparin [53] is also observed for PrP expressed in mammalian systems.
The 101L-moPrP mutation was more susceptible to conversion to PrP res in the CAA than the wild type 101P-moPrP. As previously reported [59], introduction of the 101L mutation increased the protease resistance and insolubility of PrP expressed in RK13 cells (Welton and Lawson unpublished observations). Introduction of the same mutation does not alter the stability [60] or confer protease resistance [52] on recombinant PrP produced in a cofactor free environment, although the alpha helical content of the protein is decreased. The alpha helical content of purified PrP is decreased by binding to PPS, which has been proposed to increase the susceptibility of the protein to conversion in cell free assays by reducing the transition barrier [38]. We therefore propose that subtle conformational changes associated with the 101L-moPrP [52] result in an increase in the proportion and affinity of the 101L-moPrP population for a binding partner (present in a mammalian expression system) and increase its ability to convert to the PrP res form. An alternative possibility not investigated here is the origin of the M1000 strain from a patient with GSS associated with the P102L mutation [61,62]. Although adapted to mice and therefore on a wild type moPrP background we cannot exclude the possibility that the PrP Sc from the original prion strain preferentially converts PrP C encoding the original mutation. It may also reflect a faster replication kinetics as has been reported for PrP encoding octapeptide repeat insertions in a cell-free conversion assay [63].
Both the P102L and E220K mutations associated with familial prion disease do not require residues 23-27 for GAG binding, with binding of mutant PrP C mediated through a cryptic GAG binding site located between residues 109-136 [53]. Consistent with this we report the increased binding affinity of 101L-moPrP for heparin and the preferential binding of a 22 kDa fragment consistent with C2 (89-230) from 101L but not 101P moPrP. The association of PrP with GAGs through this alternative binding domain may play a role in the pathogenic process.
It was surprising that modification of GAG sulphation with chlorate did not decrease the conversion activity of moPrP. Chlorate treatment does not change the PK-resistance or solubility of wild type 101P-moPrP (Welton and Lawson, unpublished), although it did increase the PrP levels, perhaps by altering the metabolism of PrP C [29]. Chlorate competitively inhibits the formation of the sulphate donor 3′-phosphoadenosine 5′-phosphosulphate (PAPS) required for GAG sulphation. When cells are grown in medium containing normal sulphate supplementation, as performed here, sulphation of heparan sulphate is selectively inhibited, with 6-O-sulphation inhibited before 2-O-sulphation [64]. Previous studies have highlighted the importance of 2-O but not 6-O sulphation for the interaction of wildtype PrP with heparin [49] and the role of under sulphated GAGs in prion propagation [30]. Therefore it is possible that under the conditions used in this study the sulphation required for the interaction of wild type 101P-moPrP with sGAG remained unaltered. In contrast, due to the altered GAG binding pattern of 101L-moPrP, selective inhibition of sulphation may have increased the profile of GAGs that could bind and facilitate the conversion of mutant 101L-moPrP. Intriguingly, chlorate increased the solubility of mutant 101L-moPrP (Welton and Lawson, unpublished observations), which may have affected the ability of this species to be converted to PrP res .
This study has revealed a further complexity to the role of cofactors in the propagation of prions. Although prion infectivity can be generated from PrP in the absence of cofactors, it appears that the addition of cofactors may augment the conversion process [9,10]. This may explain both species and strain dependent propagation characteristics and provide insights into the underlying mechanisms of familial prion disease. It further highlights the challenge of designing effective therapeutics against a disease which affects a range of mammalian species, caused by a range of aetiologies and prion strains.
Ethics statement
The use of tissue sourced from prion infected (AEEC#04154, 0707227) and uninfected (AEEC#05090, 0810787) mice was approved for this study by the University of Melbourne Animal Ethics Committee.
Preparation of prion infected brain homogenates (IBH)
Brains were collected from balb/c mice in the terminal stage of disease following intracerebral inoculation with M1000 prions [41]. For use as a seed in the cell free assay of PrP res formation (Conversion Activity Assay described below), 10% (w/v) brain homogenates were prepared in calcium and magnesium free Dulbecco's phosphate buffered saline (DPBS) or 20 mM Tris-HCl pH 7.4 supplemented with 1% (v/v) Triton-X 100. Homogenates were prepared by passing tissue through a graded series of needles (18G, 21G, 24G, 26G). The final sample was then cleared at 200 × g for 2 minutes, the supernatant snap frozen in liquid nitrogen and stored at -80°C.
GAG lyase treatment of UBH
GAG specific lyases were obtained from Sigma. Heparinase I and Heparinase III from Flavobacterium heparinum were prepared in 10 mM Tris-HCl pH 7.4, 4 mM CaCl 2 , 50 mM NaCl. Chondroitinase ABC from Proteus vulgaris was prepared in 0.01% (w/v) BSA. Reconstituted lyases were stored at -80°C.
Following treatment, 40 mg wet tissue equivalents was diluted to 10% (w/v) in 20 mM Tris-HCl pH 7.4 and 125 mM NaCl (final concentration), snap frozen in liquid nitrogen and stored at -80°C for subsequent use in the cell-free conversion activity assay (CAA). GAGs were purified from the remaining 60 mg wet tissue equivalents as described previously [65]. Briefly, homogenates were delipidated at room temperature for 2 hours in 4 volumes of 1:2 chloroform:methanol (v/v). After centrifugation (3,000 × g, 10 minutes) the pellet was dissolved in ethanol at a ratio of 1.5 mL/g initial tissue equivalents to remove organic solvents. After centrifugation (3,000 × g, 10 minutes) the pellets were dried overnight at 37°C and suspended at 1.5 mL/g initial tissue equivalents in 0.1 M Tris-HCl pH 8.0, 1 mM CaCl 2 and treated with 10 mg/mL pronase (pre-incubated for 30 minutes at 60°C to eliminate exogenous glycosidase activity, Sigma) for 72 hours at 37°C by adding 40 µl of enzyme at 0, 24 and 48 hours. The preparation was then treated with Benzonase (0.25 U/mL, Progen) for 18 hours at 37°C, followed by 40 µl pronase for a further 24 hours at 60°C. O-linked carbohydrates were released from the remaining peptides by titration with 10 mM NaOH to pH 10-11 before the addition of NaBH 4 (1 M) and incubation for 16 hours at 45°C. Samples were neutralized by the addition of acetic acid and then centrifuged at 4,000 × g for 15 minutes. The supernatant was recovered and the pellet washed a further two times in purified water, with the wash supernatants added to the initial supernatant. After the addition of 2 volumes of acetone the resulting precipitate (4°C, for 24 hours) was recovered by centrifugation at 4,000 × g for 15 minutes, air dried and dissolved in 1 mL purified water.
Purified GAGs were analyzed for their sulphated GAG content by Blyscan analysis (Bicolor Ltd) as per the manufacturer's instructions, using Chondroitin sulphate C from shark cartilage (Sigma) to generate a standard curve. Samples were analyzed in triplicate using a microplate reader at a wavelength of 650 nm.
Further purification of GAG species was obtained by separation on a 0.7 × 2.5 cm Q-Sepharose HiTrap anion exchange column (GE Healthcare). The column was prepared with 5 column volumes of binding buffer (20 mM Tris-HCl pH 7.4). Purified GAGs were injected onto the column (1 mL/minute) and washed with 5 volumes of binding buffer to remove unbound molecules before bound GAGs were eluted using a continuous salt gradient (0-1 M NaCl in binding buffer); 1.5 mL fractions were collected at a flow rate of 1 mL/min and elution measured at an absorbance wavelength of 256 nm.
Benzonase treatment of brain homogenates
UBH and IBH prepared in DPBS were treated with Benzonase (Merck, 1.2 mU/ml, 10 mM MgCl 2 , 5 minutes at 37°C). IBH were then diluted 1 in 50 in DPBS/1% (v/v) TX-100 before use in the Conversion Activity Assay. UBH were diluted ½ by the addition of IBH in the Conversion Activity Assay.
Generation of RK13 cells expressing wildtype and mutant mouse PrP
Wildtype mouse PrP coding sequence was cloned into the pIRESpuro2 vector (Clontech) and verified by DNA sequencing [57]. The P101L mutation encoding the mouse sequence equivalent of the P102L mutation in human PrP was generated from wild-type mouse PrP cloned into the pIRESpuro2 vector using the Quikchange II site-directed mutagenesis kit (Stratagene) following a modified protocol. Briefly, primers (forward, 5′ CCATAATCAgTggAACAAgCTCAgCAAACCAAAAACC 3′, reverse 5′ ggTTTTTggTTTgCTgAgCTTgTTCCACTgATTATgg 3′) were designed to introduce a mismatch at residue 305, resulting in the coding of a leucine at codon 101 instead of a proline. Thermocycling was modified from the manufacturer's guidelines and consisted of denaturation at 95°C for 50 seconds, followed by 18 cycles of denaturation at 95°C for 50 seconds, primer annealing at 60°C for 50 seconds and elongation at 68°C for 12 minutes. The reaction was held at 68°C for 7 minutes. The vector was then transformed and expressed as per the manufacturer's directions. Mutations were confirmed by DNA sequencing.
Cell lysates were prepared for use in the CAA by suspending 10^6 cells in 50 µl DPBS + PI and subjecting them to two rounds of freeze-thawing to lyse the membranes.
For heparin binding experiments, cell monolayers were washed twice in ice cold PBS and lysed in the flask with lysis buffer (10 mM Tris pH 8.0, 100 mM NaCl, 1% (v/v) NP-40) at 4°C. Lysates were transferred to microfuge tubes and centrifuged for 3 minutes at 2,700 × g. Total protein concentration was determined by bicinchoninic acid assay (BCA; Pierce).
Conversion Activity Assay (CAA)
To 50 µl PrP C substrate (UBH or cell lysate as prepared above), 50 µl of IBH, diluted a further 1/50 in the appropriate buffer supplemented with 1% (v/v) Triton X-100, was added. Where the CAA was performed in buffers of increasing ionic strength, homogenates prepared in 20 mM Tris-HCl were supplemented with NaCl to give the final concentrations indicated. The samples were agitated overnight at 300 rpm, 37°C. Samples were then treated with PK (100 µg/ml) for 1 hour at 37°C. The reaction was stopped by the addition of Pefabloc SC (Roche) to 4 mM and an equal volume of 2× sample buffer added to each sample. Samples were heated to 100°C before electrophoresis and western blot analysis.
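As a simple illustration of how the read-out of this assay is quantified, conversion activity can be expressed as the fold increase in PK-resistant immunoreactive signal relative to the PrP-null (KO, or puroRK) control reactions, as described in the figure legends. The densitometry values in the sketch below are hypothetical.

```python
from math import sqrt
from statistics import mean, stdev

def conversion_activity(test_signals, control_signals):
    """Fold increase in PK-resistant PrP signal relative to control reactions.

    Each list holds densitometry readings (arbitrary units) from replicate
    CAA reactions; the values used here are hypothetical.
    """
    baseline = mean(control_signals)
    folds = [t / baseline for t in test_signals]
    sem = stdev(folds) / sqrt(len(folds)) if len(folds) > 1 else 0.0
    return mean(folds), sem

wt_mean, wt_sem = conversion_activity([8.2, 9.1, 7.6], [1.0, 1.2, 0.9])
print(f"WT conversion activity: {wt_mean:.1f} +/- {wt_sem:.1f} fold over control")
```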
Western immunoblot analysis
Samples prepared in NuPAGE sample buffer (Invitrogen) supplemented with 3% (v/v) β-mercaptoethanol were heated to 100°C and subjected to SDS-PAGE electrophoresis using NuPAGE Novex 12% Bis Tris gels and transferred to PVDF membranes as previously described [68]. PrP was detected with a polyclonal antibody raised against residues 89-103 of mouse PrP (03R19; [56]), developed with ECL-Plus chemiluminescent reagent (GE Healthcare) and imaged using ECL Hyperfilm (GE Healthcare) or the LAS-3000 imaging system (Fuji). Deglycosylation of samples before western blot analysis was performed by PNGaseF treatment as previously described [56].
Chlorate treatment of cell lines
GAG sulphation was inhibited by sodium chlorate (Sigma) treatment of RK13 cell lines as previously described [69]. Briefly, 70% confluent cells cultured in Optimem 10% FCS were treated with 30 mM sodium chlorate and maintained for 2 passages in chlorate before cells were harvested for use in the CAA. Chlorate treatment of RK-13 cells reduces the Alcian blue reactive species in the conditioned medium, which is consistent with the loss of GAG sulphation (data not shown).
Heparin binding of cell lysate derived PrP C
Heparin-Sepharose 6 Fastflow beads (GE Healthcare, UK) were equilibrated in lysis buffer (10 mM Tris-HCl pH 8.0, 100 mM NaCl, 1% v/v NP-40) for 15 minutes at room temperature and resuspended to their original volume in lysis buffer. Beads were added to cell lysates prepared as described above (30 µl bead preparation: 400 µg total protein in a final volume of 800 µl lysis buffer). In salt competition studies, lysates were prepared in the indicated concentration of NaCl before beads and lysates were incubated with agitation for 1-2 hours at room temperature and then centrifuged to pellet the beads. The pellet was washed 3 times in lysis buffer and the final bead pellet resuspended in 1× sample buffer and heated to 100°C before SDS-PAGE and western blot analysis. | 2014-10-01T00:00:00.000Z | 2010-08-23T00:00:00.000 | {
"year": 2010,
"sha1": "402b79289e95b2d856d9610c5732c5ce87314686",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0012351&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "402b79289e95b2d856d9610c5732c5ce87314686",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
5745432 | pes2o/s2orc | v3-fos-license | Medication Adherence and Technology-Based Interventions for Adolescents With Chronic Health Conditions: A Few Key Considerations
The number of children and adolescents with chronic health conditions (CHCs) has doubled over the past two decades. Medication adherence is a key component of disease management within these groups. Low adherence to prescribed medications is a known problem in adolescents with CHCs and is related to health outcomes, including quality of life, disease complications, and mortality. Adolescence is a critical time to create routines and health behaviors that optimize disease self-management and transition to adult care. The mounting interest in the development and use of mobile health tools provides novel opportunities to connect patients, particularly adolescents, with their providers outside of the clinic and to improve health outcomes. There is growing evidence to support the efficacy of technology-based approaches, in particular text-messaging and mobile apps, to improve adherence behavior in adolescents, although cost-effectiveness and long-term health benefits remain unclear. In this short viewpoint article, we review some important considerations for promoting medication adherence in adolescents with CHCs using technology-based approaches. (JMIR Mhealth Uhealth 2017;5(12):e202) doi: 10.2196/mhealth.8310
The number of children and adolescents with chronic health conditions (CHCs) continues to increase and has doubled over the past two decades, which represents an important public health concern [1]. Pediatric patients with CHCs, particularly adolescents, face challenges when trying to manage their illnesses and optimize their self-management skills. Adolescents with CHCs form a special subpopulation of pediatric patients (12-17 years old) learning how to self-manage medical decisions in preparation for an inevitable transition to adult care. Although the number of adolescents with CHCs is increasing, the number of validated self-reported scales for medication adherence that have been developed and tested specifically for adolescents is limited. Assessment of adolescent patients using parental proxy reports is not ideal. The use of tools designed for (and validated in) adults may be problematic, given the unique physiological, developmental, psychosocial, and education/vocational considerations of adolescence.
Adolescence is a critical time to create routines and health behaviors that optimize disease self-management and preparation for a seamless transition to adult care. The involvement of adolescents with CHCs in their own care can be demanding for both families and health care professionals, although it is an important investment given the short-term and long-term gains [2]. Medication adherence is a key component of disease management and low adherence to prescribed medications is a known problem in adolescents with CHCs, which is related to health outcomes, including quality of life, disease complications, and mortality [3]. Moreover, medication nonadherence has been associated with more frequent utilization of health services as well as higher health care expenses across pediatric CHCs [4]. Nevertheless, approaches to increase adherence to prescribed medications among adolescents with CHCs that are efficacious, practical, and cost-effective are lacking.
Taking daily medication(s) is a daunting task for many adolescents with CHCs, regardless of the prescribed regimen. Despite differences in disease-specific monitoring and treatment requirements among adolescents with CHCs, recent data suggest that barriers are similar across conditions [5]. Hanghoj and Boisen systematically reviewed data on perceived barriers from 2501 adolescents who had at least one of 14 chronic illnesses [5]. In order of frequency, the common barriers to medication adherence included: (1) aspects of physical well-being, such as side effects (including changes in physical appearance), reduction in symptoms/feeling well, and pill taste or swallowing problems; (2) forgetting to take medications, in part due to competing activities or changes in schedule; (3) desire to be normal and forget, ignore, or be free of their disease; and (4) lack of support from peers, parents, and health professionals. Therefore, the challenges that adolescents with CHCs need to overcome to optimize their medication adherence may be multi-faceted, but amenable to common adherence-enhancing interventions [5].
Clinics need information on evidence-based approaches to be able to implement these initiatives in the practice environment. Patient-centered and stakeholder-informed interventions developed with and for adolescents with CHCs are essential to improve adherence and enhance uptake, as well as engagement with interventions over time (particularly technology-based approaches). Access to personal technology, in particular smartphones, is becoming ubiquitous [6][7][8]. The mounting interest in the development and use of mobile health tools provides novel opportunities to connect patients with their providers outside of the clinic to improve health outcomes. Adolescents have adopted communication technology at a relatively fast pace, regardless of their socioeconomic status. A recent report indicates that most adolescents have widespread access to personal technology tools, including smartphones (73%), tablets (58%), desktop computers (87%), and/or laptop computers (81%) [6]. These findings suggest that technology-based interventions may present a unique opportunity to improve medication adherence and enhance self-management skills in adolescents across CHCs.
The use of personal and widely available technology-based approaches (in particular text-messaging, mobile apps, and mobile social media) to improve adherence behavior and other health outcomes in adolescents has shown overall acceptability and feasibility, with modest evidence for efficacy [9][10][11][12]. Nevertheless, the long-term health benefits, cost-effectiveness, and sustainability of patient engagement through technology-based approaches remain unclear [13,14].
Additionally, text messaging delivery methods often lack innovative features targeted to adolescents. Furthermore, methods to quantify patient fatigue, which is assumed to occur among adults with frequent text messaging, and the sustainability of patient engagement may apply differently to adolescents, representing a challenge for researchers. Therefore, while the evidence to date is encouraging and promising, further study of technology-based interventions for adolescent self-management and medication adherence, with rigorous study designs and across a wide range of CHCs, are needed. Moreover, further research is needed to explore adolescents' insights into the role and the design of technology-based interventions in identifying facilitators or preferred strategies to improve medication adherence. The consistent use of reporting guidelines for technology-based interventions is also critical to support the evidence generated, and conclusions that can be drawn, from adherence intervention studies [15].
While research efforts continue to produce better evidence for these technologies to promote health outcomes among adolescents with CHCs, we encourage medical providers to begin a conversation with leadership within their provider group or hospital about the incorporation of mobile technology into the practice environment, and to ask patients about their use of mobile technology and apps to promote self-care.
In conclusion, the number of adolescents with chronic illnesses continues to increase. Medication nonadherence is a challenge in adolescents across chronic conditions. Adolescents are frequent users of technology and engaging adolescents with chronic illnesses in their self-management could be invaluable for improving long-term outcomes. The use of technology-based interventions to improve medication adherence has shown promising results, and seeking adolescents' perspectives could enhance uptake and long-term engagement, and minimize patient fatigue. Following guidelines for reporting results of technology-based interventions, and validating adolescent-specific adherence assessment instruments, would enhance further comparative research across studies.
Conflicts of Interest
None declared. | 2018-04-03T04:21:37.229Z | 2017-12-01T00:00:00.000 | {
"year": 2017,
"sha1": "e186bece107539c98a05f368c434fbf9c32c0fd1",
"oa_license": "CCBY",
"oa_url": "http://apps.who.int/iris/bitstream/10665/42682/1/9241545992.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d511e832c3356620773c8a62ed90d86b8229fbd7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17234352 | pes2o/s2orc | v3-fos-license | Molecular detection of Helicobacter pylori antibiotic resistance in stool vs biopsy samples
AIM To compare (1) demographics in urea breath test (UBT) vs endoscopy patients; and (2) the molecular detection of antibiotic resistance in stool vs biopsy samples. METHODS Six hundred and sixteen adult patients undergoing endoscopy or a UBT were prospectively recruited to the study. The GenoType HelicoDR assay was used to detect Helicobacter pylori (H. pylori) and antibiotic resistance using biopsy and/or stool samples from CLO-positive endoscopy patients and stool samples from UBT-positive patients. RESULTS Infection rates were significantly higher in patients referred for a UBT than endoscopy (overall rates: 33% vs 19%; treatment-naïve patients: 33% vs 14.7%, respectively). H. pylori-infected UBT patients were younger than H. pylori-infected endoscopy patients (41.4 vs 48.4 years, respectively, P < 0.005), with a higher percentage of H. pylori-infected males in the endoscopy-compared to the UBT-cohort (52.6% vs 33.3%, P = 0.03). The GenoType HelicoDR assay was more accurate at detecting H. pylori infection using biopsy samples than stool samples [98.2% (n = 54/55) vs 80.3% (n =53/66), P < 0.005]. Subset analysis using stool and biopsy samples from CLO-positive endoscopy patients revealed a higher detection rate of resistance-associated mutations using stool samples compared to biopsies. The concordance rates between stool and biopsy samples for the detection of H. pylori DNA, clarithromycin and fluoroquinolone resistance were just 85%, 53% and 35%, respectively. CONCLUSION Differences between endoscopy and UBT patients provide a rationale for non-invasive detection of H. pylori antibiotic resistance. However, the GenoType HelicoDR assay is an unsuitable approach.
INTRODUCTION
Helicobacter pylori (H. pylori) is a gram-negative bacterium that specifically colonizes the epithelium of the human stomach, in particular the gastric antrum. It infects approximately 50% of the world's population. The prevalence of H. pylori varies globally, increasing with older age and lower socio-economic status. Most infected individuals will not develop any clinically significant complications; however the most common symptoms of infection are gastritis and gastric or duodenal ulcers. The diagnosis and treatment of H. pylori infection are critical factors in the prevention and management of these conditions [1][2][3] . H. pylori infection can be detected by invasive and non-invasive means, using a variety of diagnostic tests. The Maastricht IV/ Florence Consensus Report recommends the "Test and Treat" strategy for patients presenting with uncomplicated dyspepsia with no alarm symptoms associated with an increased risk of gastric cancer [2] . In the Irish healthcare setting, the urea breath test (UBT) is the current gold standard non-invasive test for H. pylori infection in patients managed by the "Test and Treat" strategy. The UBT is highly accurate with a sensitivity of 88%-95% and specificity of 95%-100% [4] . For patients presenting with new onset dyspepsia (above 45 years; European guidelines) or dyspepsia along with accompanying alarm symptoms such as weight loss, gastrointestinal bleeding, abdominal mass or iron deficient anaemia, endoscopy is recommended [2] . When an endoscopy is performed, H. pylori infection can be diagnosed using gastric biopsy specimens. The most common test employed is the rapid urease-test for Campylobacter-like organisms (CLO), which has a sensitivity of > 90% and specificity of > 95% [5] . Treatment for H. pylori infection is recommended in all symptomatic individuals. However, eradication rates have fallen in many countries in recent years [6][7][8] mainly due to poor patient compliance and the emergence of antibiotic resistant strains of H. pylori, particularly to clarithromycin and levofloxacin [9][10][11] . The European Helicobacter and Microbiota Study Group (EHMSG) and the most recent Maastricht IV/Florence Consensus recommend local surveillance of existing and emerging antibiotic resistance and that the combination of antibiotics for H. pylori eradication should be chosen according to the local resistance patterns [2,10] . Clarithromycin-based first-line triple therapy is no longer recommended in regions where antibiotic resistance surveillance indicates that clarithromycin resistance is above 15%-20% [2] . Since H. pylori is a fastidious bacterium, culture and antimicrobial sensitivity testing is time-consuming. The sensitivity of culture of H. pylori from gastric biopsy samples has been reported to be as low as 55% [11] . Molecular testing represents an attractive alternative to culture-based methods and has been recommended by the Maastricht Consensus guidelines to detect H. pylori and both clarithromycin and fluoroquinolone resistance when standard culture and sensitivity testing are unavailable [2] . Single point mutations (most commonly A2146C, A2146G and A2147G) within the H. pylori rrl gene that encodes the 23S ribosomal subunit confer clarithromycin resistance [11,12] . The most significant mutations conferring fluoroquinolone resistance are located at positions 87 (N87K) and 91 (D91N, D91G, D91Y) of the H. pylori gyrA gene, which encodes the A subunit of the DNA gyrase enzyme [11,12] . 
The GenoType HelicoDR assay allows for the molecular genetic identification of H. pylori and its resistance to clarithromycin and fluoroquinolones, such as levofloxacin. The assay has been reported to be efficient at detecting mutations predictive of antibiotic resistance when applied to H. pylori cultures or gastric biopsy specimens [13][14][15][16] , with a sensitivity and specificity of 94%-100% and 86%-99% for detecting clarithromycin resistance and 83%-87% and 95%-98.5% for detecting fluoroquinolone resistance, respectively [16,17] . Currently, H. pylori antibiotic resistance surveillance is based primarily on patients undergoing invasive testing by means of endoscopy. However, most patients are diagnosed by non-invasive methods such as the UBT. As such, antibiotic resistance data obtained solely from endoscopy patients may not reflect the true prevalence of H. pylori infection and the rates of antibiotic resistance in symptomatic patients. The aims of this study were to (1) compare demographics and prevalence of H. pylori infection in patients referred for endoscopy with those of patients referred for a UBT; and (2) evaluate the potential use of the GenoType HelicoDR assay for the non-invasive detection of H. pylori and antibiotic resistant infection using stool samples.
Study design and ethics
A prospective study was carried out in a tertiary referral teaching hospital (Adelaide and Meath Hospital, Dublin, Ireland) affiliated with Trinity College Dublin. Patients who had been referred to the endoscopy clinic were included from August 2014 until March 2016. The study received ethical approval from the Adelaide and Meath Hospital Research Ethics Committee. Informed consent was obtained from all patients before enrolment.
Study population
Inclusion criteria were (1) ability and willingness to participate in the study and to provide informed consent; and (2) confirmed H. pylori infection by UBT, by a positive rapid urease test (TRI-MED Distributors, PTY LTD, Washington, United States) read at 60 min, and/or by histology.
Sample collection and antimicrobial susceptibility genotyping
A single corpus and antrum biopsy from each patient was placed into DENT transport medium (brain heart infusion broth containing 2.5% (w/v) yeast extract, 5% sterile horse serum and Dent Helicobacter Selective Supplement; Oxoid, Basingstoke, United Kingdom) for transport to the research laboratory. Biopsies were placed into fresh collection tubes and stored at -20 ℃ until processed for genomic DNA isolation using the QIAamp DNA Mini Kit (Qiagen GmbH, Hilden, Germany) according to the manufacturer's instructions. Patients attending for endoscopy or the UBT were invited to provide a stool sample collected within 24 h of their appointment. Stool samples were stored at 4 ℃ until processed for genomic DNA isolation using the PSP Spin Stool DNA Plus Kit (STRATEC Molecular GmbH, Berlin, Germany) according to the manufacturer's instructions. All isolated DNA was stored at -20 ℃ until genotyping for clarithromycin and fluoroquinolone resistance-mediating mutations was performed using the GenoType HelicoDR assay (Hain Lifescience GmbH, Nehren, Germany). Multiplex amplification of DNA regions of interest was performed using the biotinylated primers supplied in the GenoType HelicoDR kit and the Hotstart Taq DNA polymerase kit (Qiagen). PCR products were reverse hybridised to DNA strips containing probes for gene regions of interest, developed and interpreted according to the manufacturer's instructions. Briefly, all strips were analysed for the presence of a conjugate control band (to indicate successful conjugate binding and substrate reaction), an amplification control band (to indicate a successful amplification reaction), a H. pylori control band (to document the presence of a H. pylori strain) and gene locus control bands for gyrA and 23S (to indicate successful detection of the gene regions of interest). In addition, the strips were analysed for the presence of wild type and/or mutation bands. An infection was considered clarithromycin sensitive when the 23S wild-type probe stained positive and clarithromycin resistant if one of the 23S mutation probes stained positive. As per the manufacturer's instructions, results of both positions of the gyrA gene were combined to draw conclusions about fluoroquinolone resistance. Thus, an infection was only considered fluoroquinolone sensitive when one of the wild-type probes for codon 87 of the gyrA gene stained positive together with a positive wild-type probe for codon 91. Fluoroquinolone resistance was indicated if either the wild-type probes for codon 87 or the wild-type probe for codon 91 stained negative, or if one of the mutant codon 87 or 91 probes stained positive. For all mutation probes, only bands whose intensities were equal to or stronger than the amplification control were considered positive.
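The banding rules above amount to a small decision procedure; the sketch below is our own illustration of that logic in Python, using hypothetical probe names and a hypothetical dict-based input (it is not part of the kit or its documentation):

```python
def interpret_helicodr(bands):
    """Summarise one GenoType HelicoDR strip.

    `bands` maps probe names to True if the band is at least as intense as the
    amplification control, otherwise False. Probe names are illustrative only.
    """
    controls = ["conjugate_ctrl", "amplification_ctrl", "hpylori_ctrl",
                "gyrA_locus_ctrl", "rrl23S_locus_ctrl"]
    if not all(bands.get(c, False) for c in controls):
        return {"valid": False}

    # Clarithromycin (23S rrl): resistant if any mutation probe is positive;
    # sensitive if the wild-type probe is positive and no mutation probe is.
    clari_mut = any(bands.get(p, False) for p in
                    ["rrl23S_A2146C", "rrl23S_A2146G", "rrl23S_A2147G"])
    clari = ("resistant" if clari_mut
             else "sensitive" if bands.get("rrl23S_wt", False)
             else "indeterminate")

    # Fluoroquinolones (gyrA): sensitive only if wild type at codons 87 and 91;
    # resistant if either wild-type signal is missing or any mutation probe is positive.
    wt87 = any(bands.get(p, False) for p in ["gyrA87_wt1", "gyrA87_wt2", "gyrA87_wt3"])
    wt91 = bands.get("gyrA91_wt", False)
    fq_mut = any(bands.get(p, False) for p in
                 ["gyrA87_N87K", "gyrA91_D91N", "gyrA91_D91G", "gyrA91_D91Y"])
    fq = "resistant" if (not wt87 or not wt91 or fq_mut) else "sensitive"

    return {"valid": True, "clarithromycin": clari, "fluoroquinolone": fq}
```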
Statistical analysis
Statistical analysis was carried out using GraphPad Prism (GraphPad Software Inc., CA, United States). Continuous variables are presented as arithmetic mean and SD. P values for continuous variables were calculated using the two-tailed independent t-test. Categorical variables are presented as percentages and 95% confidence intervals (95%CI). P values for categorical variables were calculated using Fisher's exact test or the Pearson χ2 test. In all cases, a P value less than 0.05 was considered significant.
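As an illustration of the categorical comparisons described here, the overall prevalence contrast reported in the Results below (75/227 UBT-positive vs 74/389 CLO-positive patients) can be set up as a 2×2 Fisher's exact test with SciPy. This sketch returns only an odds ratio and P value and is not the software actually used in the study (GraphPad Prism):

```python
from scipy.stats import fisher_exact

# 2x2 table for overall H. pylori prevalence: rows = cohort (UBT, endoscopy),
# columns = (positive, negative); counts taken from the Results section.
table = [[75, 227 - 75],
         [74, 389 - 74]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided P = {p_value:.2g}")
```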
Prevalence of H. pylori infection and demographics of endoscopy and UBT patients
A schematic of patient inclusion and analysis is presented in Figure 1. In all, 616 patients were included in the study between August 2014 and March 2016; 389 patients (mean age 52.3 years, 42.2% male) underwent endoscopy and 227 patients (mean age 39.6 years, 30.4% male) a UBT (Table 1). The overall prevalence of H. pylori infection was significantly higher in the UBT cohort than in the endoscopy cohort [33.0% (n = 75) vs 19.0% (n = 74), P < 0.001; 95%CI: 6.58-21.54] (Figure 1). Of the H. pylori-positive endoscopy patients (CLO-positive), 17 had been previously treated for H. pylori infection; therefore, the prevalence of primary H. pylori infection in this cohort was 14.7% (n = 57). All of the H. pylori-positive UBT patients were treatment naïve, so the prevalence of primary H. pylori infection was also significantly higher in patients referred for UBT than for endoscopy (33.0% vs 14.7%, P < 0.001, 95%CI: 11.07-25.65; Figure 1 and Table 1). In keeping with the guidelines recommending endoscopy for symptomatic patients over 45 years, H. pylori-positive patients in the endoscopy cohort were significantly older than those in the UBT cohort (48.4 years vs 41.4 years; P < 0.005, 95%CI: 2.19-11.81). The proportion of males among H. pylori-positive patients was also greater in the endoscopy cohort than in the UBT cohort (52.6% vs 33.3%, P = 0.03, 95%CI: 1.23-36.29; Table 1). Taken together, these findings indicate significant differences in demographics and in the prevalence of both overall and primary H. pylori infection between patients referred for endoscopy and those referred for the UBT.
Comparison of H. pylori detection and the prevalence of antibiotic resistance-mediating mutations using the GenoType HelicoDR assay in endoscopy vs UBT patients using biopsies and stool samples, respectively
The GenoType HelicoDR assay is based on DNA strip technology that enables the molecular genetic identification of H. pylori and of its resistance to clarithromycin and fluoroquinolones; previous studies have demonstrated strong correlations between results obtained using the GenoType HelicoDR assay on biopsy specimens and those obtained by culture and antimicrobial testing [14,16,17]. In order to evaluate the GenoType HelicoDR assay for the non-invasive detection of H. pylori using stool samples, we first set out to compare the detection rate of H. pylori infection using stool samples from H. pylori-positive UBT patients with that obtained using biopsy samples from CLO-positive endoscopy patients. Initial control experiments showed that the assay did not detect H. pylori DNA in stool samples from 2 uninfected UBT-negative patients (not shown). In H. pylori-infected patients, the GenoType HelicoDR assay was significantly more accurate at detecting H. pylori infection using biopsy samples than stool samples [98.2% (n = 54/55) vs 80.3% (n = 53/66), P < 0.005, 95%CI: 6.10-29.66] (Figure 1 and Table 2).
In terms of gyrA genotyping, the gyrA locus control probe was positive in all DNA samples isolated from biopsy tissue (100%, n = 54/54), but in only 86.8% (n = 46/53) of H. pylori-positive DNA samples isolated from stool. Fluoroquinolone resistance-mediating mutations were detected in 9.3% (n = 5/54) of biopsy samples from CLO-positive patients compared to 13% (n = 6/46) of stool samples from UBT-positive patients (P = 0.56, 95%CI: -9.99 to 18.28; Figure 1 and Table 2). For both endoscopy and UBT patients, all samples that were positive for fluoroquinolone resistance mutations were also positive for clarithromycin resistance mutations (Table 2). Taken together, these findings indicate that the GenoType HelicoDR assay is more accurate at detecting H. pylori DNA using biopsies from CLO-positive endoscopy patients than using stool DNA isolated from UBT-positive patients. In addition, the assay detected a significantly higher rate of clarithromycin resistance using stool samples from patients diagnosed by the UBT than that obtained when biopsy samples from CLO-positive endoscopy patients were analysed.
Evaluation of the GenoType HelicoDR assay for the detection of resistance-mediating mutations by comparing stool and biopsy analyses from individual patients
Given the high rate of clarithromycin resistance detected using stool specimens from UBT-positive patients (96.2%; Table 2) and the lack of published data on the use of the GenoType HelicoDR assay for stool sample analysis, we next set out to directly compare a stool DNA sample with a biopsy DNA sample isolated from a subset of the CLO-positive endoscopy patients. In all, stool and biopsy samples from 20 CLO-positive patients were analysed (mean age 46.8 ± 15.8 years, 50% male). H. pylori DNA was detected in 95% (n = 19/20) of biopsy samples and 90% (n = 18/20) of stool samples from the CLO-positive patients. Concordance between results from biopsy and stool samples of individual patients for the detection of H. pylori DNA was 85% (n = 17/20; Figure 1 and Table 3). In terms of antibiotic resistance, results were compared in the 17 patients with concordant results for the presence of H. pylori DNA in both their stool and biopsy samples. Concordance between the analyses of stool and biopsy samples of individual patients was just 52.9% (n = 9/17) for clarithromycin resistance and 35.3% (n = 6/17) for fluoroquinolone resistance (Figure 1, Table 3). Higher rates of both clarithromycin and fluoroquinolone resistance were detected in stool samples compared to biopsy samples obtained from the same patient (Table 3), suggesting a lack of specificity of the assay for the detection of antibiotic resistance-mediating mutations using DNA isolated from stool samples.
DISCUSSION
As the recommended first-line therapy for H. pylori infection should be guided by the local prevalence of primary clarithromycin resistance, and third-line and subsequent treatment regimens should be guided by antimicrobial susceptibility testing [2], methods for detecting antibiotic resistance are of great interest. Antimicrobial susceptibility testing for H. pylori is mainly performed using biopsy specimens obtained by invasive means at endoscopy. As a result, findings on the prevalence of H. pylori infection and antibiotic resistance based solely on this patient cohort may not represent the true rates of resistance in a given population. In order to determine whether H. pylori-infected endoscopy patients are representative of the wider H. pylori-infected population, we compared the prevalence of infection and patient demographics between endoscopy patients and those referred for non-invasive H. pylori diagnosis by the UBT. Indeed, we found significant differences between the two patient cohorts. Both the overall infection rate and the prevalence of primary infection in H. pylori treatment-naïve patients were significantly higher in patients referred for a UBT than for endoscopy (overall infection rates of 33% vs 19%, respectively, and primary infection rates of 33% vs 14.7%, respectively). H. pylori-infected UBT patients were also significantly younger than H. pylori-infected endoscopy patients (41.4 vs 48.4 years, respectively), with a higher percentage of H. pylori-infected males in the endoscopy than in the UBT cohort (52.6% vs 33.3%). Both age and sex have been reported as risk factors for H. pylori antibiotic resistance; for example, age > 50 years has been reported as a risk factor for levofloxacin resistance, and being female has been associated with metronidazole resistance in the most recent pan-European study on antimicrobial resistance [10]. Thus, the statistically significant differences in age and sex between endoscopy and UBT patients in our study suggest that H. pylori-infected endoscopy patients are likely not representative of the wider H. pylori-positive cohort, providing a strong rationale for performing more widespread antimicrobial susceptibility testing.
Successfully extending molecular-based methods to diagnose H. pylori non-invasively would greatly enhance our ability to more accurately assess the prevalence of resistance to a range of antibiotics, and would enable clinicians to offer personalised antimicrobial susceptibility-based therapy to a wider number of patients. H. pylori DNA has been detected in a number of clinical specimens including blood, stool samples and oral cavity specimens [18][19][20][21][22]. Analysis of stool samples has shown the most promise for the molecular detection of clarithromycin resistance-mediating mutations to date [18,[23][24][25][26][27][28]. Studies have demonstrated sensitivity and specificity values of 83%-98% and 98%-100%, respectively, for the detection of clarithromycin resistance using the H. pylori ClariRes Assay (Ingenetix) to analyse stool samples [23][24][25][26]. However, data on the detection of H. pylori fluoroquinolone resistance using stool samples are lacking. Although the GenoType HelicoDR assay has proven useful for detecting clarithromycin and fluoroquinolone resistance in biopsy or culture specimens [13][14][15][16][17], evaluation of the assay for the analysis of stool specimens presented herein proved suboptimal. Firstly, the assay detected H. pylori infection in a significantly lower percentage of H. pylori-infected patients when stool rather than biopsy specimens were analysed (80%-90% vs 95%-98.2%, respectively; Tables 2 and 3). As H. pylori specifically colonizes the stomach and is not an intestinal bacterium, it is present only in low numbers in the stool, a factor which may have impacted the sensitivity of H. pylori detection using stool samples in our study. Additionally, H. pylori DNA may be exposed to enzymatic or mechanical degradation during transit from the stomach through the intestines [22]. When results using biopsy samples from individual H. pylori-infected patients were directly compared with those obtained from their stool samples, concordance scores for clarithromycin and fluoroquinolone resistance were just 52.9% and 35.3%, respectively. In addition, a higher rate of clarithromycin and fluoroquinolone resistance was detected in DNA isolated from the stool samples compared to DNA isolated from biopsy samples from the same patient (Table 3). Given that previous studies have demonstrated strong correlations between results obtained using the GenoType HelicoDR assay on biopsy specimens compared to culture and antimicrobial testing [14,16,17], this would suggest that stool sample analysis using the GenoType HelicoDR assay is less specific than biopsy sample analysis, providing an explanation for the high rates of antibiotic resistance obtained using stool samples from the UBT patients in Table 2. The presence of large amounts of diverse commensal bacteria in the stool may hamper the specificity of the GenoType HelicoDR assay in the detection of H. pylori antibiotic resistance-mediating mutations. Our findings suggest that the assay is currently unsuitable for the accurate detection of clarithromycin and fluoroquinolone resistance-mediating mutations in stool specimens. Further studies are required to extend approaches for the non-invasive detection of H. pylori resistance to include multiple antibiotics. Recent advances in next generation DNA sequencing technologies may provide more robust opportunities for the accurate analysis of specific resistance-associated DNA regions.
The successful optimisation of molecular-based antimicrobial susceptibility testing methods will enable resistance data obtained from patients managed by the "Test and Treat" strategy to be utilised in choosing effective antibiotics for the treatment of H. pylori. In this way, eradication rates for H. pylori may be improved.
Background
Currently antimicrobial susceptibility testing for Helicobacter pylori (H. pylori) is mainly performed using cultures isolated from tissue biopsy samples obtained at endoscopy by invasive means. However, many patients are diagnosed with H. pylori infection by non-invasive means, such as the urea breath test. As such, antibiotic resistance data based solely on endoscopy patients may not truly reflect the prevalence of antibiotic resistance in the wider H. pylori infected population.
Research frontiers
Molecular methods for the detection of H. pylori antibiotic resistance-mediating mutations offer a more rapid alternative to standard culture-based methods. Studies have shown that data generated using molecular methods on tissue biopsy samples correlates well with culture and antimicrobial susceptibility testing. Data on the use of molecular methods, in particular the GenoType HelicoDR assay, for the analysis of stool samples is limited.
Innovations and breakthroughs
The present findings suggest that the GenoType HelicoDR assay is not suitable for the accurate detection of antibiotic resistance-mediating mutations using stool samples from H. pylori infected patients. Alternative PCR or DNA sequencing-based methods may show more potential.
Applications
While the GenoType HelicoDR assay has been shown to be accurate for the analysis of clarithromycin-and fluoroquinolone-mediating mutations using biopsy tissue samples, the present findings indicate that this assay is not suitable for the analysis of stool samples. | 2018-05-08T18:16:23.452Z | 2016-11-07T00:00:00.000 | {
"year": 2016,
"sha1": "1dca3c244d05723ec0dcfc9a0f3ea2734314cea3",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v22.i41.9214",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1dca3c244d05723ec0dcfc9a0f3ea2734314cea3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259915848 | pes2o/s2orc | v3-fos-license | Investigation of the rotational spectrum of CD$_3$OD and an astronomical search toward IRAS 16293$-$2422
Solar-type prestellar cores and protostars display large amounts of deuterated organic molecules. Recent findings on CHD$_2$OH and CD$_3$OH toward IRAS 16293-2422 suggest that even fully deuterated methanol, CD$_3$OD, may be detectable as well. However, searches for CD$_3$OD are hampered in particular by the lack of intensity information from a spectroscopic model. The objective of the present investigation is to develop a spectroscopic model of CD$_3$OD in low-lying torsional states that is sufficiently accurate to facilitate searches for this isotopolog in space. We carried out a new measurement campaign for CD$_3$OD involving two spectroscopic laboratories that covers the 34 GHz-1.1 THz range. A torsion-rotation Hamiltonian model based on the rho-axis method was employed for our analysis. Our resulting model describes the ground and first excited torsional states of CD$_3$OD well up to quantum numbers $J \leq 51$ and $K_a \leq 23$. We derived a line list for radio-astronomical observations from this model that is accurate up to at least 1.1 THz and should be sufficient for all types of radio-astronomical searches for this methanol isotopolog. This line list was used to search for CD$_3$OD in data from the Protostellar Interferometric Line Survey of IRAS 16293$-$2422 obtained with the Atacama Large Millimeter/submillimeter Array. While we found several emission features that can be attributed largely to CD$_3$OD, their number is still not sufficient to establish a clear detection. Nevertheless, the estimate of 2$\times 10^{15}$ cm$^{-2}$ derived for the CD$_3$OD column density may be viewed as an upper limit that can be compared to column densities of CD$_3$OH, CH$_3$OD, and CH$_3$OH. The comparison indicates that the CD$_3$OD column density toward IRAS 16293-2422 is in line with the enhanced D/H ratios observed for multiply deuterated complex organic molecules.
Introduction
Methanol, CH 3 OH, is among the most abundant polyatomic molecules in the interstellar medium (ISM) as evidenced by its early radio astronomical detection (Ball et al. 1970). It is observed both in its solid state and gas phase toward star-forming regions (e.g., Herbst & van Dishoeck 2009) and is an important product of the chemistry occurring on the icy surfaces of dust grains (e.g., Tielens & Hagen 1982;Garrod & Herbst 2006). As a slightly asymmetric rotor, whose excitation is strongly dependent on kinetic temperature, methanol presents a useful diagnostic tool for evaluating the physical conditions prevailing in star-forming regions (Leurini et al. 2004). Due to its ubiquity in the ISM, methanol is often taken as a reference for studies of the chemistry of more complex organic molecules (e.g., Jørgensen et al. 2020). (Data availability note: transition frequencies from this and earlier work are given as supplementary material. We also provide quantum numbers, uncertainties, and residuals between measured frequencies and those calculated from the final set of spectroscopic parameters. The data are available at Centre de Données astronomiques de Strasbourg (CDS) via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.ustrasbg.fr/cgi-bin/qcat?J/A+A/.)
The ubiquity and high abundance of interstellar methanol make this molecule suitable for studying the degree of deuteration, which is considered as an indicator of the evolution of low-mass star-forming regions (Crapsi et al. 2005;Ceccarelli et al. 2007;Chantzos et al. 2018). Not only has singly deuterated methanol been detected in the ISM, but also doubly and triply deuterated as well. The singly deuterated methanol isotopologs, CH 3 OD (Mauersberger et al. 1988) and CH 2 DOH (Jacq et al. 1993), were detected first. Some time later, Parise et al. (2002) observed CHD 2 OH toward IRAS 16293−2422, followed by Parise et al. (2004) detecting CD 3 OH toward the same object. In addition, CHD 2 OH was also found toward several other low-mass protostars (Parise et al. 2006;Taquet et al. 2019) and, most recently, in a prestellar core (Lin et al. 2023).
Multiply deuterated isotopic species frequently appear to be overabundant in comparison to the D/H ratio inferred from the singly and non-deuterated species (see e.g., results for doubly deuterated isotopologs of methyl cyanide (CHD 2 CN; Calcutt et al. 2018), methyl formate (CHD 2 OCHO; Manigand et al. 2019), and dimethyl ether (CHD 2 OCH 3 ; Richard et al. 2021) toward the low-mass protostellar system IRAS 16293−2422), which may reflect their formation processes at low temperatures (Taquet et al. 2014). Recently revisited abundances of CHD 2 OH and CD 3 OH toward IRAS 16293-2422 employing the Protostellar Interferometric Line Survey (PILS) data (Drozdovskaya et al. 2022;Ilyushin et al. 2022) also demonstrate this overabundance, suggesting that a search for fully deuterated isotopolog of methanol, CD 3 OD, would be promising and timely.
The rotational spectrum of CD 3 OD had already been observed in the lab in the 1950s in the context of other methanol isotopologs, in particular, to determine the molecular structure (Venkateswarlu et al. 1955). Lees (1972) published an account of the rotational spectrum of CD 3 OD in the microwave region. Additional rotational transition frequencies in the millimeter and/or submillimeter region, with infrared (Mukhopadhyay et al. 2004) or microwave accuracies, were reported later (Baskakov & Pashaev 1992;Xu et al. 2004;Müller et al. 2006). Torsional transition frequencies were provided very recently in two studies (Mukhopadhyay 2021, 2022); these publications also contain some millimeter and submillimeter assignments. The rovibrational spectrum of CD 3 OD beyond the torsional manifold was also investigated in some studies, with Lees & Billinghurst (2022) being the most recent one, dealing with the COD bending fundamental at 775 cm −1 .
The goal of our present investigation is to develop a spectroscopic model of the CD 3 OD isotopolog in low-lying torsional states which is sufficiently accurate to provide reliable calculations of line positions and line strengths for astronomical searches for CD 3 OD in the ISM. New measurements were carried out to extend the covered frequency range up to 1.1 THz. The obtained new data were combined with previously published far-infrared measurements to form the final dataset, involving rotational quantum numbers up to J = 51 and K = 23. A fit within experimental error was obtained for the ground and first excited torsional states of the CD 3 OD molecule using the so-called rho-axis method.
We generated a line list based on our present results, which we applied in a search for CD 3 OD in the Atacama Large Millimeter/submillimeter Array (ALMA) data of the Protostellar Interferometric Line Survey of the deeply embedded protostellar system IRAS 16293−2422 (Jørgensen et al. 2016). While we did not detect CD 3 OD confidently, a number of emission lines that can be attributed, at least in large part, to CD 3 OD suggest that a detection is within the reach of ALMA, for instance, by targeting IRAS 16293-2422 through deep observations at lower frequencies, where line confusion may be less problematic.
The rest of the manuscript is organized as follows. Section 2 provides details on our laboratory measurements. The theoretical model, spectroscopic analysis, and fitting results are presented in Sections 3 and 4. Section 5 describes our astronomical observations and the results of our search for CD 3 OD, while Section 6 gives the conclusions of our investigation.
Rotational spectra at IRA NASU
The measurements of the CD 3 OD spectrum at the Institute of Radio Astronomy (IRA) of NASU were performed in the frequency ranges of 34.5−184 GHz and 234−420 GHz using an automated synthesizer-based millimeter wave spectrometer (Alekseev et al. 2012). This instrument belongs to a class of absorption spectrometers and uses a set of backward wave oscillators (BWO) to cover the frequency range from 34.5 to 184 GHz, allowing for further extension to the 234−420 GHz range with the help of a solid state tripler from Virginia Diodes, Inc. (VDI). The frequency of the BWO probing signal is stabilized by a two-step frequency multiplication of a reference synthesizer in two phase-lock-loop stages. A commercial sample of CD 3 OD was used, and all measurements were carried out at room temperature with sample pressures (about 2 Pa) providing linewidths close to the Doppler-limited resolution. Due to the high rate of D/H exchange at the OH group in CD 3 OD, the recorded spectrum contains numerous lines belonging to the CD 3 OH isotopolog. These lines do not pose any problem since they may be easily distinguished using the results of our recent study of the CD 3 OH spectrum (Ilyushin et al. 2022). Estimated uncertainties for measured line frequencies were 10 kHz, 30 kHz, 50 kHz, and 100 kHz depending on the observed signal-to-noise ratios (S/N).
Rotational spectra at the Universität zu Köln
The measurements at the Universität zu Köln were carried out at room temperature using a 5 m long single path Pyrex glass cell of 100 mm inner diameter and equipped with high-density polyethylene windows. The cell was filled with 1.5 Pa CD 3 OD and refilled after several hours because of the pressure rise due to minute leaks. We utilized three VDI frequency multipliers driven by Rohde & Schwarz SMF 100A synthesizers as sources and a closed cycle liquid He-cooled InSb bolometer (QMC Instruments Ltd) as detector to cover frequencies between 370 and 1095 GHz almost entirely; a small gap near 750 GHz occurred because of a strong water line. Other water lines or low power, especially at the edges, limited the sensitivity in some frequency regions. Frequency modulation was used throughout. The demodulation at 2f causes an isolated line to appear close to a second derivative of a Gaussian. Additional information on this spectrometer system is available in Xu et al. (2012). We were able to achieve uncertainties of 5−10 kHz for very symmetric lines with good S/N, as demonstrated in recent studies on excited vibrational lines of CH 3 CN (Müller et al. 2021) or on isotopic oxirane (Müller et al. 2023). Uncertainties of 10 kHz, 30 kHz, 50 kHz, 100 kHz, and 200 kHz were assigned, depending on the observed S/N and on the frequency range.
Spectroscopic properties of CD 3 OD and our theoretical approach
Fully deuterated methanol, CD 3 OD, is a nearly prolate top (κ ≈ −0.959) with a rather high coupling between internal and overall rotations in the molecule (ρ ≈ 0.82) and a torsional potential barrier V 3 of about 362 cm −1 . The torsional problem corresponds to an intermediate barrier case (Lin & Swalen 1959) with the reduced barrier s = 4V 3 /(9F) ≈ 10.9, where F is the rotation constant of the internal rotor. In comparison with the parent isotopolog (Xu et al. 2008), CD 3 OD has significantly smaller rotational parameters. Thus, we expect that rotational levels with higher J and K values will be accessible in a room temperature experiment for CD 3 OD compared to CH 3 OH.
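For orientation, the reduced-barrier relation can be written out explicitly; inserting the quoted values of V 3 and s gives an internal-rotation constant of roughly 15 cm −1 (a back-of-the-envelope consistency check on our part, not a fitted parameter):

$$ s = \frac{4V_3}{9F} \quad\Longrightarrow\quad F \approx \frac{4 \times 362\ \mathrm{cm^{-1}}}{9 \times 10.9} \approx 14.8\ \mathrm{cm^{-1}} . $$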
As the theoretical approach in the present study, we employ the so-called rho-axis-method (RAM), which has proven to be the most effective approach so far in treating torsional large amplitude motions in methanol-like molecules. The method is based on the work of Kirtman (1962), Lees & Baker (1968), and Herbst et al. (1984) and takes its name from the choice of its axis system (Hougen et al. 1994). In the rho-axis-method, the z axis is coincident with the ρ vector, which expresses the coupling between the angular momentum of the internal rotation p α and that of the global rotation J. We employed the RAM36 code (Ilyushin et al. 2010, 2013) that was successfully used in the past for a number of near prolate tops with rather high ρ and J values (see e.g., Smirnov et al. (2014), Motiyenko et al. (2020), Zakharenko et al. (2019)) and, in particular, for the CD 3 OH isotopolog of methanol (Ilyushin et al. 2022). The RAM36 code uses the two-step diagonalization procedure of Herbst et al. (1984); in the current study, we keep 31 torsional basis functions at the first diagonalization step and 11 torsional basis functions at the second diagonalization step.
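To give a concrete, if simplified, picture of what the first diagonalization step involves, the sketch below constructs a one-dimensional torsional Hamiltonian, F p α 2 + (V 3 /2)(1 − cos 3α), in a basis of 31 free-rotor functions and diagonalizes it with NumPy. It is a toy model only: it omits the ρ coupling to overall rotation and all higher-order terms of the actual RAM Hamiltonian, and the parameter values are merely the approximate ones quoted above.

```python
import numpy as np

F = 14.8      # internal-rotation constant in cm^-1 (rough value from the consistency check above)
V3 = 362.0    # torsional barrier in cm^-1 (value quoted in the text)
sigma = 0     # torsional symmetry index: 0 for A species, +/-1 for E species
n_basis = 31  # free-rotor functions exp(i(3k + sigma)alpha), as in the first diagonalization step

k = np.arange(-(n_basis // 2), n_basis // 2 + 1)

# Diagonal part: kinetic term F(3k + sigma)^2 plus the constant V3/2 from (V3/2)(1 - cos 3alpha).
H = np.diag(F * (3 * k + sigma) ** 2 + V3 / 2.0)

# The -(V3/2) cos(3alpha) term couples basis functions differing by one unit in k,
# with matrix element -V3/4.
H += np.diag(np.full(n_basis - 1, -V3 / 4.0), 1)
H += np.diag(np.full(n_basis - 1, -V3 / 4.0), -1)

torsional_levels = np.linalg.eigvalsh(H)
print("lowest torsional energies relative to the ground level (cm^-1):",
      np.round(torsional_levels[:4] - torsional_levels[0], 1))
```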
The labeling scheme after the second diagonalization step is based on an eigenfunction composition but, in contrast to our CD 3 OH study (Ilyushin et al. 2022), is not limited to searching for a dominant eigenvector component only. Since methanol is a nearly symmetric prolate top (κ ≈ −0.98), in which the angle between the RAM a-axis and the principal-axis-method (PAM) a-axis is only 0.07°, it is assumed that the RAM a-axis in methanol is suitable for K quantization and that eigenvectors can be unambiguously assigned using dominant components. And indeed, searching for a dominant eigenvector component worked well in the case of the CD 3 OH isotopolog (Ilyushin et al. 2022), for which the angle between the RAM a-axis and the PAM a-axis is only 0.14° and the asymmetry parameter (κ ≈ −0.977) is nearly the same as in CH 3 OH (κ ≈ −0.982). In CD 3 OD, however, the angle between the RAM a-axis and the PAM a-axis is 0.68° and κ ≈ −0.959, and it appeared that starting from J ≈ 25, some eigenvectors do not have a dominant eigenvector component that would allow for unambiguous labeling. Therefore, we have employed a combined labeling scheme. First, we search for a dominant eigenvector component (≥ 0.8) and, if it exists, the level is labeled according to this dominant component. If such a component is absent, then we search for similarities in basis-set composition in torsion-rotation eigenvectors belonging to the previous J value and assign the level according to the highest degree of similarity found. The general idea assumes that for a given pair of K a , v t values, the torsion-rotation eigenfunctions vary slowly when J changes by one, and this slow change should appear as a high degree of similarity in the eigenvector compositions of the states corresponding to the same K a , v t , and adjacent J values. This approach allows one to transfer a given K a label from lower J values, where it can be determined easily (either from eigenvector composition using the dominant component or from energy-ordering considerations), to higher J values, which are characterized by extensive basis-set mixing. The details of this approach to K labeling for torsion-rotation energy levels in low-barrier molecules can be found in Ilyushin (2004).
The energy levels are labeled in our fits and predictions by the free rotor quantum number, m, the overall rotational angular momentum quantum number, J, and a signed value of K a , which is the axial a-component of the overall rotational angular momentum, J. In the case of the A symmetry species, the +/− sign corresponds to the so-called "parity" designation, which is related to the A1/A2 symmetry species in the group G 6 (Hougen et al. 1994). The signed value of K a for the E symmetry species reflects the fact that the Coriolis-type interaction between the internal rotation and the global rotation causes the |K a | > 0 levels to split into a K a > 0 level and a K a < 0 level. We also provide K c values for convenience, but they are simply recalculated from the J and K a values, K c = J −|K a | for K a ≥ 0 and K c = J −|K a |+1 for K a < 0. The m values 0, −3, 3 / 1, −2, 4 correspond to A/E transitions of the t = 0, 1, and 2 torsional states, respectively.
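These labeling conventions are purely mechanical and can be expressed compactly; the helper below is our own illustration, not part of the RAM36 code:

```python
def k_c(J, Ka_signed):
    """Recover K_c from J and the signed K_a, following the convention above."""
    return J - abs(Ka_signed) + (1 if Ka_signed < 0 else 0)

# Free-rotor quantum number m for each (symmetry species, torsional state), as listed in the text.
M_VALUE = {("A", 0): 0, ("A", 1): -3, ("A", 2): 3,
           ("E", 0): 1, ("E", 1): -2, ("E", 2): 4}

# Example: k_c(10, 3) returns 7, while k_c(10, -3) returns 8.
```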
Spectroscopic results
We started our analysis from the results of Müller et al. (2006), where the dataset, consisting of 488 t = 0 and 182 t = 1 microwave transitions, ranging up to J max = 25 and K max = 14, was fit with 56 parameters of the RAM Hamiltonian, and a weighted standard deviation of 3.13 was achieved. Since the BELGI code (Kleiner 2010) was used in the previous study (Müller et al. 2006), we refit the available dataset with the RAM36 program (Ilyushin et al. 2010, 2013) as the first step. The resulting fit was the starting point of our present analysis. New data were assigned starting with the Kharkiv measurements. First, the search for the t = 2 rotational transitions was performed with success, and further assignments were made in parallel for the three torsional states of CD 3 OD t = 0, 1, and 2. Submillimeter wave and THz measurements from Cologne were assigned subsequently, based on our new results. Whenever it was possible, we replaced the old measurements from Müller et al. (2006) and references therein with the more accurate new ones. At the same time, we decided to keep in the fits two measured values for the same transition from the Kharkiv and Cologne spectral recordings in that part of the frequency range where the measurements from the two laboratories overlap (370−420 GHz). A rather good agreement within the experimental uncertainties was observed for this limited set of duplicate new measurements. Finally, at an advanced stage of our analysis, the FIR data from Mukhopadhyay (2021) were added to the fit.
The assignment process was performed in a usual bootstrap manner, with numerous cycles of refinement of the parameter set while the new data were gradually added. In parallel, a search of the optimal set of RAM torsion-rotation parameters was carried out, and it finally became evident that the t = 2 torsional state poses some problems with fitting. The strong influence of intervibrational interactions arising from low lying small amplitude vibrations in CD 3 OD, which then propagate down through numerous intertorsional interactions, is a possible explanation for these problems. We encountered similar problems with CD 3 OH (Ilyushin et al. 2022). There we decided to limit our fitting attempts mainly to the ground and first excited torsional states. Taking into account that one of our goals was to provide reliable predictions for astrophysical searches of interstellar CD 3 OD, we adopted an analogous decision in the current case of the CD 3 OD investigation. Thus, at the final stage of model refinement, we limited our fitting attempts mainly to the ground and first excited torsional states of CD 3 OD. Only the lowest three K series for the A and E species in t = 2 were retained in the fits in order to get a better constraint of the torsional parameters in the Hamiltonian model. These t = 2 K levels should be affected least by the intervibrational interactions arising from low lying small amplitude vibrations. In the case of CD 3 OD, this corresponds to K = −1, 2, 3 for the E species in t = 2 and to K = −4, 0, 4 for the A species.
It should be noted that at the final stage of preparation of this manuscript, the new study of Mukhopadhyay (2022) appeared. This study emphasized the transitions involving the second excited torsional state levels. Taking into account that our efforts are essentially concentrated on the ground and first excited torsional states of CD 3 OD, we decided not to include any data from Mukhopadhyay (2022) in the present analysis, even those involving the lowest three K series of levels for the A and E species in t = 2, which were retained in the fits otherwise. Submillimeter wave transitions that appeared in Mukhopadhyay (2022) were also not included in our present fits since they are within the range of our current measurements, with our measurements being more precise. We intend to include the data from Mukhopadhyay (2022) at the next stage of our investigation of CD 3 OD, when we will try to fit transitions higher in t and to model the intervibrational interactions arising from low-lying small amplitude vibrations in CD 3 OD. With this aim in mind, new measurements of the CD 3 OD IR spectrum between 500 and 1200 cm −1 were carried out at the Technische Universität Braunschweig, which we plan to consider in our future analyses of the CD 3 OD spectrum.
Our final CD 3 OD dataset for the purposes of this paper involves 4337 FIR and 10001 microwave line frequencies that correspond, due to blending, to 16259 transitions with J max = 51. Due to the duplication in measurements mentioned above, the number of unique transitions incorporated in the fit is somewhat lower at 15135. A Hamiltonian model consisting of 117 parameters provided a fit with a weighted root mean square (RMS) deviation of 0.71 which was selected as our "best fit" for this paper. The 117 molecular parameters from our final fit are given in Table A.1 (Appendix A). The numbers of the terms in the model distributed between the orders n op = 2, 4, 6, 8, 10, 12 are 7, 22, 45, 34, 8, and 1 respectively, which is consistent with the limits of determinable parameters of 7, 22, 50, 95, 161, and 252 for these orders, as calculated from the differences between the total number of symmetry-allowed Hamiltonian terms of order n op and the number of symmetry-allowed contact transformation terms of order n op − 1, when applying the ordering scheme of Nakagawa et al. (1987). The final set of the parameters converged perfectly in all three senses: (i) the relative change in the weighted RMS deviation of the fit at the last iteration was about ∼10 −7 ; (ii) the corrections to the parameter values generated at the last iteration are less than ∼10 −3 of the calculated parameter confidence intervals; and (iii) the changes generated at the last iteration in the calculated frequencies are less than 1 kHz even for the FIR data.
A summary of the quality of this fit is given in Table 1. The overall weighted RMS deviation of 0.71, together with the fact that all data groups are fit within experimental uncertainties (see the left part of Table 1, where the data are grouped by measurement uncertainty), seems satisfactory to us. If we consider the weighted RMS deviations for the data grouped by torsional state, we see a rather good agreement between our model and the experiment for all three torsional states as well as for the torsional fundamental band. Further illustration of our current understanding of the microwave spectrum of CD 3 OD may be found in Figs. 1 and 2. It is seen that our current model reproduces the observed microwave spectrum quite well, both with respect to line positions and with respect to line intensities.
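For reference, the weighted RMS deviation quoted here is understood in the usual sense (our notation, not reproduced from the paper): with observed and calculated frequencies $f_i^{\mathrm{obs}}$ and $f_i^{\mathrm{calc}}$ and assigned uncertainties $\sigma_i$,

$$ \mathrm{WRMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{f_i^{\mathrm{obs}} - f_i^{\mathrm{calc}}}{\sigma_i}\right)^{2}} , $$

so a value of 0.71 indicates that the residuals are, on average, somewhat smaller than the assigned measurement uncertainties.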
Using the parameters of our final fit we calculated a list of CD 3 OD transitions in the ground and first excited torsional states for astronomical observations. The dipole moment function of Mekhtiev et al. (1999) was employed in our calculations where the values for the permanent dipole moment components of CH 3 OH were replaced by appropriate ones for CD 3 OD µ a = 0.867 D and µ b = 1.430 D taken from . The permanent dipole moment components were rotated from the principal axis system to the rho axis system of our Hamiltonian model. As in the case of CD 3 OH (Ilyushin et al. 2022), the list of CD 3 OD transitions includes information on transition quantum numbers, transition frequencies, calculated uncertainties, lower state energies, and transition strengths. To avoid unreliable extrapolations far beyond the quantum number coverage of the available experimental dataset, we limited our predictions by t ≤ 1, J ≤ 55 and |K a | ≤ 25. As already mentioned earlier, we label torsion-rotation levels by the free rotor quantum number, m, the overall rotational angular momentum quantum number, J, a signed value of K a , and K c . The calculations were done up to 1.33 THz. Additionally, we limit our calculations to transitions for which calculated uncertainties are less than 0.1 MHz. The lower state energies are given referenced to the K a = 0 A-type t = 0 level. We provide additionally the torsion-rotation part of the partition function Q rt (T) of CD 3 OD calculated from first principles, that is, via direct summation over the torsion-rotational states. The maximum J value is 90 and n t = 11 torsional states were taken into account. The calculations, as well as the experimental line list from the present work, can be found in the online Supplementary material with this article and will also be available in the Cologne Database for Molecular Spectroscopy, (CDMS, Endres et al. 2016).
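The partition function mentioned above is a Boltzmann-weighted sum over the computed torsion-rotation levels. A minimal sketch of such a direct summation is given below; the level list and degeneracies are placeholders that would in practice come from the Hamiltonian diagonalization (J ≤ 90 and 11 torsional states in the present work).

```python
import numpy as np

def partition_function(energies_cm, degeneracies, T):
    """Q(T) by direct summation; energies in cm^-1 relative to the lowest level,
    degeneracies dimensionless, temperature T in K."""
    k_cm = 0.6950348  # Boltzmann constant in cm^-1 per K
    e = np.asarray(energies_cm, dtype=float)
    g = np.asarray(degeneracies, dtype=float)
    return float(np.sum(g * np.exp(-e / (k_cm * T))))

# In practice the level list would come from the torsion-rotation Hamiltonian,
# with degeneracies of (2J + 1) times the appropriate nuclear-spin statistical weight.
```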
Astronomical search for CD 3 OD
The new spectroscopic calculations were used to search for CD 3 OD in data from the Protostellar Interferometric Line Survey (PILS; Jørgensen et al. 2016). PILS represents an unbiased spectral survey of the Class 0 protostellar system IRAS 16293−2422 using the Atacama Large Millimeter/submillimeter Array covering the frequency range from 329 to 363 GHz. The data cover the region of IRAS 16293-2422, including its two primary components "A" and "B" that show abundant lines of complex organic molecules, at an angular resolution of ∼0.5 arcsec and a spectral resolution of ∼0.2 km s −1 . Toward a position slightly offset from the "B" component of the system, the lines are intrinsically narrow, making it an ideal hunting ground for new species, and several complex organic molecules and their isotopologs have been identified there, including other deuterated isotopologs of CH 3 OH such as CH 2 DOH and CH 3 OD (Jørgensen et al. 2018), CHD 2 OH (Drozdovskaya et al. 2022), and CD 3 OH (Ilyushin et al. 2022). The search for CD 3 OD was conducted following the approach applied in other papers from PILS: synthetic spectra are calculated assuming that the excitation of the molecule is characterized by local thermodynamical equilibrium, which is reasonable at the densities on the scales probed by PILS (Jørgensen et al. 2016). The source velocity offset relative to the local standard of rest is assumed to be 2.6 km s −1 and the line full width at half maximum (FWHM) is taken as 1 km s −1 . With these assumptions, the kinetic temperature and column density of the species are left as the two free parameters. For the search we assume a temperature of 225 K, similar to that of CD 3 OH (Ilyushin et al. 2022). Figure 3 shows the 16 lines predicted to be strongest for this temperature; the column density, 2×10 15 cm −2 , is taken to be the maximum possible without overproducing lines compared to the RMS noise level of the data. As can be seen, three to four lines match the observed spectral features at or slightly above the 3σ level in the data. One line at 348.988 GHz is predicted at the 3σ level but does not show any observed emission. However, that line overlaps with the absorption part of a nearby, stronger transition, so this non-detection may not be significant. The quoted column density would correspond to a CD 3 OD abundance of 0.02% relative to the main isotopolog CH 3 OH (the column density of the latter being constrained by observations of optically thin transitions of CH 3 18 OH). If all D/H substitutions were considered equally probable, this would in turn imply a D/H ratio of about 12%, similar to what is measured for other multiply deuterated species and enhanced relative to the ratios measured from the singly deuterated variants (CH 2 DOH and CH 3 OD). While this makes the assignments of the three to four transitions plausible, it is not possible to claim a solid detection based on so few lines. However, the analysis demonstrates that the detection of CD 3 OD is within the reach of ALMA, for instance, by targeting IRAS 16293-2422 through deep observations at lower frequencies where line confusion may be less problematic.
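As an illustration of the line-profile assumptions entering such synthetic spectra (a source velocity of 2.6 km s −1 and a FWHM of 1 km s −1 ), the sketch below evaluates a single Gaussian emission line. The peak amplitude is left as a placeholder because the full LTE calculation (column density, temperature, partition function, opacity) is beyond the scope of this illustration.

```python
import numpy as np

C_KMS = 299792.458   # speed of light in km/s
V_LSR = 2.6          # assumed source velocity, km/s (value used in the text)
FWHM = 1.0           # assumed line width, km/s (value used in the text)

def gaussian_line(freq_ghz, rest_freq_ghz, peak):
    """Gaussian emission line in frequency space for the assumed velocity and width.
    `peak` is a placeholder amplitude; a full LTE model would derive it from the
    column density, excitation temperature, partition function and line strength."""
    center = rest_freq_ghz * (1.0 - V_LSR / C_KMS)  # Doppler-shifted line centre
    sigma = rest_freq_ghz * (FWHM / C_KMS) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return peak * np.exp(-0.5 * ((freq_ghz - center) / sigma) ** 2)
```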
Conclusion
In this work, we performed a new study of the torsion-rotation spectrum of the CD 3 OD isotopolog using a torsion-rotation RAM Hamiltonian. New microwave measurements were carried out in the broad frequency range from 34.5 GHz to 1.1 THz, and transitions with J up to 51 and K a up to 23 involving the t = 0, 1, 2 torsional states were assigned and analyzed. After revealing perturbations in the second excited torsional state of CD 3 OD, presumably caused by the intervibrational interactions arising from low-lying small-amplitude vibrations in this molecule, we concentrated our efforts on refining the theoretical model for the ground and first excited torsional states only. A fit within the experimental uncertainties (weighted RMS deviation 0.71) was achieved for the dataset consisting of 4337 FIR and 10001 microwave line frequencies.
Based on our results, calculations of the ground and first excited torsional states were carried out and used in a search for CD 3 OD spectral features in data from the ALMA PILS survey of the deeply embedded protostar IRAS 16293−2422. While three to four CD 3 OD transitions match observed spectral features at or slightly above the 3σ level in the data, it is not possible to claim a solid detection based on so few lines. Nevertheless, our analysis demonstrates that the detection of CD 3 OD in IRAS 16293-2422 using ALMA is quite probable through deep observations at lower frequencies where line confusion may be less problematic. The upper column density limit of 2×10 15 cm −2 for CD 3 OD was derived based on the assumption of an excitation temperature of 225 K (taken similar to that of CD 3 OH (Ilyushin et al. 2022)). Comparison with the CH 3 OH main isotopolog (for which the column density is deduced from optically thin lines of CH 3 18 OH) yields a CD 3 OD/CH 3 OH ratio as high as ∼0.02%, thus implying that fully deuterated methanol is in line with the enhanced D/H ratios observed for multiply deuterated complex organic molecules.
Notes to Table 1: (a) Estimated measurement uncertainties for each data group. (b) Number of lines (left part) or transitions (right part) of each category in the least-squares fit. Note that due to blending the 14338 measured line frequencies correspond to 16259 transitions in the fit, which in turn, due to the presence of duplicate measurements, represent 15135 unique transitions in the fit. (c) Root-mean-square (RMS) deviation of the corresponding data group. (d) Upper and lower state torsional quantum number t. (e) Weighted root-mean-square (WRMS) deviation of the corresponding data group. | 2023-07-16T15:07:21.505Z | 2023-07-14T00:00:00.000 | {
"year": 2023,
"sha1": "c5ab85166a9b60e4fb91f4c5b71ba4e4d837bc6f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8f10fa053fa69f22c67297341dc060f32c150732",
"s2fieldsofstudy": [
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Physics"
]
} |
244679034 | pes2o/s2orc | v3-fos-license | Age‐dependent timing and routes demonstrate developmental plasticity in a long‐distance migratory bird
Abstract Longitudinal tracking studies have revealed consistent differences in the migration patterns of individuals from the same populations. The sources or processes causing this individual variation are largely unresolved. As a result, it is mostly unknown how much, how fast and when animals can adjust their migrations to changing environments. We studied the ontogeny of migration in a long‐distance migratory shorebird, the black‐tailed godwit Limosa limosa limosa, a species known to exhibit marked individuality in the migratory routines of adults. By observing how and when these individual differences arise, we aimed to elucidate whether individual differences in migratory behaviour are inherited or emerge as a result of developmental plasticity. We simultaneously tracked juvenile and adult godwits from the same breeding area on their south‐ and northward migrations. To determine how and when individual differences begin to arise, we related juvenile migration routes, timing and mortality rates to hatch date and hatch year. Then, we compared adult and juvenile migration patterns to identify potential age‐dependent differences. In juveniles, the timing of their first southward departure was related to hatch date. However, their subsequent migration routes, orientation, destination, migratory duration and likelihood of mortality were unrelated to the year or timing of migration, or their sex. Juveniles left the Netherlands after all tracked adults. They then flew non‐stop to West Africa more often and incurred higher mortality rates than adults. Some juveniles also took routes and visited stopover sites far outside the well‐documented adult migratory corridor. Such juveniles, however, were not more likely to die. We found that juveniles exhibited different migratory patterns than adults, but no evidence that these behaviours are under natural selection. We thus eliminate the possibility that the individual differences observed among adult godwits are present at hatch or during their first migration. This adds to the mounting evidence that animals possess the developmental plasticity to change their migration later in life in response to environmental conditions as those conditions are experienced.
| INTRODUCTION
It is becoming increasingly clear that migratory populations consist of individuals that each have their own routes, timing, use of stopover sites and levels of consistency between years (Delmore et al., 2020;Flack et al., 2016;Lok et al., 2011;Vardanis et al., 2011). The source of this individual variation in seasonal migration patterns, or the developmental phase during which such individual routines arise, remains unclear (Battley et al., 2020;Pedersen et al., 2018;Verhoeven et al., 2019). This hinders an understanding of the evolution of migration (Piersma, 2011) and is fundamental for assessing the extent to which migratory animals can cope with current rates of environmental change (Sutherland, 1998).
Three separate and interacting sources of variation in individual migratory behaviour have been identified: (epi-)genetic heritability, developmental plasticity and phenotypic flexibility (Piersma, 2011). Some individual differences in migratory behaviour have been attributed to genotypic differences, which can determine both how and when differences arise during an individual's life (Berthold et al., 1992;Pulido et al., 2001;Thorup et al., 2020). In such cases, aspects of migratory behaviour are both heritable and selected for by the environment. In a classic example, it has been argued that natural selection on genetic variation in blackcaps Sylvia atricapilla has enabled the use of a new migratory route connecting breeding populations in Germany with novel nonbreeding sites in Great Britain (Berthold et al., 1992). As a complex trait (Piersma et al., 2005), however, migration is unlikely to be encoded by a single gene. Instead, multiple or even many genes simultaneously contribute to an individual's migratory phenotype (Delmore et al., 2016;van Doren et al., 2017). Although epigenetic inheritance could lead to rapid phenotypic changes at the population level (Sheriff et al., 2010), adjustments through natural selection to polygenic traits are thought to be relatively slow, as they require genetic inheritance at presumably low-to-moderate heritabilities (Berthold & Pulido, 1994;Dochtermann et al., 2019).
Individual differences in migratory behaviour could also arise if individual seasonal routines are a consequence of developmental processes in response to environmental and social contexts-that is, developmental plasticity (sensu Piersma & Drent, 2003). For example, in Icelandic black-tailed godwits Limosa limosa islandica and pied flycatchers Ficedula hypoleuca, individual differences can arise because of different environmental conditions encountered during the first few months of life (Both, 2010;Gill et al., 2014). Year-to-year environmental variations at hatch, including atmospheric and magnetic properties, can also cause individual differences in migratory routines (Scott et al., 2014;Wynn et al., 2020). Populations that adjust migratory behaviour by means of developmental plasticity are expected to do so faster than those that must rely on genetic change (Eichhorn et al., 2009). However, most species exhibit only a limited window of time during which plastic adjustments can be made, potentially limiting the amount of among-individual variation that can be generated via this process (Lok et al., 2011;Piersma, 2011;Senner, Conklin, et al., 2015).
A second category of environmentally informed plasticity that can generate individual differences in migratory behaviour is phenotypic flexibility (sensu Piersma & Drent, 2003). In such cases, variations arise during adulthood; they are by definition impermanent, and are frequently caused by short-term environmental perturbations (Piersma & van Gils, 2011). The latter can include severe weather events (Boelman et al., 2017), interannual climatic fluctuations (Studds & Marra, 2005) and anthropogenic-driven variation in experienced habitat quality (Madsen, 2001). As a result, flexible differences in migratory behaviour are unlikely to explain consistent differences among individuals or across populations (Senner, Conklin, et al., 2015) but can interact with both genetic and developmentally plastic differences among individuals to help give rise to the dramatic variation in migratory behaviour that exists within some populations (Beaman et al., 2016).
Given the variety of processes by which phenotypic variation in migratory behaviours can arise, we need observations of individual animals followed from birth to adulthood, coupled with experiments, to determine whether, how and when in life individuals adjust their behaviours to the environment, in order to establish the precise nature of the environmental factors involved (Piersma, 2011). Recent studies incorporating individually unique colour markings and miniature tracking technologies have begun to approach these goals.
For instance, Sergio et al. (2014) tracked black kites Milvus migrans throughout their lives and found that they exhibit extended developmental periods lasting up to 7 years during which they appear to improve their migratory performance and, thereafter, still have flexibility in their ability to respond to environmental conditions as they are encountered. Nonetheless, while the weather conditions occurring during an individual's first southward migration have been shown to play a role , what remains unclear is exactly when during ontogeny differences among individuals begin to arise, and to what degree environmental conditions and genetics might be responsible for these differences. mounting evidence that animals possess the developmental plasticity to change their migration later in life in response to environmental conditions as those conditions are experienced.
KEYWORDS
evolution, godwit, migration, ontogeny, plasticity
To address this gap, we simultaneously tracked juvenile and adult continental black-tailed godwits Limosa limosa limosa (hereafter 'godwits') from the Netherlands on their south- and northward migrations. Godwits breeding in the Netherlands represent a potentially informative study species, because their migratory timing and destination vary considerably and consistently among adult individuals. For example, some adults spend the nonbreeding period north of the Sahara, whereas others cross the Sahara to West Africa (Hooijmeijer et al., 2013; Kentie et al., 2017). Some adults leave West Africa to fly northward again as early as September while others leave more than 5 months later (Verhoeven et al., 2019). Finally, young godwits appear to have been the force behind recent shifts in the population's migration route over the course of only a few years. In combination, these observations indicate that the variation in godwit migratory behaviours may have arisen from either inherited routines or developmental plasticity.
Because adult godwits are consistent through time in their migratory behaviour, however, we can already rule out phenotypic flexibility as a mechanism for the emergence of individual differences. Therefore, in this study, we focused on disentangling two potential mechanisms for the emergence of individual differences in migratory behaviour: heritable (epi-)genetic factors and developmental plasticity.
We first explored whether the migratory behaviour of juveniles is related to environmental conditions during the first months of life, which would provide evidence for developmental plasticity. For this, we examined whether differences in the migration routes, timing and mortality rates of juveniles were related to their hatch date and hatch year-two among many potential variables influencing an individual's early-life environment. If individual differences among godwits arise as a result of hatch date, we predict that later hatch dates result in later departure on southward migration and a lower propensity to cross the Sahara (Both, 2010; Gill et al., 2014, 2019). Furthermore, if individual differences among godwits arise as a result of hatch year, we predict differences in migratory behaviour between annual cohorts (Scott et al., 2014; Wynn et al., 2020).
Second, we compared the routes, timing and mortality rates of juveniles and adults from the same breeding areas during the same years, enabling us to assess whether individual differences among godwits may also have an inherited origin. Godwits breeding in the Netherlands have limited genetic variation (Trimbos et al., 2011) and limited dispersal distances (<20 km; Kentie et al., 2014), and are genetically distinct from the species' other European breeding populations (Trimbos et al., 2014). Thus, in combination with our extensive tracking work within the Dutch population (this study; Hooijmeijer et al., 2013; Senner et al., 2019; Senner, Verhoeven, et al., 2015; Verhoeven et al., 2019), we predicted that if migratory routines are inherited, juveniles would exhibit largely similar migratory routines to those of adults and/or that those individuals exhibiting dissimilar patterns would experience higher mortality rates and be selected out of the population before adulthood. Alternatively, if godwits exhibit a prolonged developmental window, we predicted that adults and juveniles would differ in their migratory routines, but that there would be no evidence of selection against these novel routines, and that the differences would be dissipated during ontogeny as juveniles arrive at individually consistent strategies that resemble those of adults. In presenting the results, we start with a comparison of (a) the timing and (b) the geographic patterns of the southward migrations of juveniles and adults. This is followed by an analysis of whether the mortality patterns of the two age groups show evidence for natural selection on specific migratory behaviours. Taken together, our results have the potential to shed light on a persistent mystery not only in the study of migration, but in our understanding of the evolution of individual differences more generally.
| Satellite tracking data
In both 2016 and 2017, we deployed 40 solar-powered 5-g PTT-100s from Microwave Technology Inc. on juveniles, for a total deployment of 80 transmitters. All 80 transmitters were programmed to turn on for 8 hr and off for 24 hr. As a result of this duty cycle, we could only observe the timing of migration on a daily basis. We captured these juveniles by hand in the days just before they gained the ability to fly. Most juveniles were caught within our 12,000-ha study area in southwest Fryslân, the Netherlands (see Senner, Verhoeven, et al., 2015 for more details). However, in 2016, the number of fledged juveniles in our study area was considerably lower than average, so we also caught four juveniles on the island of Ameland (53.45°N, 5.83°E; see Loonstra, Verhoeven, Senner, et al., 2019).
To attach the transmitters, we used a leg-loop harness of 2-mm Dyneema rope. We also took ~30 μl of blood from the brachial vein for molecular sexing.
We obtained migratory tracks from 28 of these juveniles (see Section 3): 24 from our study area and four from Ameland. Twenty-seven out of the 28 juveniles were molecularly sexed (12 males, 15 females); one analysis failed, so we sexed this bird based on its growth and morphological characteristics during five recaptures before fledging. Fifteen of the 28 juveniles were marked with a code flag in the nest and their exact hatch dates were therefore known. The other 13 tracked juveniles were not captured in the nest, so we estimated their hatch dates using a sex-specific growth curve. This method yields an estimated hatch date that is accurate to within ±3 days, which is acceptable for our purposes given the large variation in hatch dates included in the study (range 2 May-13 June). The weight of the transmitter and the harness (~6 g) represented 3.2% ± 0.4 (range: 2.5%-4.4%) of the total body mass at release, but this likely diminished to ~2% as the individuals continued to grow to adult size.
To track the spatial distribution and mortality of adult godwits, we deployed 32 solar-powered 9.5-g PTT-100s from Microwave Technology Inc. in 2015 and 2016 (attachment ~10.5 g), and another four transmitters of 5 g in 2017. Thirty-four of these 36 transmitters were programmed to turn on for 8 hr and turn off for 24 hr. One of the remaining two transmitters was programmed to turn on for 8 hr and off for 25 hr, and the other was programmed to turn on for 10 hr and off for 48 hr. We captured all 36 adults on nests in the 220-ha Haanmeer polder, which lies in the centre of our larger study area.
We captured adults using walk-in traps, automated drop cages or mist nets placed over the nest. We attached the leg-loop harnesses as we did for the juveniles. Based on a combination of molecular sexing (using an ~30 μl blood sample taken from the brachial vein at capture, n = 26 individuals) and morphological characteristics (following Schroeder et al., 2008, n = 10 individuals), we determined that our sample of transmitter-carrying adults consisted of 34 females and two males. In 2015 and 2016, the loading factor of the transmitters was 3.4% ± 0.2 (range: 3.0%-4.0%) of a female's body mass at capture; in 2017, the loading factor was 1.9% for each of the two females and 2.2% for each of the two males.
We retrieved satellite-tracking locations via the CLS tracking system (www.argos-system.org) and passed them through the 'Best Hybrid-filter' algorithm (Douglas et al., 2012); this removed consecutive locations that exceeded a speed of 120 km/hr while retaining location classes with qualities of 3, 2, 1, 0, A and B. From these data, we knew where individual godwits crossed nine arbitrary spatial boundaries that were spaced 4° of latitude apart across the godwit migration corridor. These boundaries ranged from 52°N (the breeding grounds) to 20°N (the nonbreeding grounds in West Africa).
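As a rough illustration of this step (a hypothetical sketch, not the authors' actual pipeline), boundary-crossing times can be derived from a speed-filtered, time-ordered track by linear interpolation between consecutive fixes; the data frame columns ('timestamp', 'lat') are assumed names.

```r
# Hypothetical sketch: estimate when a filtered, time-ordered Argos track crosses
# fixed latitude boundaries (52N to 20N in 4-degree steps) by linear interpolation.
# 'track' is assumed to be a data frame with POSIXct 'timestamp' and numeric 'lat'.
boundary_crossings <- function(track, boundaries = seq(52, 20, by = -4)) {
  out <- NULL
  for (b in boundaries) {
    for (i in seq_len(nrow(track) - 1)) {
      lat1 <- track$lat[i]
      lat2 <- track$lat[i + 1]
      if ((lat1 - b) * (lat2 - b) < 0) {      # the segment straddles boundary b
        frac <- (b - lat1) / (lat2 - lat1)    # interpolation fraction along the segment
        dt <- as.numeric(difftime(track$timestamp[i + 1], track$timestamp[i],
                                  units = "secs"))
        out <- rbind(out, data.frame(boundary = b,
                                     time = track$timestamp[i] + frac * dt))
      }
    }
  }
  out
}
```

Applied to each bird's southward track, the first crossing per boundary would give a daily-resolution estimate comparable to the transmitter duty cycle.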
| Geolocator data
To track the timing of adult migration, we used geolocators instead of satellite transmitters. We deployed 219 geolocators on 173 adult godwits from 2015 to 2018 in our study area. Geolocators were attached to a coloured flag that was placed on the adult's tibia. The total weight of the attachment was ~3.7 g, representing 1%-1.5% of an individual's body mass at capture. In subsequent years (2016-2019), we recaptured geolocator-carrying godwits to retrieve their geolocators and download the stored light-level data. We downloaded light-level data from 78 geolocators retrieved from 64 adult godwits (24 males, 40 females). Twenty geolocators contained data for more than one season, although the second season was often incompletely logged because the battery stopped working. Thus, we obtained light-level data for a total of 98 complete and incomplete migrations.
We used the package FLightR (Rakhimberdiev et al., 2017) to reconstruct the annual schedules of godwits from these light-level data. Detailed examples of this analytical routine using our own godwit data can be found in Rakhimberdiev et al. (2016) and Rakhimberdiev et al. (2017). Briefly, using the FLightR function 'find.times.distribution', we estimated when individual godwits crossed the same nine spatial boundaries mentioned above. In these analyses, we excluded the crossing of the spatial boundary at 36°N (the Strait of Gibraltar) because we could not distinguish between birds stopping in northern Morocco and those stopping in southern Spain (see Verhoeven et al., 2019 for more details).
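As a rough sketch of this routine (not the authors' script), the calls below follow the general FLightR workflow described in Rakhimberdiev et al. (2017); the file name, calibration period, coordinates and grid extent are placeholders, and the exact argument names should be verified against the package documentation.

```r
library(FLightR)

# Placeholder inputs; calibration is assumed to take place at the breeding site.
proc_data <- get.tags.data("godwit_geolocator.csv")
calib <- make.calibration(proc_data,
                          Calibration.periods = data.frame(
                            calibration.start = as.POSIXct("2016-05-01", tz = "UTC"),
                            calibration.stop  = as.POSIXct("2016-06-15", tz = "UTC"),
                            lon = 5.4, lat = 52.9))
grid   <- make.grid(left = -20, bottom = 10, right = 15, top = 55)
prerun <- make.prerun.object(proc_data, grid, start = c(5.4, 52.9),
                             Calibration = calib)
result <- run.particle.filter(prerun, nParticles = 1e6)

# Distribution of times at which the bird crossed, e.g., 40N:
# grid points north of the boundary are passed as the index (assumed layout: lon, lat).
north_of_40 <- which(grid[, 2] > 40)
crossings   <- find.times.distribution(result, north_of_40)
```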
The fieldwork for this study was conducted under license numbers 6350A, 6350G and AVD105002017823 granted by the national Dutch committee for animal experiments following the Dutch Animal Welfare Act Articles 9 and 11.

| Timing, routes and orientation of juveniles and adults

To compare the migratory timing of adults and juveniles, we first tested whether the two groups had equal variances using Levene's test from the R-package car. If the variances were found to be equal, we used an ANOVA to test whether the mean was significantly different between adults and juveniles. If the variances were unequal, we compared the means with a Mann-Whitney U test. We did not account for an individual's sex in these analyses because we only tracked two adult males with satellite transmitters. However, we know from previous work that adult males and females do not differ in their migratory destinations (Hooijmeijer et al., 2013; Kentie et al., 2017; Senner et al., 2019; Verhoeven et al., 2019), which is further supported by more recent satellite-tracking efforts (2019-2021) that include more males (T. Piersma, R. Howison, J. Hooijmeijer, A.H.J. Loonstra and M.A. Verhoeven unpubl. data). We have also previously shown that the only difference in the migratory timing of adult males and females is that males leave the Netherlands on average 5 days earlier. The only likely consequence of a dataset with more males would therefore be an even bigger difference between adults and juveniles in their departure timing from the Netherlands than already observed (see Section 3). We therefore believe that our claims are robust and representative of godwit behaviour in general.
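A minimal sketch of this decision rule, with hypothetical column names ('depart_day', 'age') rather than the authors' actual variables:

```r
library(car)  # for leveneTest()

# 'timing' is a hypothetical data frame with one row per bird: a numeric response
# (e.g. departure date from the Netherlands) and an 'age' factor with levels
# "adult" and "juvenile".
compare_groups <- function(timing) {
  lev <- leveneTest(depart_day ~ age, data = timing)   # test for equal variances
  if (lev$`Pr(>F)`[1] > 0.05) {
    summary(aov(depart_day ~ age, data = timing))      # equal variances: ANOVA on the means
  } else {
    wilcox.test(depart_day ~ age, data = timing)       # unequal variances: Mann-Whitney U test
  }
}
```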
We used a generalized linear model with a binomial error structure and a logistic link function to test whether the likelihood that juveniles (a) crossed the Sahara on their first southward migration and (b) did so by flying non-stop from the Netherlands was related to their departure date, year or sex. We note that the dataset for the second analysis is a subset of the first dataset that includes only those individuals that crossed the Sahara. We also used a generalized linear model with a binomial error structure and a logistic link function to explore whether the adults and juveniles tracked in the same years on southward migration differed in the proportion of individuals that (a) crossed the Sahara and (b) did so with a non-stop flight from the Netherlands.
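A minimal version of these models in R might look as follows; the data frames and column names are illustrative assumptions, not the authors' code.

```r
# (a) Did the juvenile cross the Sahara on its first southward migration?
# 'juv' holds one row per tracked juvenile; 'crossed_sahara' is 0/1.
m_cross <- glm(crossed_sahara ~ depart_day + factor(hatch_year) + sex,
               family = binomial(link = "logit"), data = juv)
anova(m_cross, test = "Chisq")   # chi-square tests for each term

# Age-class comparison: proportion of adults vs. juveniles (same years) crossing the Sahara.
m_age <- glm(crossed_sahara ~ age, family = binomial(link = "logit"),
             data = both_ages)
summary(m_age)
```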
| Mortality
Where and when mortality occurred was assessed on the basis of data collected from our satellite transmitters. The adults outfitted with a 9.5-g transmitter were considered dead when their transmitter's built-in activity sensor remained constant. The 5-g transmitters that four adults and all juveniles carried did not have such an activity sensor but did have a temperature sensor; we considered these birds dead when the measured temperature started to follow a day-night rhythm. These assumptions are also supported by the fact that we have never subsequently observed any of these adults to be alive during our extensive resighting efforts of marked birds (Verhoeven et al., 2018).
For these known-fate data, we used generalized linear models with a binomial error structure and a logistic link function to test whether (a) the likelihood that juveniles died on their first southward migration was related to their departure date, sex or the year the juvenile hatched; and (b) the likelihood that juveniles died between departure from and return to the Netherlands was related to their hatch date, sex or the year they hatched. We also made two figures to illustrate where (Figure 3) and when (Figure 4) mortality occurred during juvenile migration. We used the same type of generalized linear models to explore whether the adults and juveniles tracked in the same years differed in the proportion of individuals that died during south- and northward migration.
| Southward migration
We obtained migratory tracks from only 28 juveniles, because most tagged juveniles died after being tagged and before migrating southward-a period known to have high juvenile mortality (see Loonstra, Verhoeven, Senner, et al., 2019). All 28 juveniles started their initial southward migration at approximately the same age (88 ± 11 days). Thus, their departure date was positively correlated with their hatch date (Figure 2; Table S1). Furthermore, all juveniles departed the Netherlands later than did tracked adults (Figure 2; Table S2). After crossing 40°N, juveniles first encountered tracked adults from our study population with whom they could potentially spend the nonbreeding season or migrate alongside to West Africa; this is because some adults stopped for prolonged periods at sites around the Mediterranean, and either stayed there for the nonbreeding season or eventually continued on to cross the Sahara (Figure 1). Of the juveniles, four spent the nonbreeding season around the Mediterranean, and 19 migrated to the nonbreeding grounds in West Africa. However, of the 19 juveniles that migrated to the nonbreeding grounds in West Africa, 12 flew non-stop from the Netherlands (63%), while of the 28 adults that migrated to West Africa, only two flew non-stop (11%, statistics in Table 1). Furthermore, five of the seven juveniles that did stop on the Iberian Peninsula en route to West Africa stayed for only 1 or 2 days and departed before the tracked adults that stopped there for a prolonged time (Figure 1). Because juveniles more often flew non-stop and made shorter stops, the timing of the southward Sahara crossing was less variable among juveniles than among adults (Table S2). Whether juveniles flew non-stop to West Africa from the Netherlands did not depend on their departure date from the Netherlands (β departure date = 0.03 ± 0.05, χ² = 0.32, df = 1, p = 0.574, Table 2). Anecdotal observations of two juveniles which hatched 1 day apart and which both departed the Netherlands on 6 August 2016 show that one stopped and one did not stop en route to West Africa (Figure 4).
| Northward migration
In contrast with southward migration, an immature's departure from West Africa was not related to its hatch date (Table S3). The immature birds that departed earliest for northward migration had the opportunity to continue to the Netherlands with adults from the same breeding population. However, all but one of the immatures left the Mediterranean later than any adult (Figure 2). Consequently, the arrival of immatures to the Netherlands was on average 36 days later than that of adults (Table S2), with some immatures arriving as late as 7, 14 and 15 May. At that point in the breeding season, all adults have already laid their first clutch (latest first clutch initiation date: 1 May, Verhoeven et al., 2020). Similar to the departure from West Africa, the departure from stopping sites and the subsequent arrival at the breeding grounds were not associated with an immature's hatch date (Figure 2).
| Routes and orientation of juveniles and adults
During southward migration, juveniles that migrated later tended to orient more towards the south and less towards the southwest after crossing the Mediterranean (from 36°N to 32°N, Table S7).
This gradual shift in juvenile routes might also explain why their variance in longitudinal movement across this latitudinal segment was different from the variance observed in adults (Table S9). The average longitude of juveniles upon arrival to the nonbreeding grounds (12.03°W at 20°N) was further to the east than that of adults (Table S6). During northward migration, juveniles that migrated later tended to fly more towards the north and less towards the northeast after departing the nonbreeding grounds (from 20°N to 24°N, Table S8). However, we otherwise found no clear relationships between the route and orientation of juveniles and their date of migration, sex or the year of migration during either south-or northward migration (Tables S4, S5, S7 and S8).
On average, juveniles and adults took similar routes during southward migration (Figure 2; Table S6). However, one juvenile flew south via northern Italy, a stopping site well to the east of the migratory corridor of adults (Figure 1). The southward routes of juveniles were on the whole also more variable than those of adults (Table S6), although the proportion of juveniles that crossed the Sahara towards West Africa (19/23, 83%) was the same as in adults (54/66, 82%, p = 0.932, Table 1). During northward migration immatures again used similar routes to adults, but some made stops well outside of the adult migratory corridor, such as at sites in Libya, Sicily, northern Italy and even as far east as the coast of Albania, which is more than 1,000 km outside of the adult corridor (Figure 1; Table S6). As a result, the northward routes of immatures, like their southward routes, were more variable than those of adults, especially north of the Sahara (Figure 1; Table S6).
| Southward migration
During southward migration in 2016, one juvenile and one adult died.
Notably, the juvenile perished in the Atlantic after overshooting the godwit nonbreeding area in West Africa (Figure 3). During southward migration in 2017, six juveniles and one adult died (Figure 3).
Mortality of juveniles during southward migration was not related to their date of departure, sex or the year in which they hatched (Table 2). For example, we observed two juveniles of similar ages departing on the same day, of which one died and the other survived ( Figure 4). Most juveniles (5/7) that died during southward migration did so during their very first flight from the Netherlands; one died at its first stopping site, and the other died during the second leg of its southward migration. The mortality of juveniles during southward migration was higher than that of adults (25% vs. 6%, p = 0.039, Table 1). near Leiden, the Netherlands (Figure 3). The mortality of juveniles on northward migration was slightly lower than that of adults, but not significantly so (20% vs. 27%, p = 0.584, Table 1). Furthermore, the mortality of juveniles outside the Netherlands-that is, between departure from the Netherlands and subsequent return 1-3 years later-was not related to their hatch date, sex or hatch year (Table 2).
| DISCUSSION
We simultaneously tracked juvenile and adult godwits to elucidate how and when individual differences in migration patterns arise.
Specifically, we aimed to disentangle whether individual differences in the migratory patterns of adults are a result of inherited differences or experienced differences in environmental conditions, and thus result from developmental plasticity. Juveniles and adults appeared to be rather dissimilar, suggesting that differences among individuals in migratory patterns did not reflect (epi-)genetically inherited factors. One environmental factor did influence migratory behaviour: the departure timing of juveniles on southward migration correlated with their hatch date. However, we detected no other effects of hatch date, hatch year or sex on the routes, destinations or mortality of juveniles.
| Individual differences and developmental plasticity
Our results demonstrate that juveniles differ from each other in whether they cross the Sahara and how they migrate south and north. One possibility is that these individual differences have a genetic or at least a heritable origin (e.g. Pulido et al., 2001). Confirming such a possibility requires simultaneously tracking juveniles and their parents. Unfortunately, we have succeeded in tracking only one mother-daughter pair thus far. On southward migration, both mother and daughter flew to West Africa, but their migratory timing was considerably different. After arriving in West Africa, and on northward migration, their migration again differed considerably (Figure S1). The tiny sample size makes these observations anecdotal. However, we have also performed a large-scale translocation and delay experiment with hand-raised siblings, which showed that siblings frequently migrate differently from each other, with some individuals crossing the Sahara on southward migration when their sibling did not (Loonstra et al., in review). In addition, we have previously shown that the use of stopover sites in Spain and Portugal is unlikely to be heritable. We therefore currently have no reason to assume that the observed differences among juveniles are inherited.
Our study also clearly demonstrates that juveniles and adults have different migration patterns. This is most obvious in terms of timing and route choice, as some juveniles visited sites we have never observed being used in the migrations of more than 200 adults tracked across more than 10 years (Hooijmeijer et al., 2013; Senner, Verhoeven, et al., 2015; Senner et al., 2019; this study). There are three possible explanations for this large discrepancy between adults and juveniles. The first is that we have observed novel behaviour on the part of a cohort of individuals and these individuals will continue to follow their juvenile routine, thus yielding a new adult migration pattern if these individuals continue to survive. The second is that we observed normal juvenile godwit behaviour, and that those juveniles with markedly different migrations from adults will either die or never breed. The third is that we observed normal juvenile godwit behaviour, but godwits change their migration later in life rather than continuing the movement patterns exhibited in their first year of life. Under both of the latter two scenarios, the current adult pattern would persist. Although recent results from other species (e.g. Meyburg et al., 2017) are in line with the second option-selective death of dissimilar juveniles-we believe that the most likely scenario in godwits is the third option.
We propose that juvenile godwits change their migratory patterns later in life on the following grounds: (a) The timing of godwit breeding has not changed by more than a week in either direction in the past 15 years (Schroeder et al., 2012;Verhoeven et al., 2020).
This means that in the past 15 years, the earliest juveniles-those hatched in the first week of May-could, at the earliest, have left the Netherlands by the end of June. In addition, we now know that juveniles hatched in the Netherlands return to the Netherlands and that their average life span is ~6 years. Yet, we have observed adults leaving the Netherlands as early as the end of May and beginning of June (Figure 2). It necessarily follows that in these cases, these adults left earlier than they did on their first southward migration as juveniles. Moreover, (b) ringing recoveries of juveniles banded in the Netherlands over the past 70 years indicate that the later migration and different routes of juveniles compared to adults during northward migration are not a new phenomenon (Beintema & Drost, 1986; Haverschmidt, 1963).
More importantly, our results show that the juveniles that migrate later and use different routes than adults are not more likely to die.
Thus, if these age-dependent routes and timing are consistent with 70 years of ringing data, and not all juveniles with these different migration patterns die, it is likely that these juvenile godwits change their migration following their first migrations. We therefore expect godwits to make considerable changes to their migration later in life, though only the lifelong tracking of these same individuals will establish this for certain. Because the timing of juvenile godwits is expected to change later in life, lifelong tracking might also give us still more insight into why we observe large differences among adults.
| Environmental effects on juvenile migration
If inherited routines do not explain differences among juveniles or between juveniles and adults, what factors do have an influence?
Because juveniles migrated significantly later than adults, the fact that they also flew non-stop from the Netherlands to West Africa more frequently and had higher mortality rates during their first southward migration could be related to seasonal changes in the environment. For example, wind conditions might become more favourable later in the season or the availability of food and social information might decrease over time (e.g. Kölzsch et al., 2016). Within juveniles, however, we found no evidence for any seasonal patterns in their route, mortality rate or behaviour. Note that in two cases, we observed different migratory behaviours among pairs of juveniles that departed from the breeding areas on the same day. In one pair of juveniles, one individual migrated non-stop to West Africa while the other did not; in another pair, one survived southward migration while the other died before reaching its nonbreeding site. We realize that these anecdotal observations are not conclusive evidence that there are no seasonal patterns. However, we also observed that among both the earliest and latest juveniles that crossed the Sahara, individuals both survived and died (Figure 4). We conclude from these combined observations that the destination, migratory duration and mortality rate of juveniles are not simply a plastic response to seasonal changes in the physical environment. This further suggests that the differences in destination, duration and mortality rate between adults and juveniles are not a simple matter of timing-related changes in the environment.
| Mortality
Most juveniles that died during migration did so during their first migratory flight from the breeding grounds and at higher rates than adults. Similarly, on northward migration, most mortality occurred during the first northward flight or immediately after a juvenile arrived at a location which was new to that individual. Both the higher mortality of juveniles compared to adults and the specific moments at which juveniles died suggest that performing novel migratory actions is risky. Therefore, making use of experience by repeating what worked previously might be beneficial and lead to higher fitness through either higher survival, higher breeding success or both.
The high degree of breeding and nonbreeding site fidelity of animals is thought to exist for this reason, and the benefits of fidelity might thus also explain why individuals adopt individually specific migratory routines (Cresswell, 2014; Winger et al., 2019; but see Lok et al., 2013). Support for this notion comes from tracking studies that have followed the same individuals for multiple years, with most showing that individuals are consistent in their spatiotemporal distribution across the annual cycle (Conklin et al., 2013; Pedersen et al., 2018; Vardanis et al., 2011; Verhoeven et al., 2019). However, there are also migratory species with extended developmental periods during which they improve their migratory routines (Campioni et al., 2020; Mueller et al., 2013; Sergio et al., 2014) and even develop new routines (Teitelbaum et al., 2016; Tombre et al., 2019). This suggests either that it is not always beneficial to adhere to an individual's initial routine or that certain species have the capacity to change later in life while others do not.
| Species-specific differences in developmental plasticity
If godwits do change their migration pattern after their first year of life, as we expect, they are similar to other migratory species that have extended developmental periods (Mueller et al., 2013;Sergio et al., 2014;Teitelbaum et al., 2016;Tombre et al., 2019). In contrast to these species, the developmental period in species with a strong 'innate' control of their migratory routine is thought to be negligible (Berthold et al., 1992;Gwinner, 1996;Pulido et al., 2001). Thus, there appears to be a gradient in the amount of developmental plasticity that different species exhibit during their lives with respect to their migratory routines. But what could cause such differences among species?
First, we find it intriguing that the species with extended developmental periods-black kites Milvus migrans, whooping cranes Grus americana and barnacle geese Branta leucopsis-are all long-lived (Sergio et al., 2014; Teitelbaum et al., 2016; Tombre et al., 2019). Short-lived passerines are often hypothesized to be less plastic in developing routines throughout life (Cresswell, 2014; Pedersen et al., 2018; and see Karagicheva et al., 2018). This suggests that the general life history of a species, like a species' longevity, might play an important role in whether or not different levels of developmental plasticity are adaptive. Over evolutionary time-scales, different species might therefore have evolved different levels of developmental plasticity depending on what is most adaptive in their particular circumstance (Botero et al., 2015; Karagicheva et al., 2018). These differences might then have become genetically assimilated, leading to species-specific responses to environmental contexts (i.e. reaction norms) and setting organismal limits on the amount of plasticity they can exhibit (Pigliucci et al., 2006). Experiments with hand-raised siblings of different species, for instance, have clearly shown that such species-specific reaction norms exist under laboratory conditions (Berthold, 1996; Gwinner, 1996).
However, the observed levels of species-specific developmental plasticity must also be conditional on the environment. For example, populations of the same species exhibit different degrees of variability in their migrations (Flack et al., 2016;Loonstra, Verhoeven, Zbyryt, et al., 2019). Similarly, individuals from the same population can vary considerably (Gill et al., 2019;Verhoeven et al., 2019), especially in populations that are partially migratory (Chapman et al., 2011). For example, hand-raised godwit siblings can be induced to show different migratory strategies depending on the context in which they are released (Loonstra et al., in review). Even a single individual can be resident or migratory at different stages of its life (Hegemann et al., 2015). Thus, the apparent gradient in developmental plasticity between species, populations and individuals could be the result of both organismal and conditional differences.
In order to understand how much, how fast and when animals can adjust their migrations, we need to identify to what extent the differences between species, populations or individuals are currently organismal or conditional. Since seasonal migration is a 'syndrome' (i.e. an amalgamation of many different traits, Piersma et al., 2005), such an analysis will be daunting and need to encompass: (a) determining more comprehensively which component traits contribute to an individual's observed migratory routine (e.g. photoperiodic control of pre-migratory fattening, physiological control of fattening rates, absolute potential fuel loads, etc.); (b) experimentally revealing the organismal limits of these component traits, that is, their full reaction norms; (c) employing transcriptomic and genomic approaches to understand the genetic basis that may underlie these component traits, to try to disentangle the extent to which plasticity is organismal or conditional (Horton et al., 2019); and (d) developing an ecological understanding of how trade-offs among these component traits may limit the potential for plastic responses to changes within the environmental range covered by individual reaction norms. Studies of bird migration thus have the potential to illuminate the most fundamental questions about the generation of phenotypic variation and also help us understand the organismal limits to contemporary global change (Gienapp et al., 2014).
ACKNOWLEDGEMENTS
We would like to thank the members of our field crews, students and volunteers for their help with catching and tagging. Special thanks go to Bote de Boer, Jan de Jong and Murk Nijdam who in times of need helped us with catching juveniles. We thank Tienke Koning for fundraising, and Julie Thumloup and Yvonne Verkuil for their help with the molecular sexing. We are grateful to many farmers, most of whom are organized in the Collectief Súdwestkust, and
"year": 2021,
"sha1": "f6fa45038f2ed35fa8a79eff8a3b367f035f224d",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1365-2656.13641",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "02a080151a347ada22aa8411aec2ce283883ad35",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Full-Scale Blast Tests on a Conventionally Designed Three-Story Steel Braced Frame with Composite Floor Slabs
This paper summarizes the findings of two full-scale blast tests on a steel braced frame structure with composite floor slabs, which is representative of a typical office building. The aim of this research study was to experimentally characterize the behavior of conventionally designed steel braced frames under blast loads when enclosed with conventional and blast-resistant façades. The two tests involved a three-story steel braced frame with concentric steel braces, which was designed to resist typical gravity and wind loads without design provisions for blast or earthquake loads. During the first blast test, the structure was enclosed with a typical, non-blast-resistant, curtainwall façade, and the steel frame sustained minimal damage. For the second blast test, the structure was enclosed with a blast-resistant façade, which resulted in higher damage levels with some brace connections rupturing, but the building did not collapse. Observations from the test program indicate the appreciable reserve capacity of steel braced frame structures to resist blast loads.
Introduction
As a result of past terrorist attacks, such as the Alfred P. Murrah Building in Oklahoma City in 1995 and the attacks against the World Trade Center in New York City in 2001, researchers have been working to enhance the physical security of buildings in order to protect human lives and infrastructure. As such, many important buildings require consideration of blast loads in their design. Blast pressures can be man-made or accidental. A required feature of blast-resistant buildings is the use of hardened envelope façade systems. Their primary function during a blast event is to protect occupants from direct exposure to the blast wave, blast-induced debris from failure of the external envelope of the building (breakup of window glass, exterior, and interior walls), and free-fall injuries in case they are ejected from elevated floors [1]. While the performance levels of hardened façade envelopes vary depending on the material they are made of and the required level of protection [2], in general, they are designed to plastically deform but not to detach and penetrate into the building, thereby protecting occupants.
Hardened building envelopes are designed to absorb the applied blast load through elastic and plastic deformation; they generally collect a relatively high portion of the total applied blast load impulse. Then, the blast load absorbed from the façade is transferred to the structural frame of the building as dynamic reactions through the supporting diaphragm, beams, and columns. Depending on the design basis threat, the lateral-force-resisting system (LFRS) of the building is designed accordingly to resist these loads by following the generally acceptable approach of balanced or capacity design [3]. Generally, the structural frame of reinforced concrete and steel frame buildings that are designed and detailed for seismic loads tends to perform better under blast loads because of its increased ductility [4,5]. Nonetheless, a concern is raised for the LFRS of existing buildings that have not been designed to resist a design-basis blast threat nor designed for seismic loads. This may be the case for older buildings when retrofitted with hardened, blast-resistant façade systems. Generally, in this situation, cast-in-place reinforced concrete structures are known to perform better under blast loads due to their higher mass and monolithic construction [6]. Conversely, steel frame buildings are of higher concern due to lack of available data for their global response to blast loads. Their relatively lower mass and lack of ductility can negatively affect their response. In such buildings, their LFRS may need to be retrofitted accordingly [1] by applying the balanced design philosophy to ensure that the higher resistance of the façade will not overload the LFRS, thereby compromising the structural integrity of the building.
To date, research studies investigating the response of steel frame buildings to blast have primarily focused on the response of isolated components under short-duration blast loads that are likely to only cause localized damage to one or a few columns, which can potentially trigger progressive collapse [5,7] similar to the partial collapse of the Alfred P. Murrah Building in Oklahoma City in 1995. In that context, many studies have investigated the response of isolated steel members numerically [8-10] and with simulated or actual blast tests [5,11-13].
However, evaluations of the response of buildings under relatively long-duration blast loads, where the blast overpressure is expected to globally load the building with a nearly uniform load distribution on the building envelope [14], are mostly computational and sparse. In addition, most of these studies have focused on the response of strengthened structures by means of improved connection details and/or hardened members. Recently, Yussoft et al. [15] evaluated numerically the response of steel frames with strengthened joints, while Weli and Vigh [16] evaluated the response of a special self-centering steel moment frame (SMF) showing that such frames can reduce the residual drift after a blast event. Nourzadeh et al. [17] demonstrated computationally that the lateral story drifts on a 10-story seismically designed reinforced concrete (RC) building subjected to moderate blast loads resulting from a 1000 kg TNT charge at standoff distances between 15 and 30 m can be significantly larger than those observed as a result of earthquake ground motions for selected cities in the eastern and western regions of Canada. Ibrahim and Nabil [18] also investigated the response of an RC frame to blast loads and concluded that when columns were designed to have higher ductility, the building's response was significantly improved.
Studies on the response of conventionally designed buildings with no seismic or blast-resistant design considerations under longer-duration blast loads are limited. For this reason, the U.S. Department of State (DoS) Bureau of Diplomatic Security, in collaboration with research partners from Protection Engineering Consultants and the Energetic Materials Research and Testing Center (EMRTC) of New Mexico Tech, directed and funded a multi-year research program focusing on characterizing the response of conventional steel frame buildings to blast loads.
Under this program, McKay et al. [19] performed a comprehensive analytical study on the blast response of common types of steel frame buildings with different numbers of stories and different types of LFRSs, namely SMF and steel braced frames (SBF). The analytical study results suggest that buildings with non-blast-resistant façade envelopes that are expected to fail early under blast while posing a significant hazard to building occupants due to failed debris are not prone to collapse, since the rapid failure of the façade limits the peak applied lateral load to the building's frame. Therefore, damage to their LFRS is not expected. Conversely, the lateral load demand on the LFRS of these conventional buildings enclosed with a blast-resistant façade is higher due to the façade's ability to absorb and transfer a larger portion of the applied blast load to the LFRS without failing. The study also showed that steel buildings of 12 stories or more with hardened envelopes are not prone to collapse due to their increased stiffness needed for wind serviceability, Vibration 2021, 4 867 while there was strong evidence that buildings with less than three stories were vulnerable to severe damage on the LFRS, which could lead to total collapse.
Mid-rise steel frames with three to six stories were found to be more susceptible to localized damage on their LFRS, but their safety margin against collapse was unclear [19]. This finding motivated the full-scale blast test program presented in this paper, which focused on experimentally testing the response against relatively long duration blast loads of a three-story steel frame building that was designed to only resist typical gravity and wind loads with a basic wind speed of 51 m/s. No provisions for blast or seismic loads were considered for the design of the LFRS of the test structure. Two tests were performed. During the first test, the building was enclosed with a conventional curtainwall glazed façade, and during the second test, the building was enclosed with a blast-resistant façade. The applied blast loads were similar for both tests. During the first test, the frame responded elastically with no failure on the LFRS, while the glazed façade almost completely shattered. The blast-resistant façade on the second test collected a higher portion of the blast load and transferred it to the LFRS, which resulted in partial compromise of the LFRS. Some first-floor gusset plate connections completely ruptured, and some others yielded but did not fail. Despite the partial damage of the LFRS, the frame had a residual permanent lateral deformation at the roof level of less than 3 mm, indicating that there was an appreciable margin of safety against collapse.
Problem Statement, Scope, and Objectives
A question is raised when existing buildings, especially steel frame structures, are retrofitted with blast-resistant façades to meet physical security requirements as to whether their LFRS should also be retrofitted. This is particularly true in cases where the LFRS is designed only to resist typical gravity and wind loads with no seismic detailing or blast load considerations. This research study focused on characterizing the behavior of conventional steel structures subjected to relatively high-magnitude, long-duration blast loads that can globally load the building envelope with approximately uniform blast pressures. Such blast loads can potentially result in the global sway response of the structure, causing the localized damage of key structural members up to complete collapse. Although not presented in this paper, another objective of the test series was to identify key factors to consider for assessments of existing buildings and simplified structural analysis methods for evaluating the performance of steel frame buildings under blast loads. The structural analysis methods were derived from the experimental data with the intent of allowing for efficient yet accurate modeling of buildings in response to blast loads using conventional structural analysis software. This will be a topic of a future publication.
Description of Three-Story Steel Frame Test Structure
The test structure designed for this research program was a full-scale, three-bay × two-bay three-story steel frame with typical square bays, 6.10 m long. It was constructed at the field laboratory of the EMRTC in Socorro, NM. The frame's story height was 4.10 m, and it comprised steel-concrete composite floors. Figure 1a,b show the typical floor and roof plan of the test frame, respectively. The LFRS consisted of concentric steel braces, which are commonly known as chevron braces. The building had two sets of chevron braces per floor parallel to the 12.20 m long side along gridlines B and C, which are shown in Figure 2a. Along the 18.30 m long side, there was one set of braces per floor along gridline 2, as shown in Figure 2b. The LFRS was designed to resist typical wind loads without provisions for resisting earthquake or blast loads. For the first blast test, the frame was enclosed by a conventional glazed curtainwall façade. For the second blast test, the façade enclosure was replaced with a hardened (blast-resistant) façade, which was designed to plastically deform but not rupture under the applied blast load.
Choice of Test Structure
The choice of this particular steel structure type and number of stories was informed by an earlier extensive analytical study by McKay et al. [19]. This study focused on numerically assessing the performance of different types of steel buildings under far-field, long-duration blast loads. For the study, conventionally designed steel buildings with the two most common categories of LFRS, (a) steel braced frames (SBF) (concentric, eccentric, cross-braced) and (b) SMF, were included. In summary, the findings of the study suggested that as the number of stories increases, the collapse potential due to blast loads drops. In addition, most steel structure types that are enclosed with conventional (non-blast-resistant) façades are not prone to collapse due to the early failure of fragile façade systems such as non-blast-resistant glazing. Conversely, the load demand on the LFRS of steel frame buildings with a hardened façade is higher because a non-failing façade collects a higher portion of the blast load that is eventually transferred to the LFRS of the building.
Among the different LFRS types that were included in the study [19], the most vulnerable type was SBFs. Buildings with less than three stories were found to be the worst performing ones, since they were vulnerable to severe damage during blast events. Conversely, buildings of 12 stories (or more) were not prone to collapse. However, mid-rise SBFs, i.e., 3 to 6 stories, were found to be susceptible to localized damage on the LFRS, but their safety margin against collapse was unclear. Therefore, for the current test program, it was decided to use as a test structure a 3-story SBF that was designed for conventional loads only, i.e., gravity and typical wind loads with a basic wind speed of 51 m/s. Its LFRS comprised concentric (chevron) steel braces, which constituted one of the most vulnerable brace configurations among those considered in the study [19], i.e., cross bracing, eccentric bracing, and concentric bracing.
Design of Test Structure
The test structure was designed to represent a typical office building. The design loads were based on ASCE/SEI 7-10 [20]. For strength and serviceability, the building was designed to meet the requirements of AISC 360-10 [21]. The dead load due to the self-weight of the slab and steel members was 2.3 kN/m², and an additional dead load of 0.5 kN/m² was assumed to account for permanent floor loads. The design floor live load was 4.8 kN/m², resulting in an ultimate design load (UDL) (1.2 × dead + 1.6 × live) of 11.0 kN/m². Figure 1a shows a plan view of the second and third floor with the member sizes that were used. The columns were on a 6.10 m long square grid forming three bays in the East-West (EW) direction and two bays in the North-South (NS) direction. Each bay had two intermediate floor beams at the third points along the NS direction. All spandrel beams, as well as the intermediate girders in the EW direction, were W18 × 35. All intermediate beams in the NS direction were W12 × 19. All steel beams acted compositely with a 114 mm thick normal-weight (24 kN/m³) concrete slab that was poured over a 20-gauge galvanized composite metal decking with 51 mm deep corrugations that were parallel to the EW direction. The composite action between steel beams and concrete slab was facilitated through 19.1 mm diameter, 89 mm long shear studs. Shear studs had a 300 mm spacing and were placed at each low flute of the metal decking. The slab had 12.7 mm diameter reinforcing bars in both directions, which were placed 19 mm below the top of the concrete slab. Figure 1b shows the roof layout that was designed for a live load of 1.0 kN/m². The beams spanning between columns were the same size as those of the floors below. Each bay had three intermediate open-web steel joists at the quarter points along the NS direction, size 16K2. The roof frame was covered with a 38 mm deep, 19-gauge galvanized roof deck type "B" per the Vulcraft product catalogue [22].
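For reference, the factored load quoted above follows directly from the stated dead and live loads: w_u = 1.2 × (2.3 + 0.5) + 1.6 × 4.8 = 3.36 + 7.68 ≈ 11.0 kN/m².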
The story height was 4.10 m, and all columns were W14 × 43, extending over the full height of the building. The LFRS was designed to resist a typical wind load with a basic wind speed of 51 m/s for strength and 40 m/s for serviceability for a maximum inter-story drift of h/400, where h is the story height. Earthquake or blast loads were not considered. Figure 1 indicates the locations of the chevron braces. Specifically, along the NS direction, there were two sets of braces per floor, one at each interior column line (gridlines B and C). Likewise, along the EW direction, one brace line was used at the interior column line (gridline 2). All braces were made of square hollow structural sections (HSS). Their sizes at each floor and in each direction are indicated in Figure 2.
Structural Connection Details
All beam-to-column connections were bolted with double angles using 22 mm diameter bolts at both legs, as shown in Figure 3a. All floor-beam-to-girder connections also used bolted double angles with two 22 mm diameter bolts, as shown in Figure 3b. All brace connections were welded using an 11 mm thick gusset plate, as shown in Figure 3c. The ends of the HSS brace members were slotted and inserted into the gusset plate such that all four contact edges were fillet welded to the gusset plate with a 4.8 mm fillet weld, resulting in a total weld length per joint of 508 mm (4 × 127 mm). All columns were connected to the concrete foundation with typical base plate connections using four 22 mm diameter cast-in-place anchor rods.
Material Properties
All wide flange columns and beams were made of ASTM A992 steel [23]. The braces were made of ASTM A500 Grade B steel [24]. The clip angles for the joints were made of ASTM A36 steel [25], while the deck utilized ASTM A653, Grade 33 steel [26]. All bolts for the steel frame joints were ASTM A325 [27], and all welds were made with E70 electrodes. The 28-day cylinder compressive strength of all concrete (foundation, slab on grade, floor slabs) was 28 MPa, and the cast-in-place anchor rods used for the column base plate connections had a 720 MPa specified yield stress. Finally, all reinforcing bars were made of ASTM A615, Grade 60 steel [28].
Conventional Façade Details
For the purposes of the first blast test, the steel frame was enclosed in all four elevations with a conventionally designed, non-blast-resistant, curtainwall glazed façade. The façade was designed to conform with the requirements of IBC 2006 [29] for a basic wind
Blast-Resistant Façade Details
For the purposes of the second blast test, a blast-resistant façade was used to enclose the test structure at all four elevations. Due to budget restrictions, instead of a blast-resistant glazed curtainwall façade with aluminum mullions, a steel façade was used. The steel façade was designed to meet similar strength and performance requirements as the equivalent aluminum glazed one with similar mass. Specifically, the façade system was designed for a low-level-of-protection (LLOP) per PDC-TR 06-08 [2], which allows permanent plastic deformation, but not rupture, preventing debris with significant velocities to enter the building and cause serious injuries or fatalities. For the 4.10 m floor height of the test frame herein, the maximum allowable deflection was 215 mm.
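If one assumes that this limit corresponds to a support rotation of roughly 6° of the vertical façade members over the 4.10 m story height (an interpretation on our part, not stated explicitly in the cited criteria), the quoted value follows as δ_max ≈ (h/2)·tan θ = (4100 mm/2) × tan 6° ≈ 215 mm.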
A view of the test structure with the steel façade is shown in Figure 5a. The blast-resistant steel façade comprised vertical MC7 × 22.7 steel channels, made with A36 steel [25], spanning between floors, with an on-center spacing of 610 mm, as indicated in Figure 5b. At the first floor, the steel channels were covered with a 16 mm thick A36 steel plate that was welded to the exterior flange of the channels. The channels on the second and third floor were covered with horizontally spanning, 38 mm deep, 16-gauge galvanized deck type "B" per the Vulcraft ® product catalogue [22] that was fastened to the channels with self-tapping screws spaced at 300 mm. It is noted that a 16 mm thick plate on the first story was used, instead of the steel deck, to meet forced entry/ballistic resistance (FE/BR) performance requirements, which are typically required for some blast-resistant buildings at the first floor to protect against forced-entry attacks and ballistic threats.
As a result of its relatively high stiffness and strength, special attention was given to the connection details of this façade to the steel frame to avoid its participation in resisting lateral loads through shear-wall-type action. This requirement was facilitated by positively connecting the top ends of the MC7 × 22.7 steel channels with a bolted connection
Blast-Resistant Façade Details
For the purposes of the second blast test, a blast-resistant façade was used to enclose the test structure at all four elevations. Due to budget restrictions, instead of a blast-resistant glazed curtainwall façade with aluminum mullions, a steel façade was used. The steel façade was designed to meet similar strength and performance requirements as the equivalent aluminum glazed one, with similar mass. Specifically, the façade system was designed for a low level of protection (LLOP) per PDC-TR 06-08 [2], which allows permanent plastic deformation, but not rupture, preventing debris with significant velocity from entering the building and causing serious injuries or fatalities. For the 4.10 m floor height of the test frame herein, the maximum allowable deflection was 215 mm.
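As a rough illustration of how a deflection limit of this kind can be obtained, the sketch below back-calculates the allowable mid-height deflection of a one-story-spanning façade member from an assumed support rotation limit. The ~6° rotation value is an assumption chosen so that the result matches the 215 mm quoted above; the governing response limits in PDC-TR 06-08 depend on the component type and may be expressed differently.

```python
import math

def max_deflection_from_rotation(span_m: float, theta_deg: float) -> float:
    """Mid-span deflection (mm) of a member spanning `span_m` whose
    response limit is expressed as a support rotation `theta_deg`."""
    return (span_m * 1000.0 / 2.0) * math.tan(math.radians(theta_deg))

# Assumed values: 4.10 m story height and a ~6 degree support rotation limit.
print(round(max_deflection_from_rotation(4.10, 6.0)))  # ~215 mm
```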
A view of the test structure with the steel façade is shown in Figure 5a. The blast-resistant steel façade comprised vertical MC7 × 22.7 steel channels, made with A36 steel [25], spanning between floors with an on-center spacing of 610 mm, as indicated in Figure 5b. At the first floor, the steel channels were covered with a 16 mm thick A36 steel plate that was welded to the exterior flange of the channels. The channels on the second and third floors were covered with horizontally spanning, 38 mm deep, 16-gauge galvanized deck type "B" per the Vulcraft ® product catalogue [22] that was fastened to the channels with self-tapping screws spaced at 300 mm. It is noted that a 16 mm thick plate was used on the first story, instead of the steel deck, to meet the performance requirements for forced entry/ballistic resistance (FE/BR), which are typically required at the first floor of some blast-resistant buildings to protect against forced-entry attacks and ballistic threats.
As a result of its relatively high stiffness and strength, special attention was given to the connection details of this façade to the steel frame to avoid its participation in resisting lateral loads through shear-wall-type action. This requirement was facilitated by positively connecting the top ends of the MC7 × 22.7 steel channels with a bolted connection only. The bottom ends of the MC7 × 22.7 steel channels did not have any positive connection for gravity loads. Instead, they were in a track that allowed them to accommodate serviceability deflections and to slide as the building experienced sway motion and still be able to resist the applied blast loads via bearing-type action with the track. A close-up view of this connection is shown in Figure 5c.
Instrumentation
For both tests, a high-frequency data acquisition system was used with a sampling rate sufficient to capture a high-resolution signal from the various gauges that were attached to the test structure. All the recorded data were synchronized with the time of detonation of the explosive charge; therefore, throughout this paper, the time axis on the recorded data is time after detonation. The instrumentation included the following types of sensors:
• Pressure gauges (PG);
• Displacement gauges (DG);
• Strain gauges (SG) on selected key elements of the LFRS;
• Load cells (LC) on some façade-to-structure connection points.
An array of eleven (11) PGs was mounted to the front elevation of the structure, which was directly loaded by the airblast pressure, to capture the reflected pressure and impulse profile, as shown in Figure 6a. Another four (4) PGs were mounted to the roof and side elevation of the structure to measure the incident pressure, as shown in Figure 6b,c. Figure 6d shows the four (4) PGs that were attached on the first and second floors to measure the interior pressure.
For redundancy, a total of nine (9) DGs, three (3) per floor level, were used to measure the lateral displacement history at each of the three floor levels. All DGs were mounted to the back side of the structure, i.e., the elevation opposite to the one directly loaded by the blast pressure. The DGs were placed at the floor levels and were mounted to a non-responding steel-frame instrumentation tower, which is shown in Figure 7.
For estimating the peak axial load on the chevron braces of the LFRS, a total of 24 SGs were attached to the two brace lines along the load direction (Figure 2a), i.e., the braces along gridlines B and C. Specifically, a pair of SGs was attached to each of the 12 HSS brace members. Each SG pair was located at the midspan of its brace, with the two SGs glued to the two opposite sidewalls of the HSS tube. Figure 8 shows the locations and IDs of these SGs. More strain gauges were installed on other critical members such as columns and beams.
A few connection points of the façade to the steel frame were modified to accommodate LCs to directly measure the dynamic reaction load that the façade transferred to the building. The load cell locations for the first and second test are indicated in Figures 4a and 5a, respectively. The LCs were positioned at the second-floor level, as shown in Figure 9.
In addition to the instrumentation above, high-speed video cameras were used at different viewing angles, outside and inside the structure, to capture the response of the building as it was loaded by the blast. Finally, the building was surveyed before and after each test to assess any post-test damage and permanent lateral plastic deformation.
Test Procedure
Two blast tests were performed with the same explosive charge weight. The main difference between Test 1 and Test 2 was the façade enclosure of the steel frame. Test 1 was representative of a baseline case to evaluate the blast effects on a conventionally designed building enclosed with a conventional, non-blast-resistant, curtainwall façade (Figure 4). For Test 2, the same steel frame was enclosed with a blast-resistant façade (Figure 5) in order to evaluate the effects of a non-failing façade on the steel frame and on the LFRS of a conventionally designed steel frame building with no provisions for resisting dynamic blast loads. For both tests, the explosive charge was positioned on the south side and at the centerline of the structure, as shown in Figure 10. The standoff distance between the explosive charge and the structure was sufficiently large, resulting in an approximately uniform pressure on the reflected (south) elevation of the structure. Following completion of the first blast test, where the building was enclosed with the conventional glazed façade described in Section 3.5, the damaged façade was removed and replaced with the blast-resistant steel façade described in Section 3.6, in preparation for the second blast test.
Since the structure was designed to represent a typical office building, the two elevated floors, i.e., the second and third, were loaded with additional load consistent with the load combination for extraordinary loading events (1.2 × dead + 0.5 × live) per ASCE/SEI 7-10 [20]. This load combination represents the service loads on an operational building during an extraordinary event (fire, explosion, impact). For the design loads of the test structure, as outlined in Section 3.2 above, the required extra load, on top of the concrete slab self-weight, was 3.0 kN/m². The extra load on the test structure was approximated with nine (9) evenly spaced concrete blocks per bay with dimensions 0.9 m × 0.9 m × 0.65 m (height). Each of the two floors had five bays, resulting in a total of 90 concrete blocks (45 blocks per floor). A view of the blocks after being placed in the structure is shown in Figure 11.
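As a quick plausibility check of this approximation (a sketch only: the nominal 6.1 m × 6.1 m bay size and the 24 kN/m³ concrete unit weight are assumptions, not values stated in the text), the blocks produce a distributed load close to the 3.0 kN/m² target:

```python
# Sketch: equivalent superimposed floor load from the concrete blocks.
block_volume_m3 = 0.9 * 0.9 * 0.65           # block dimensions given above
unit_weight_kN_m3 = 24.0                      # assumed normal-weight concrete
block_weight_kN = block_volume_m3 * unit_weight_kN_m3   # ~12.6 kN per block

blocks_per_bay = 9
bay_area_m2 = 6.1 * 6.1                       # assumed nominal bay size

equivalent_load_kPa = blocks_per_bay * block_weight_kN / bay_area_m2
print(round(equivalent_load_kPa, 2))          # ~3.06 kN/m^2, close to the 3.0 kN/m^2 target
```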
Blast Test 1 on Frame with Conventional Façade
For the first test, the structure was enclosed with a conventional, non-blast-resistant, glazed curtainwall façade (Figure 4). Figure 12 shows a series of snapshots from high-speed video footage following the detonation of the explosive charge (Figure 12b) as the airblast shock wave propagated to the test structure. The airblast wave traveled from south to north (Figure 10) and first arrived at the south (front) elevation of the structure. The front face of the shock wave can be clearly seen in Figure 12c. Almost concurrently with the arrival of the shock wave, the first few glazed units shattered on the first floor, as indicated in Figure 12d. The glazing damage rapidly propagated to the entire front elevation of the building, as shown in Figure 12e,f. The exterior curtain wall façade failed almost instantaneously as the blast wave loaded the structure. Views of the post-test condition of the structure can be seen in Figure 13. Specifically, all front and rear elevation glazing completely shattered, and the aluminum mullions were severely damaged. Only a few glazed units remained intact at the side elevations of the building.
The peak lateral deflections of the structure during the first inbound response cycle were recorded by the nine displacement gauges indicated in Figure 7 and are summarized in Table 1. The measurements from the three sets of displacement gauges were relatively consistent. The peak lateral deflection of the roof was 33.0 mm. It was also observed that the peak deflection at all floor levels occurred almost concurrently, approximately 120 ms after the detonation. The peak deflection at the roof preceded the peak deflections of the lower floors by approximately 5-10 ms. The average deflections at each floor level from the three sets of displacement gauges are shown in Table 2, along with the estimated inter-story drift ratios, assuming the peak story deflections were concurrent. These inter-story drift ratios were approximately half of the 0.5% drift ratio of ASCE 41-06 [31] for performance level S-1 (Immediate Occupancy), which is described as "minor yielding or buckling of braces". In fact, due to the rapid failure of the curtain wall façade, no damage was observed on the LFRS of the structure. All braces remained straight, i.e., there were no signs of buckling, and none of the gusset plate connections of the braces (Figure 3c) ruptured. Essentially, the frame of the building responded elastically. As confirmed by survey data, there was no permanent lateral deformation of the structure. Despite the favorable response of the structural frame, occupant survivability was expected to be quite low, since most of the glazed façade shattered and debris penetrated the building. The deflected shape of the structure at different stages between the time of detonation (0 ms) and the time of peak inbound response (120 ms) is shown in Figure 14a.
In addition, Figure 14b shows the recorded deflection histories at each floor level, as measured from DG4, DG5, and DG6 for the second floor, third floor, and roof, respectively. Following the peak inbound response, there was a free-vibration stage after about 200 ms, from which the fundamental period of the frame was estimated to be ≈380 ms.
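A simple way to obtain such a period estimate from a recorded displacement history is to measure the spacing between successive displacement peaks in the free-vibration portion of the record. The sketch below uses a synthetic decaying signal purely for illustration; the actual processing applied to the DG records is not described here and may have differed.

```python
import numpy as np

def estimate_period(t_s: np.ndarray, disp_mm: np.ndarray) -> float:
    """Estimate the fundamental period (s) as the mean spacing between
    successive local maxima of a free-vibration displacement record."""
    peaks = [i for i in range(1, len(disp_mm) - 1)
             if disp_mm[i] > disp_mm[i - 1] and disp_mm[i] >= disp_mm[i + 1]]
    return float(np.mean(np.diff(t_s[peaks]))) if len(peaks) > 1 else float("nan")

# Synthetic decaying free vibration with a 0.38 s period (illustrative only).
t = np.linspace(0.2, 1.8, 1601)
u = 20.0 * np.exp(-1.5 * t) * np.cos(2 * np.pi * t / 0.38)
print(round(estimate_period(t, u), 2))  # ~0.38 s
```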
The detonation of the explosive charge resulted in a peak reflected specific impulse on the front (south) elevation (Figure 10) of the structure of 1350 kPa × ms. This impulse value was taken as the average value from the 11 PGs that were installed on the front elevation of the structure at various locations over its width and height. Considering the 18.3 m width and 12.3 m height of the building, the total impulse at the front elevation was 304 MN × ms. This relatively high impulse was not transferred entirely to the LFRS of the structure, since the early, almost instantaneous, failure of the glazed façade (Figure 12c-f) considerably reduced the "collected" blast load that was transferred to the mullions and eventually, through the mullion-to-spandrel beam connections (Figure 4c), to the steel frame. This observation was confirmed by the dynamic reaction histories measured from the two load cells connecting the mullions to the spandrel beams (Figure 9a). Specifically, the load histories of the two load cells were integrated and divided by the tributary area of each connection point, i.e., the spacing of the vertical mullions (1.2 m) × the floor height (4.1 m). The resulting value of ≈120 kPa × ms provided an estimate of the peak specific impulse that the glazed façade transferred to the frame.
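The impulse figures quoted above follow from straightforward arithmetic, reproduced in the sketch below. The tributary area (1.2 m × 4.1 m) is taken from the text; the integration of the load cell force histories is only indicated schematically, since the raw records are not reproduced here.

```python
# Total reflected impulse on the front elevation (Test 1).
i_r_kPa_ms = 1350.0                  # average peak reflected specific impulse
width_m, height_m = 18.3, 12.3       # front face dimensions
total_impulse_MN_ms = i_r_kPa_ms * width_m * height_m / 1000.0   # kPa*m^2 = kN
print(round(total_impulse_MN_ms))    # ~304 MN*ms

# Specific impulse transferred by the facade at one load cell location:
# integrate the reaction force history, then divide by the tributary area.
def transferred_specific_impulse(t_ms, force_kN, tributary_area_m2=1.2 * 4.1):
    impulse_kN_ms = sum(0.5 * (force_kN[i] + force_kN[i + 1]) * (t_ms[i + 1] - t_ms[i])
                        for i in range(len(t_ms) - 1))
    return impulse_kN_ms / tributary_area_m2   # kN*ms / m^2 = kPa*ms

# With the ~120 kPa*ms value reported above, the transferred fraction is small:
print(round(100.0 * 120.0 / i_r_kPa_ms, 1))    # ~8.9% of the reflected impulse
```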
This value was less than 10% of the 1350 kPa × ms reflected specific impulse measured by the 11 PGs at the front of the structure. Figure 8 shows the IDs of the 24 strain gauges that were attached to the braces of column lines B and C and used to estimate the peak axial forces in the braces. Table 3 summarizes the peak measured brace forces. All peak forces were recorded during the first inbound response cycle; therefore, for each pair of chevron brace members, one value is negative, indicating compression, and the other value is positive, indicating tension. The brace forces progressively reduce from the first to the third floor. While there was some variation in brace loads between column lines B and C, overall, the peak measured loads were similar, indicating a nearly symmetric loading and response of the frame to the applied blast loads. Table 3. Test 1-Peak measured brace forces from strain gauge data (refer to Figure 8 for strain gauge positions). 1 IDs within brackets are strain gauges on column line C braces. 2 Taken as the peak measured strain multiplied by the steel elastic modulus (200 GPa) and by the brace cross-sectional area. 3 Taken as the average of the measured peak force from the two strain gauges at each brace member.
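The notes to Table 3 describe how the brace forces were derived from the strain gauge readings; the sketch below reproduces that conversion. The cross-sectional area and strain values used in the example are placeholders, since the HSS brace size and the raw strain data are not restated in this excerpt.

```python
def brace_axial_force_kN(strain: float, area_mm2: float, E_MPa: float = 200_000.0) -> float:
    """Axial force from a measured strain: F = strain * E * A (Table 3, note 2)."""
    return strain * E_MPa * area_mm2 / 1000.0   # N -> kN

def brace_force_from_gauge_pair(strain_a: float, strain_b: float, area_mm2: float) -> float:
    """Average of the two midspan gauges on one brace (Table 3, note 3)."""
    return 0.5 * (brace_axial_force_kN(strain_a, area_mm2)
                  + brace_axial_force_kN(strain_b, area_mm2))

# Illustrative values only: a hypothetical 3000 mm^2 HSS area and ~700 microstrain.
print(round(brace_force_from_gauge_pair(680e-6, 720e-6, 3000.0)))   # ~420 kN
```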
Blast Test 2 on Frame with Blast-Resistant Façade
Following the completion of the first blast test, the damaged glazed façade was removed and replaced with the steel, blast-resistant façade (Figure 5). Figure 15 shows a series of snapshots from high-speed video footage following the detonation as the pressure wave propagated through the structure. The explosive charge weight and position were the same as for the first test (Figure 10). Figure 15c,d indicate the propagation of the front face of the pressure wave over the structure. A few milliseconds later, the front façade started responding to the applied load, as shown in Figure 15e. At this stage, the façade was still loaded with the inbound positive pressure from the airblast, which is reflected in the deformed shape of the corrugated deck siding on the second and third floors. The first-floor siding, which comprised a 16 mm thick steel plate (Figure 5a), remained practically elastic with minimal inbound deformation. As the pressure wave propagated, the negative phase of the airblast wave arrived, which is reflected in Figure 15f, where some of the corrugated deck siding started to detach from the supporting channel members (Figure 5b). Figure 16 shows the condition of the structure after the test. As shown in Figure 16a, most of the corrugated siding was detached due to the negative phase of the airblast load. Based on visual inspection, the vertical façade members at the second and third floors responded within the expected response limits. As previously noted in Section 3.6, the performance limit was a peak lateral deformation of 215 mm or less. In fact, as shown in Figure 16b, the residual plastic deformation of the façade channels was approximately 150 mm.
Most of the façade connections at the second-floor level failed, with the façade channels slipping out of the track during the rebound (Figure 5c). The connections at this level failed due to the higher dynamic reactions caused by the composite action of the 16 mm thick steel plate that was welded to the first-floor vertical façade channels (Figure 5b). Specifically, some bolts that connected the vertical channels to the frame ruptured in direct shear during the inbound response. During the negative phase of the airblast, the façade ends were no longer positively connected to the structure, causing a zipper-type failure mode with the façade detaching from the frame. This connection failure did not affect the inbound blast load that was transferred to the steel frame. During the inbound response, as bolts ruptured, the façade channels transferred the blast load to the structure through direct bearing action with the edge of the concrete slab, since the gap between the slab edge and the façade channels was less than 50 mm, as can be seen in Figure 9b.
In general, the peak lateral deflections of the frame were higher compared to the first test (Table 1). Table 4 shows the peak lateral deflections measured during the first inbound response cycle by the displacement gauges shown in Figure 7. At the roof level, the peak deflection of ≈85 mm occurred first, approximately 120 ms after the detonation; after about another 40-50 ms, the lower floor levels also reached their peak displacements. Table 4 notes: 1 Refer to Figure 7 for the location of each gauge. 2 All peak deflections occurred during the first inbound response cycle. 3 Time after detonation.
The average recorded deflections are shown in Table 5, taken as the average of the floor deflections from the three sets of displacement gauges (Table 4). This table also shows the estimated inter-story drift ratios, assuming the peak story deflections occurred concurrently. Specifically, the second-floor inter-story drift was 1.4%, which is only 0.1% lower than the 1.5% drift ratio limit of ASCE 41-06 [31] for performance level S-3 (Life Safety), described as "many braces yield or buckle but do not totally fail. Many connections may fail". In fact, as shown in Figures 17d and 18c, the gusset plate connections of the two first-floor braces that were in tension during the inbound response (direction of applied blast load) completely ruptured. Additionally, the connections of the conjugate first-floor braces that were in compression during the inbound response yielded and showed signs of fracture initiation at the weld lines, as indicated in Figures 17c and 18b. A magnified view of the gusset plate joint of Figure 18b can be seen in Figure 19, where the yielding and rupture initiation are apparent. On the second and third floors, where the story drift ratios were 0.5% and 0.2%, respectively, equal to or less than the drift ratio limit for performance level S-1 (Immediate Occupancy) per ASCE 41-06 [31], there was no apparent damage to the braces and their gusset plate connections. Despite the partial damage to the first-floor brace connections, it was verified by survey data that the permanent lateral deformation of the structure was less than 3 mm at the roof level. Globally, the frame did not show signs that it was at a near-collapse state. However, the building was not considered safe for immediate occupancy until after the damaged LFRS was repaired. The deflected shape of the structure at different stages between the time of detonation (0 ms) and a few milliseconds past the peak inbound response (120 ms) is shown in Figure 20a. Figure 20b shows the lateral deflection histories at the three floor levels as recorded by DG4, DG5, and DG6 for the second floor, third floor, and roof, respectively. The natural period of the frame was estimated to be approximately 820 ms, more than twice the period estimated during the first test (≈380 ms). The rupture of the two first-floor brace connections (Figures 17d and 18c) was the primary reason for the longer natural period: two of the four brace members on the first floor were no longer participating in the LFRS, so the lateral stiffness of the frame dropped, which increased the natural period. A secondary reason was the overall increase in the building mass due to the heavier blast-resistant façade (Figure 5) used for this test, compared to the lighter conventional façade used for the first test (Figure 4).
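The inter-story drift ratios reported in Tables 2 and 5 follow directly from the average peak floor deflections and the 4.10 m story height. The sketch below shows the computation; the deflection values used are illustrative ones chosen to be consistent with the reported ≈85 mm roof deflection and the 1.4%/0.5%/0.2% drift ratios, not the actual Table 5 entries.

```python
def interstory_drift_ratios(floor_defl_mm, story_height_mm=4100.0):
    """Drift ratio of each story from peak floor deflections (ground = 0),
    assuming the peak story deflections occur concurrently."""
    defl = [0.0] + list(floor_defl_mm)            # prepend the ground level
    return [(defl[i + 1] - defl[i]) / story_height_mm for i in range(len(defl) - 1)]

# Illustrative Test 2-like values (2nd floor, 3rd floor, roof), in mm.
for r in interstory_drift_ratios([57.0, 78.0, 85.0]):
    print(f"{100 * r:.1f}%")    # ~1.4%, ~0.5%, ~0.2%
```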
As mentioned earlier, to eliminate the participation of the relatively strong façade in resisting the lateral load, the base connection of the curtainwall façade at each floor level was designed to move along (slide) with the floor above (Figure 5c). Footage from high-speed video cameras focused on the base connections of the curtainwall façade at the west side of the building was reviewed, and it was confirmed that the façade was moving along with the floors. Figure 21 shows the lateral displacement of the first-floor curtainwall façade at the west side of the building relative to the base channel. The relative motion at 125 ms was estimated to be ≈50 mm, which was in phase with the measured displacement of the second floor at the same time (Figure 20a).
The peak reflected specific impulse on the front (south) face of the building (Figure 10) was 1500 kPa × ms, approximately 10% higher than the impulse during the first test. As in the first test, the impulse value was taken as the average from the 11 PGs installed on the front elevation of the building. Multiplying this value by the front face dimensions of the building, 18.3 m wide and 12.3 m tall, the total reflected impulse was 350 MN × ms. Unlike the conventional glazed façade during the first test, the blast-resistant façade absorbed most of the blast impulse, which was eventually transferred to the LFRS of the building. Unfortunately, the load cells that were installed to measure the dynamic reactions that the façade transferred to the frame (Figure 9b) were damaged during the test, and the recorded data were not valid. Nonetheless, the higher impulse that the non-failing façade transferred to the building frame, compared to Test 1, was evident from the generally higher lateral deflections and the partial failure of the first-floor brace connections (Figures 17 and 18). Additionally, the higher impulse transferred to the building was also reflected in the response of the roof spandrel beams on the front (south) elevation of the building, as shown in Figure 22. The roof spandrel beams had an appreciable level of permanent lateral deformation, i.e., weak-axis bending, of approximately 300 mm. Conversely, the same spandrel beams did not show any permanent deformation after the first test. Since the spandrels at the lower floors were connected to the concrete slab through shear studs, their weak-axis bending capacity was higher; hence, they did not sustain any plastic deformation about their weak axis.
Finally, Table 6 shows the peak measured brace forces based on the recorded data from the 24 strain gauges attached to the 12 braces that resisted load in the blast direction (Figure 8). The peak forces for all braces were recorded during the first inbound response cycle. Consequently, for each pair of braces, the negative value corresponds to the brace member in compression and the positive value to the conjugate brace in tension. It is also noted that the base connections of the brace members instrumented with SG3, SG4, SG15, and SG16 completely ruptured (Figures 17d and 18c). Table 6. Test 2-Peak measured brace forces from strain gauge data (refer to Figure 8 for strain gauge positions). 1 IDs within brackets are strain gauges on column line C braces.
2 Taken as the peak measured strain multiplied by the steel elastic modulus (200 GPa) and by the brace cross-sectional area. 3 Taken as the average of the measured peak force from the two strain gauges at each brace member. Underlined values are for braces whose connections ruptured. '×' indicates that the strain gauge data were not valid.
Test 1 and Test 2 Response Comparison and Remarks
Even though the applied blast impulse in the two tests was practically the same, on the order of 1350-1500 kPa × ms, the responses were significantly different. Since the only difference in the test structure between Test 1 and Test 2 was the envelope of the building, it is evident that the hardened (blast-resistant) façade in Test 2 collected and transferred a significantly higher dynamic lateral load to the structural frame than in Test 1, where the conventional glazed façade failed early and therefore transferred less load to the frame. Table 7 summarizes the peak inbound lateral deflections from the two tests. The peak measured deflections during the second test were more than 2.5 times higher than the lateral deflections of the first test, which indicates the considerably higher blast impulse that the building frame absorbed during the second test. In particular, at the second-floor level the difference was more than four times, since the rupture of the gusset plate connections of the first-floor braces (Figures 17 and 18) further increased the lateral displacements. The higher loads that the LFRS resisted during the second test were also reflected in the estimated peak brace axial loads recorded by the array of strain gauges (Figure 8). Table 8 shows a side-by-side comparison of the peak measured brace forces between the two tests. The brace forces during the second test were up to 2.4 times higher than those of the first test. In addition, as previously noted, the LFRS during the first test responded within the elastic regime with no signs of failure, whereas during the second test, the LFRS was partially compromised. Specifically, the gusset plate connections of the first-floor braces were either partially damaged due to local yielding (Figure 19) or completely ruptured along the weld lines (Figures 17d and 18c). The partial damage of the first-floor brace connections acted as a "fuse-link" that limited the peak forces on the LFRS members. If the brace connections had had a higher capacity, it is likely that the brace forces would have been higher. Nonetheless, despite the partially compromised LFRS after the second test, the building remained practically vertical, with less than 3 mm permanent lateral deformation at the roof level as measured from survey scans before and after the test. In terms of occupant survivability, it should be highlighted that the shattered façade during the first test would likely have resulted in injuries or casualties due to the high-velocity debris from the shattered glazed façade, despite the essentially elastic response of the building's frame. On the other hand, during the second test, even though the LFRS was partially compromised and the peak lateral deflections were higher, occupant survivability was expected to be quite high, since the hardened façade protected the interior of the building. Nonetheless, the building after the second test was not to be occupied until after the damaged LFRS was repaired.
Research Limitations
While the full-scale test structure used for this test program constitutes in many ways a worst-case scenario, it may still not be representative of all steel frame building sizes and configurations. This test structure is considered to represent a worst-case scenario for the following reasons:
• It was designed to resist only typical gravity loads and wind loads for a basic wind speed of 51 m/s.
• No provisions for blast or earthquake loads were considered.
• The chevron brace configuration used for the LFRS was found to be one of the most vulnerable brace types under dynamic overload conditions, as suggested by McKay et al. [19].
• The size and number of stories of the structure were also chosen based on the findings of the study by McKay et al. [19]. In similar blast environments, larger structures with more stories are likely to perform better, whereas smaller structures with fewer stories are expected to sustain heavy damage and are prone to total collapse.
• The structure was oriented relative to the applied blast load such that the wider, 18.3 m, face of the building (Figure 10) was directly loaded by the airblast, which resulted in higher blast impulse loads compared to having the airblast directly load the narrower, 12.2 m, face of the building.
More vulnerable steel building configurations and shapes may exist. For example, building geometries with concave shapes or re-entrant corners tend to "collect" more blast load [1]; hence, their response to blast loads is expected to be worse than that of a similar building with flat planar faces, such as the one used for the tests herein. Secondly, while the blast load levels that the building was subjected to during the tests were relatively high, on the order of 1500 kPa × ms, there is no assurance that the same building would not collapse if subjected to even higher blast load levels. Nonetheless, the results from this series of blast tests on the three-story test structure provided valuable data on the response of steel structures at dynamic load levels that exceed their design loads per structural code provisions of standard practice [20,21,29]. These tests demonstrated that steel buildings with three stories or more have the potential to withstand relatively high blast loads that globally sway the structural frame, with a relatively high margin of safety against collapse that would allow evacuation after an attack. However, the building was not considered suitable for immediate occupancy until after its LFRS was fully repaired. Finally, the experimental data from these tests will help improve structural analysis methods, as they can be used to validate existing approaches for assessing the response of structures under shock loads.
Summary and Conclusions
This paper presents the results of two large-scale blast tests that were conducted to evaluate the response of conventionally designed three-story steel frame buildings to relatively high, long-duration blast loads. The LFRS of the test frame was designed only for typical gravity and wind loads, without consideration of blast and/or seismic loads. The two tests were performed on the same steel frame and at the same blast load level. The only difference between the two tests was the envelope of the building. For the first test, the frame was enclosed with a conventional curtainwall glazed façade, while for the second blast test, the building was enclosed with a hardened blast-resistant façade. During the first test, the glazed façade failed early with minimal absorption of the applied blast impulse, and the steel frame responded elastically to the applied blast load. The LFRS was inspected after the test, and no damage was observed. Conversely, during the second test, the non-failing hardened façade absorbed a higher level of blast load, which was transferred to the LFRS of the test structure, resulting in partial damage of some gusset plate connections at the first-floor level. Despite the partial damage of the LFRS, the steel frame resisted the applied blast load and provided a relatively high margin of safety against collapse, since the post-test permanent lateral deflection at the roof level was estimated to be less than 3 mm. The responses of the two tests and the observations made during testing lead to the following remarks and conclusions:
• During the first test, the early failure of the conventional façade limited the blast loads that were transferred to the building; hence, no signs of damage or failure were observed on the LFRS of the test frame. Nonetheless, occupant survivability in that case was expected to be quite low, since most of the glazed façade shattered and debris penetrated the building.
• Due to the early failure of the glazed façade during the first test, the reflected impulse that the non-blast-resistant glazed façade transferred to the structural frame of the building was estimated to be only 10% of the measured reflected impulse at the front face of the building.
• During the second test, the blast-resistant façade sustained the inbound blast pressure with plastic deformation that was within the target performance limit, thereby transferring a considerably higher load to the LFRS of the test frame.
• Owing to the higher dynamic reactions during the second test, the LFRS was partially compromised, with some gusset plate connections of the braces completely rupturing.
• The inter-story drift ratios of both tests were compared with the drift ratio limits of ASCE 41-06 [31] for the different performance levels. The code-based drift ratio limits and their associated damage descriptions were consistent with the damage levels observed in the two tests.
• Despite the partial compromise of the LFRS during the second test, the building did not show any signs that it was at a near-collapse state, indicating that the safety margin against collapse was relatively high and would allow evacuation after an attack. Pre- and post-test survey data suggested that the steel frame had a residual permanent plastic deformation at the roof level of only approximately 3 mm.
• Due to the partial damage of the LFRS during the second test, the building was not considered suitable for immediate occupancy until after its LFRS was fully repaired.
• While the building used for the test program was considered a worst-case scenario, since its LFRS was designed for typical wind loads only without any provisions for blast or seismic loads, other building configurations of similar size, e.g., with concave shapes, may exist and may have a less favorable response.
"year": 2021,
"sha1": "67287af8a3bfb6f5f30ee57f5e7ca9271357e264",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2571-631X/4/4/49/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1e20bdcecded32e8b744cdd0e80ef150a55674d0",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
Congenital agenesis of pubis and bilateral cryptorchidism: A case report
Highlights
• Agenesis of the pubis is a very rare clinical deformity which can be a sign of urogenital abnormalities. Reporting such a rare condition helps clinicians diagnose co-morbidities that may affect the child's remaining life.
• This is a rare pattern of associated anomalies confined to a localized region of the body. Somatic mutations may be responsible for developmental abnormalities of the mesoderm, from which the pubic bones and urogenital structures develop.
• An isolated X-ray finding of pubic ramus agenesis may be associated with cryptorchidism or several other urogenital malformations.
Introduction
The pubis is the lowest and most anterior portion of the hip bones of the pelvis. The pubis has a body, a superior ramus, and an inferior ramus. The body of the pubis contributes to the lunate surface and acetabular fossa in the acetabulum. Ossification of the pubic bone begins in the 18th to 20th gestational week. The superior pubic and ischial rami of a full-term neonate are usually ossified. 1 The ischiopubic component of the pelvis starts to develop antenatally between the fifth and sixth months of fetal life from two ossification centers, an ischial (inferolateral) and a pubic (superomedial) center. At birth, the ossification is almost complete; however, the ischial and pubic segments remain separated by a cartilaginous tissue, the ischiopubic synchondrosis. Ossification and closure of the ischiopubic synchondrosis are variable, usually occurring between 4 and 12 years of age. 2 Agenesis of the pubic bone, as evidenced in the world literature, is a very rare clinical and congenital abnormality. Congenital agenesis of the pubis may present either as an isolated anomaly or as a syndromic constituent. Several disorders may occur with hypoplasia of the pubis. Some of these are exstrophy of the bladder, epispadias, hypospadias, small patella syndrome, achondrogenesis types 1 and 2, rib anomalies, multiple segmental defects of the spine, hypochondrogenesis, campomelic dysplasia, hypophosphatasia, undescended testes, acetabular dysplasia, and congenital dislocation of the hip. This case demonstrates a rare condition of congenital unilateral agenesis of the superior ramus with bilateral undescended testes, osteoporosis, acetabular dysplasia, and an abnormal gait pattern.
Case report
In 2002, a 5-year-old boy was admitted to our clinic with a limp and external rotation of the right leg. His parents reported that he had been born at 41 weeks by normal spontaneous vaginal delivery and that he was a healthy newborn. He was taken to a hospital because of external rotation of his right leg when he was six months old, where he received follow-up for right hip acetabular dysplasia. A physical examination by a urologist revealed bilateral undescended testes when he was one year old. The testosterone level was <20 ng/dl, LH was 0.16 mIU/ml, FSH was 2.81 mIU/ml, and inhibin B was <10 ng/l. He was put on human chorionic gonadotropin (hCG/Pregnyl ® ) 1500 IU per day for four days to stimulate testicular descent. After the stimulation, the testosterone level was again <20 ng/dl and no testes were palpable in the scrotum. The patient underwent inguinal and pelvic exploration by the urologist; rudimentary testicular tissue was found superior to the bladder, and the chromosomal analysis was 46,XY. No testosterone secretion was detected.
The boy was five years old when he was admitted to the orthopedics clinic; at the physical examination, his right hip abduction was 45°, flexion was 135°, internal rotation was 50° and external rotation was 90°. Clinically, the right lower limb was 2 cm shorter. The Trendelenburg test was positive and the right lower limb was in slight external rotation while he walked. There was an old incision scar in the patient's right and left inguinal regions and neither testis could be palpated in the scrotum. There was a slight contracture of the iliopsoas tendon on the right side, which was diagnosed with a Thomas test. The neurologic and mental examinations were within normal limits. On the antero-posterior pelvic X-ray, the superior ramus of the right pubic bone was not detected. An acetabular index (AI) of 20° on the right and 16° on the left hip was recorded at the initial radiologic examination (Fig. 1). Imaging of the lumbar region was performed; no abnormality was found and no secondary structural change was noted (Fig. 2). Cranial MRI showed a 4 cm × 5 cm arachnoid cyst in the occipital region. His T score was −5.1 in the L2-L4 region for bone mineral density and his bone age was 3.5 years. Compared to his peers, he was considered osteoporotic and treatment was begun. Genetic screening detected no additional syndrome. Gait analysis of the patient showed that the right hip extensor muscle groups were dominant, and anterior pelvic tilt (Fig. 3) and right hip abduction were detected in the stance phase (Fig. 4). The iliopsoas muscle group was considered to be short according to the gait analysis. The patient was followed at intervals of six months, and an arthrogram was performed for dynamic fluoroscopic examination of the right hip when he was six years old. In surgery, the AI was measured at 21° on the right side by fluoroscopic evaluation. The femoral head was well covered with acetabular cartilage, his hip range of motion was full, and spherical congruency between the femoral head and acetabulum was present (Figs. 5A and B and 6).
At 14 years old, the boy was fully ambulatory, with 2 cm of right lower extremity shortening and a slight Trendelenburg gait, and had no sign of pain. He is currently being followed by pediatric endocrinology for testicular regression syndrome. His T score had improved to −3.2 in the L2-L4 region with bisphosphonate and vitamin D treatment. His bone age was 13 years, his testosterone level was <20 ng/dl, and he was still on testosterone replacement therapy. His AI had improved to 18° on the right and 15° on the left hip. His center-edge angle (CEA) was 24° on the right and 26° on the left hip (Fig. 7A and B). No additional surgery was needed.
Discussion
The abnormality of our patient was congenital unilateral agenesis of the superior ramus of the pubic bone with bilateral undescended testes, osteoporosis, cranium malformations, acetabular dysplasia and an abnormal gait pattern.
Recognition of agenesis of the pubic bones is of clinical importance, because bone abnormalities can be seen in conjunction with other musculoskeletal and urogenital abnormalities such as teratologic hip dislocation, patellar hypoplasia, undescended testes, and hypospadias. 3,4 Sarban et al. 4 found that the absence of the pubic bone may be a cause of acetabular dysplasia. Their case presented with teratologic left hip dislocation, an undescended left testis, hypospadias and left pubic bone aplasia. They performed open reduction with capsular plication when the infant was eighteen months old. The AI was within the normal range and the infant was fully ambulatory at the last follow-up. Sponseller et al. 5 compared computerized tomography scans of the pelvis of 24 patients who had exstrophy of the bladder with scans of age-matched controls. They found 30% shortening of the pubic rami and progressive diastasis of the symphysis pubis. In the literature, such urologic abnormalities have been reported together with agenesis of pelvic structures. Similarly to our case, Yildiz et al. 6 reported the rare entity of undescended testes with agenesis of the pubic rami. In conclusion, we suggest that this is a rare pattern of associated anomalies confined to a localized region of the body. Somatic mutations may be responsible for developmental abnormalities of the mesoderm from which the pubic bones and urogenital structures develop. An isolated X-ray finding of ramus pubis agenesis may be associated with cryptorchidism or several other urogenital malformations.
Conflict of interest
Each author certifies that he or she has no commercial associations that might pose a conflict of interest in connection with the submitted article.
Funding
This study has not been published elsewhere, nor has it been accepted for publication or placed under consideration by another publication. There is no commercial association that might pose a conflict of interest in connection with the manuscript and the data.
Ethical approval
Written informed consent was obtained from the patient for publication of this case report and accompanying images.
Fig. 7. (A and B) Last follow-up X-rays show 2 cm of right lower extremity shortening, pelvic obliquity, and acetabular dysplasia with a horizontal sourcil.
Author contributions
Yavuz Saglam M.D. contributed to the study design, data collection, data analysis and writing of the manuscript. Murat Dursun M.D. contributed to the study design, data analysis, and writing. Goksel Dikmen M.D. contributed to data collection and data analysis. Suleyman Bora Goksan M.D. contributed to data analysis and writing.
Key learning points
• Agenesis of pubis is a very rare clinical deformity which can be a sign of urogenital abnormalities.
• Pubic bones and urogenital structures develop from mesoderm.
• Somatic mutations may be responsible for developmental abnormalities of mesoderm.
• An isolated X-ray finding of ramus pubis agenesis may be associated with cryptorchidism. | 2018-04-03T04:58:08.672Z | 2014-08-15T00:00:00.000 | {
"year": 2014,
"sha1": "8d3de63bfea9ec58e8783fd59c543b848a1ff020",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ijscr.2014.07.025",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8d3de63bfea9ec58e8783fd59c543b848a1ff020",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118635262 | pes2o/s2orc | v3-fos-license | A possible solution to the second entanglement paradox
Entangled states are in conflict with a general physical principle which expresses that a composite entity exists if and only if its components also exist, and the hypothesis that pure states represent the actuality of a physical entity, i.e., its 'existence'. A possible way to solve this paradox consists in completing the standard formulation of quantum mechanics, by adding more pure states. We show that this can be done, in a consistent way, by using the extended Bloch representation of quantum mechanics, recently introduced to provide a possible solution to the measurement problem. Hence, with the solution proposed by the extended Bloch representation of quantum mechanics, the situation of entangled states regains full intelligibility.
GPP2 A composite physical entity S, formed by two sub-entities S A and S B , is said to exist at a given moment if and only if S A and S B exist at that moment.
On the other hand, according to standard quantum mechanics (SQM), the following two principles are also assumed to hold: SQMP1, according to which the ray-states of the entity's Hilbert space are pure states and all pure states are of this kind, and SQMP2, according to which a composite entity whose sub-entities are in ray-states is itself in the corresponding product ray-state. The paradox results from the observation that the above four principles are not compatible with each other, precisely because of the existence of entangled states. Indeed, if the composite entity S is in the entangled state
$|\psi\rangle = a_1 e^{i\alpha_1}|\psi_A\rangle\otimes|\phi_B\rangle + a_2 e^{i\alpha_2}|\phi_A\rangle\otimes|\psi_B\rangle$, (1)
with a_1, a_2, α_1, α_2 ∈ R, 0 ≤ a_1, a_2 ≤ 1, a_1^2 + a_2^2 = 1, |ψ_A⟩, |φ_A⟩ ∈ H_A, |ψ_B⟩, |φ_B⟩ ∈ H_B, ⟨ψ_A|φ_A⟩ = ⟨ψ_B|φ_B⟩ = 0, then, considering that |ψ⟩ is a ray-state, by the SQMP1 it describes a pure state of S. By the GPP1, we know that S exists, and by the GPP2, the two sub-entities S_A and S_B also exist. But then, by the SQMP1, S_A and S_B are in ray-states, and by the SQMP2, S is in a product state, which is a contradiction.
Facing this conflict between the above four principles, a possible strategy is that of considering that the GPP2 does not have general validity, in the sense that when a composite entity is in an entangled state, its sub-entities would simply cease to exist, in the same way that two water droplets cease to exist when fused into a single larger droplet. However, this strategy is not fully consistent, as two quantum entities do not completely disappear when entangled, considering that there are properties associated with the pair that remain always actual.
For instance, we are still in the presence of two masses, which can be separated by a large spatial distance. So, entanglement is neither a situation where two masses are completely fused together, nor a situation where a spatial connection would bond them together, making it difficult to spatially separate them (as in chemical bonds). Also, entangled entities can remain perfectly correlated, and the property of "being perfectly correlated" is clearly a property which is meaningful only when we are in the presence of two existing entities.
In other terms, we cannot affirm that in an entangled state the composing entities cease to exist, and it is very difficult to even conceive a situation where the GPP2 would cease to apply. Even the above example of two droplets of water fused together cannot be considered as a counterexample, as in this case we are not really allowed to describe the larger droplet, once formed, to be the combination of two actual sub-droplets. So, it doesn't seem reasonable to abandon the GPP2, and of course it is neither reasonable to abandon the GPP1, which is almost tautological if we consider that a pure state, by definition, describes the objective condition in which an entity is in a given moment, precisely because of its actual existence.
Regarding the GPP1, it could be objected that also an entity which is not in a pure state can exist. This is correct, but we have here to understand that the term 'pure state' is a sort of pleonasm, as if we consider the notion of state at its most fundamental level, i.e., as a description of the reality of an entity, in a given moment, then all states are by definition pure states. Indeed, the non-pure states (i.e., the statistical mixtures of states), only describe our subjective lack of knowledge regarding the actual condition of the physical entity under consideration. Even when we are ignorant about the objective condition of an entity, i.e., its pure state, we can of course still say that the entity exists. However, and this is how the GPP1 should be understood, if the entity exists, in a given moment, then it must be possible (at least in principle, and independently of our subjective 'state of knowledge') to characterize its potential behavior under all possible interactions, and such characterization is precisely what a pure state is all about.
So, the only way to resolve the paradox seems to be that of revisiting the SQMP1. To do so, we start by observing that there is a well-defined procedure in SQM that allows one to associate individual states to entangled sub-entities. If, say, we are only interested in the description of S_A, irrespective of its correlations with S_B, all we have to do is to take a partial trace. For this, one has to rewrite the ray-state (1) in operatorial form. Defining $D_\psi = |\psi\rangle\langle\psi|$, one finds
$D_\psi = a_1^2\,|\psi_A\phi_B\rangle\langle\psi_A\phi_B| + a_2^2\,|\phi_A\psi_B\rangle\langle\phi_A\psi_B| + D_{\mathrm{int}}$, (2)
where the interference contribution is given by
$D_{\mathrm{int}} = a_1 a_2\,(e^{-i\alpha}|\psi_A\phi_B\rangle\langle\phi_A\psi_B| + e^{i\alpha}|\phi_A\psi_B\rangle\langle\psi_A\phi_B|)$, (3)
with α = α_2 − α_1. The state of S_A, irrespective of its correlations with S_B, can then be naturally defined by taking the partial trace: D_A = Tr_B D_ψ, and similarly for S_B: D_B = Tr_A D_ψ. A simple calculation yields
$D_A = a_1^2\,|\psi_A\rangle\langle\psi_A| + a_2^2\,|\phi_A\rangle\langle\phi_A|$, $\quad D_B = a_1^2\,|\phi_B\rangle\langle\phi_B| + a_2^2\,|\psi_B\rangle\langle\psi_B|$. (4)
However, (4) cannot be considered to solve the second entanglement paradox, as it is clear that the reduced one-entity states D_A and D_B will not in general be ray-states, but density operators, and by the SQMP1 we cannot interpret them as pure states. Therefore, we cannot use the GPP1 to decree the existence of S_A and S_B.
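To make the reduction (4) concrete, the following minimal two-qubit computation builds the ray-state (1), traces out one sub-entity, and checks that the resulting reduced operators are genuine rank-two density operators rather than ray-states. The basis vectors, amplitudes and phase below are arbitrary illustrative choices, not quantities taken from [6].

```python
import numpy as np

# Orthonormal one-entity states; the basis choice, amplitudes and phase are illustrative only.
psi_A, phi_A = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi_B, phi_B = np.array([1.0, 0.0]), np.array([0.0, 1.0])
a1, a2, alpha = np.sqrt(0.3), np.sqrt(0.7), 0.8            # a1^2 + a2^2 = 1

# Entangled ray-state (1) and its operatorial form (2)
psi = a1 * np.kron(psi_A, phi_B) + a2 * np.exp(1j * alpha) * np.kron(phi_A, psi_B)
D_psi = np.outer(psi, psi.conj())

def partial_trace(rho, keep, dims=(2, 2)):
    """Trace out one factor of a bipartite density matrix (keep=0 -> D_A, keep=1 -> D_B)."""
    r = rho.reshape(dims + dims)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

D_A, D_B = partial_trace(D_psi, 0), partial_trace(D_psi, 1)
print(np.round(D_A, 3))                 # a1^2 |psi_A><psi_A| + a2^2 |phi_A><phi_A|, as in (4)
print(np.round(D_B, 3))                 # a1^2 |phi_B><phi_B| + a2^2 |psi_B><psi_B|, as in (4)
print(np.linalg.matrix_rank(D_A))       # rank 2: the reduced state is not a ray-state unless a1*a2 = 0
```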
The following questions then arise: To save the intelligibility of the entangled states, shouldn't we complete the SQM by also allowing density operators to describe pure states? And more importantly: Do we have sufficient physical arguments to consider such a completed quantum mechanics (CQM), and can it be formulated in a sufficiently general and consistent way? It is the purpose of this article to provide positive answers to the above questions, showing that the second entanglement paradox can be solved.
For this, we start by observing that the first reason to consider that density operators should also describe pure states is precisely the existence of the above mentioned partial trace procedure. Indeed, there is no logical reason why by focusing on a component of a system in a well-defined pure state, taking a partial trace, we would suddenly become ignorant about the condition of such component.
Another important reason is that a same density operator admits infinitely many representations as a mixture of one-dimensional projection operators [5]. This immediately suggests that the mixture interpretation is generally inappropriate, not only because it remains ambiguous, but also, and especially, because it fails to capture the dimension of potentiality that a density operator is able to describe.
Another relevant observation is that composite entities in ray-states can undergo unitary evolutions such that the evolution inherited by their sub-entities will make them continuously go from ray-states to density operator states, and return; a situation hardly compatible with the statistical ignorance interpretation of the density operators (see [3], sect. 7.5).
But we think there is an even more important reason to consider that the density operators can also describe pure states: if we do so, it becomes possible to derive the Born rule and provide a solution to the measurement problem, as recently demonstrated in what we have called the extended Bloch representation of quantum mechanics [6].
Let us briefly explain how this works. As is known, the ray-states of two-dimensional systems (like spin-1/2 entities) can be represented as points at the surface of a 3-dimensional unit sphere, called the Bloch sphere [7], with the density operators being located inside of it. What is less known is that a similar representation can be worked out for general N-dimensional systems [8]. The 3-d Bloch sphere is then replaced by a (N^2 − 1)-dimensional unit sphere, with the difference that, for N > 2, only a convex portion of it is filled with states.
When this generalized Bloch sphere representation is adopted, as an alternative way to represent the quantum states, it can be further extended by also including the measurements. These are geometrically described as (N − 1)-simplexes inscribed in the sphere, whose vertices are the eigenvectors of the measured observables. These measurement simplexes, in turn, can be viewed as abstract structures made of an unstable and elastic substance, and it can be shown that an ideal quantum measurement is a process where the abstract point particle representative of the state first plunges into the sphere, in a deterministic way, along a path orthogonal to the simplex, then attaches to it, and following its indeterministic disintegration, and consequent collapse, is brought to one of its vertices, thus producing the outcome of the measurement, in a way that is perfectly consistent with the Born rule and the projection postulate [6].
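As a toy numerical check of this last statement in the simplest case N = 2, where the measurement simplex reduces to the diameter of the Bloch sphere joining the two outcome vertices, one can assume (following our reading of the mechanism of [6]) that the on-sphere point first projects orthogonally onto this segment and that the segment then breaks at a uniformly distributed random point, the particle being carried to the vertex anchoring the piece on which it sits. The angle and sample size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.1                         # angle between the state point and the "+1" outcome vertex (arbitrary)
x = np.cos(theta)                   # orthogonal "landing" position of the point particle on the segment [-1, +1]

n = 200_000
breaks = rng.uniform(-1.0, 1.0, n)  # uniformly distributed breaking point of the unstable elastic segment
outcome_plus = breaks < x           # the particle sits on the piece anchored at +1 and is carried to that vertex

print("simulated P(+1):", outcome_plus.mean())        # ~ (1 + cos(theta)) / 2
print("Born rule  P(+1):", np.cos(theta / 2) ** 2)    # |<+|psi>|^2 for a two-dimensional entity
```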
We will not describe here the details of this 'hidden-measurement mechanism', as this is not the scope of this article. We only emphasize that its functioning requires the point particle representative of the state to move from the surface to the interior of the sphere, then back to the surface, thus implicitly ascribing the status of pure states also to the density operators.
In other terms, if we take seriously the extended Bloch representation, we can say in retrospect that a key obstacle in our understanding of quantum measurements is that SQM was not considering all the possible pure states that can describe the condition of a physical entity, and that the missing ones were precisely those located inside of the generalized Bloch sphere, i.e., the density operators.
Our final and in a sense most important argument in favour of the 'density operators are pure states' interpretation is about showing that within the extended Bloch formalism a composite entity in an entangled state is naturally described as a system formed by two components that always remain in well defined states, precisely corresponding to the reduced states (4), plus a third 'element of reality' describing their non-local correlation. For this, we start by observing that (2) can be written as [6]:
$D_\psi = \frac{1}{N}\left(I + c_N\, \mathbf{r}\cdot\boldsymbol{\Lambda}\right)$, $\quad c_N = \sqrt{\tfrac{N(N-1)}{2}}$, (5)
where the real unit vector r is the representative of the ray-state D_ψ within the generalized Bloch sphere B_1(R^{N^2−1}), and the components of the operator-vector Λ are (a determination of) the generators of SU(N), the special unitary group of degree N, which are self-adjoint, traceless matrices obeying Tr Λ_i Λ_j = 2δ_ij, i, j ∈ {1, . . . , N^2 − 1}, forming a basis, together with the identity operator I, for all the linear operators on H = C^N.
In the same way, with H_A = C^{N_A}, H_B = C^{N_B}, N = N_A N_B, we can define the Bloch vectors associated with the one-entity states, where the Λ^A_i are the N_A^2 − 1 generators of SU(N_A), the Λ^B_j are the N_B^2 − 1 generators of SU(N_B), and I_A and I_B are the identity operators on H_A and H_B, respectively. Note that for N = 2, the components Λ_i in (5) are simply the Pauli matrices, and r is a vector in the usual three-dimensional Bloch sphere.
At this point, we observe that it is possible to use the remarkable property that the trace of a tensor product is the product of the traces, to construct a determination of the SU(N) generators in terms of tensor products of the generators of SU(N_A) and SU(N_B). More precisely, defining the N^2 self-adjoint N × N matrices
$\Lambda_{(i,j)} = \tfrac{1}{\sqrt{2}}\,\Lambda^A_i \otimes \Lambda^B_j$, $\quad \Lambda^A_0 \equiv \sqrt{\tfrac{2}{N_A}}\, I_A$, $\quad \Lambda^B_0 \equiv \sqrt{\tfrac{2}{N_B}}\, I_B$,
with i ∈ {0, . . . , N_A^2 − 1} and j ∈ {0, . . . , N_B^2 − 1}, it is easy to check that, apart from Λ_(0,0) = (2/N)^{1/2} I, the remaining N^2 − 1 matrices are all traceless, mutually orthogonal and properly normalized, and therefore constitute a bona fide determination of the generators of SU(N) that can be used in (5), to express the components of the vector r, representative of the composite entity's state.
Using the orthogonality of the generators, the components of r are given by $r_k = \tfrac{N}{2 c_N}\,\mathrm{Tr}\, D_\psi \Lambda_k$, and similarly for the components of the Bloch vectors representing the sub-entities' states. With a direct calculation, one can then show that the entangled state r is of the tripartite direct sum form
$\mathbf{r} = \mathbf{r}_A \oplus \mathbf{r}_B \oplus \mathbf{r}_{\mathrm{corr}}$. (6)
In (6), the vector $\mathbf{r}_A = a_1^2\, r_A + a_2^2\, s_A$ belongs to the one-entity Bloch sphere B_1(R^{N_A^2−1}), and describes the state of S_A, whereas the vector $\mathbf{r}_B = a_1^2\, s_B + a_2^2\, r_B$ belongs to the one-entity Bloch sphere B_1(R^{N_B^2−1}), and describes the state of S_B. On the other hand, $\mathbf{r}_{\mathrm{corr}}$ is the component of the state which describes the correlation between the two sub-entities, and is of the form (7), namely the sum of a contribution $\mathbf{r}_{AB} = a_1^2\, r^{AB}_1 + a_2^2\, r^{AB}_2$ and an interference contribution $\mathbf{r}_{\mathrm{int}}$. If the first two one-entity generators are chosen appropriately (see [6]), $\mathbf{r}_{\mathrm{int}}$ only has four non-zero components, for a suitably chosen order of the joint-entity generators. According to (6), and different from the SQM formalism, we see that the extended Bloch representation allows one to describe an entangled state as a "less tangled" condition in which the two sub-entities are always in the well-defined states $\mathbf{r}_A$ and $\mathbf{r}_B$, belonging to their respective one-entity Bloch spheres, which are clearly distinguished from their correlation, described by the vector (7), which cannot be deduced from the states of the two sub-entities, in accordance with the general principle that the whole is greater than the sum of its parts (so that the states of the parts cannot generally determine the state of the whole).
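The decomposition just described can also be checked numerically. The sketch below does so for two entangled qubits (N_A = N_B = 2), building the joint generators from tensor products of Pauli matrices as discussed above; the amplitudes are illustrative, and the overall normalization conventions follow the choices made here, so individual components may differ from those of [6] by constant factors.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [sx, sy, sz]                                        # SU(2) generators, Tr(s_i s_j) = 2 delta_ij

# Joint-entity generators built from tensor products (each rescaled by 1/sqrt(2) so that Tr = 2 delta)
lam_A  = [np.kron(s, I2) / np.sqrt(2) for s in pauli]       # Lambda_(i,0)
lam_B  = [np.kron(I2, s) / np.sqrt(2) for s in pauli]       # Lambda_(0,j)
lam_AB = [np.kron(si, sj) / np.sqrt(2) for si, sj in product(pauli, pauli)]   # Lambda_(i,j)
gens = lam_A + lam_B + lam_AB                               # 15 = N^2 - 1 generators for N = 4

# Entangled ray-state (1) with illustrative amplitudes
a1, a2, alpha = np.sqrt(0.3), np.sqrt(0.7), 0.8
psi = a1 * np.kron([1.0, 0.0], [0.0, 1.0]) + a2 * np.exp(1j * alpha) * np.kron([0.0, 1.0], [1.0, 0.0])
D = np.outer(psi, psi.conj())

N = 4
cN = np.sqrt(N * (N - 1) / 2)                               # c_N of equation (5)
r = np.array([(N / (2 * cN)) * np.trace(D @ g).real for g in gens])

print("|r| =", np.linalg.norm(r))                           # = 1: the ray-state lies on the sphere's surface
print("S_A block   :", np.round(r[0:3], 3))                 # proportional to the Bloch vector of D_A
print("S_B block   :", np.round(r[3:6], 3))                 # proportional to the Bloch vector of D_B
print("correlation :", np.round(r[6:], 3))                  # 9 components; 15 = 3 + 3 + 9 for two qubits
```

Note that the 15 components split as 3 + 3 + 9, in agreement with the dimension count given below for the two-qubit case.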
We can observe that the interference contribution $\mathbf{r}_{\mathrm{int}}$ is what distinguishes the entangled state (1)-(2) from the separable state $D^{\mathrm{sep}}_\psi = a_1^2\,|\psi_A\phi_B\rangle\langle\psi_A\phi_B| + a_2^2\,|\phi_A\psi_B\rangle\langle\phi_A\psi_B|$. However, even when the interference contribution is zero, the separable (but non-product) state $D^{\mathrm{sep}}_\psi$ does not describe a situation of two experimentally separated entities, as it is clear that the state vector $\mathbf{r}_A$ is not independent from the state vector $\mathbf{r}_B$, since their components both contain the parameters a_1^2 and a_2^2. Also, the components of $\mathbf{r}_{AB}$ cannot be deduced from the knowledge of the components of $\mathbf{r}_A$ and $\mathbf{r}_B$, which means that a separable state is not a separated state, but a state that still describes a situation where the whole is greater than the sum of its parts.
Of course, when a_2 = 0 (or a_1 = 0), we are back to the situation of a so-called product state. This manifests at the level of the Bloch representation in the fact that, when a_2 = 0, $\mathbf{r}_A = r_A$ and $\mathbf{r}_B = s_B$, with the two sub-entity states r_A and s_B now totally independent from one another, and able to fully determine the joint-entity contribution $r^{AB}_1$, so that there are no genuine emergent properties in this case.
Therefore, considering the general description (6), and the previously mentioned arguments in favor of the 'density operators are pure states' interpretation, we are now in a position to formulate a completed quantum mechanical principle, in replacement of the SQMP1, which can restore the full intelligibility of entangled states: CQMP1 If S is a physical entity with Hilbert space H, then each density operator of H is a pure state of S and all the pure states of S are of this kind.
Of course, the CQMP1 does not imply that a density operator cannot be used to also describe a situation of subjective ignorance of the experimenter, regarding the pure state of an entity. It simply means that within the quantum formalism a same mathematical object can be used to model different situations, which are not easy to experimentally distinguish, because of the linearity of the trace used to calculate the transition probabilities.
However, a distinction between 'pure state-density operators' and 'mixed state-density operators' is in principle possible, for instance if one can set up an experimental context producing a non-linear evolution of the states, as in this case mixtures and pure states will evolve in a different way, and their ontological difference can become observable (see [6] for some additional considerations regarding the distinguishability of pure and mixed states in a measurement context).
It is worth mentioning that a completed quantum mechanics retaining all the principles of SQM, apart from the SQMP1, which is to be replaced by the CQMP1, was already proposed by one of us many years ago [4]. At the time the proposal was motivated by the existence of mechanistic classical laboratory situations able to violate Bell's inequalities exactly as quantum entities in EPR-experiments can do [9]. Today, considering that the 'density operators are pure states' interpretation is an integral part of the extended Bloch representation, which provides a possible solution to the measurement problem [6] and, as we have shown in this article, also allows one to obtain a partitioning of the entangled states where their correlative aspects remain clearly and naturally "disentangled" from the description of the sub-entities' states, we believe that the proposal has reached the status of a firmly founded scientific hypothesis, only waiting for an experimental confirmation.
A last remark is in order. Even in the simplest case of two entangled qubits (N A = N B = 2), where the two one-entity states r A and r B can be represented within our 3-dimensional Euclidean space (for instance as directions, in the case of spins), the correlation vector r corr is already 9-dimensional, and therefore is no longer describable within our Euclidean theatre. This is in accordance with the observed non-local effects that are produced by entangled entities, which are insensitive to spatial separation, and which therefore should be understood as effects resulting from the existence of genuinely non-spatial correlations.
In other terms, the solution we have proposed to the second entanglement paradox, via the extended Bloch representation, also suggests that non-locality should be understood as a manifestation of the non-spatial nature of quantum entities. This means that our approach also offers a possible solution to the first entanglement paradox, as it is clear that if quantum entities are non-spatial entities, then their interconnections, when in entangled states, need not happen through space, and therefore can remain perfectly insensitive to spatial separation. | 2016-01-19T21:05:54.000Z | 2015-02-22T00:00:00.000 | {
"year": 2015,
"sha1": "53975ac13439ed476d3de4c0ef35dc2c0cf6f979",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1502.06249",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "53975ac13439ed476d3de4c0ef35dc2c0cf6f979",
"s2fieldsofstudy": [
"Physics",
"Philosophy"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
44886994 | pes2o/s2orc | v3-fos-license | Differential activation of mitogen-activated protein kinase and S6 kinase signaling pathways by 12-O-tetradecanoylphorbol-13-acetate (TPA) and insulin. Evidence for involvement of a TPA-stimulated protein-tyrosine kinase.
AG-18, an inhibitor of protein-tyrosine kinases, was employed to study the role of tyrosine-phosphorylated proteins in insulin- and phorbol ester-induced signaling cascades. When incubated with Chinese hamster ovary cells overexpressing the insulin receptor, AG-18 reversibly inhibited insulin-induced tyrosine phosphorylation of insulin receptor substrate-1, with minimal effects either on receptor autophosphorylation or on phosphorylation of Shc64. Under these conditions, AG-18 inhibited insulin-stimulated phosphorylation of the ribosomal protein S6, while no inhibition of insulin-induced activation of mitogen-activated protein kinase (MAPK) kinase or MAPK was detected. In contrast, 12-O-tetradecanoylphorbol-13-acetate (TPA)-induced activation of MAPK kinase and MAPK and phosphorylation of S6 were inhibited by AG-18. This correlated with inhibition of TPA-stimulated tyrosine phosphorylation of several proteins, the most prominent ones being pp114 and pp120. We conclude that Tyr-phosphorylated insulin receptor substrate-1 is the main upstream regulator of insulin-induced S6 phosphorylation by p70s6k, whereas MAPK signaling seems to be activated in these cells primarily through the adaptor molecule Shc. In contrast, TPA-induced S6 phosphorylation is mediated by the MAPK/p90rsk cascade. A key element of this TPA-stimulated signaling pathway is an AG-18-sensitive protein-tyrosine kinase.
p70 s6k is not activated by MAPK and appears to lie on a separate signaling pathway (16). One of its upstream activators is IRS-1, whose phosphorylation by insulin receptor kinase creates a binding site for the SH2 domains of the p85 regulatory subunit of phosphatidylinositol 3-kinase (PI3K) (17,18). The association between p85 and IRS-1 results in activation of PI3K (17,18) via a mechanism independent of the direct activation of PI3K by Ras (19). Activation of PI3K then stimulates p70 s6k by an as yet unknown mechanism (20,21).
A large variety of extracellular signals, aside from insulin, lead to ribosomal S6 phosphorylation. One example is the tumor-promoting phorbol ester (TPA) that activates the Ca2+- and phospholipid-dependent protein kinase C (22). Although protein kinase C has been implicated as playing an important role in insulin-induced activation of the MAPK cascade (23), other studies suggest that insulin (24) and insulin-like growth factor I (25) activate the MAPK cascade independent of protein kinase C.
To study the relative contribution of MAPK/p90 rsk and PI3K/ p70 s6k to insulin-and TPA-induced stimulation of S6 phosphorylation, we made use of tyrphostins, synthetic competitive inhibitors of several tyrosine kinases (see Ref. 26 for review) that inhibit insulin receptor kinase activity in vitro (27) and block insulin-induced lipogenesis and anti-lipolysis in fat cells (28). Employing Chinese hamster ovary (CHO) cells that overexpress the wild-type insulin receptor gene (CHO.T) (29), we found that AG-18 effectively inhibits insulin-induced IRS-1 phosphorylation as well as S6 kinase activity. On the other hand, insulin induction of the MAPK cascade is not affected by AG-18. The phorbol ester TPA also stimulates S6 kinase activity that is inhibited by AG-18, but unlike the insulin stimulation, this inhibition is correlated with the inhibition of the MAPK cascade. These results implicate a bifurcation in insulin signaling where IRS-1 mediates, to a large extent, S6 phosphorylation via p70 s6k , while Shc may mediate a signaling pathway leading to MAPK/p90 rsk activation. In contrast, the TPA effect on S6 phosphorylation seems to be transmitted via the MAPK/ p90 rsk pathway. Most important, a protein-tyrosine kinase, whose activity is inhibited by AG-18, is one of the elements linking protein kinase C to the MAPK cascade.
Cell Cultures-Chinese hamster ovary cells, transfected with a wildtype human insulin receptor gene (CHO.T (29)), were a generous gift of Dr. William J. Rutter (University of California, San Francisco). The cells were grown in F-12 medium as described previously (29).
Purification of Insulin Receptor-Insulin receptors were partially purified from rat liver plasma membranes. The preparation of membranes, solubilization in Triton X-100, and affinity chromatography of insulin receptors on wheat germ agglutinin coupled to agarose were carried out as described previously (32).
Phosphorylation of Exogenous Substrates-The reaction was carried out as described previously (33). Briefly, partially purified insulin receptors (50-200 µg/ml) were incubated in the presence or absence of 10^-7 M insulin (in 50 mM Hepes, 0.1% bovine serum albumin, 0.1% Triton X-100, pH 7.6, for 30 min at 22°C) and the indicated concentrations of AG-18 (dissolved in 10% Me2SO). Control tubes were incubated with Me2SO at a final concentration of 1%. Phosphorylation in a final volume of 100 µl was initiated with 20 µl of a reaction mixture to yield the following final concentrations: 50 µM [γ-32P]ATP, 1 mM CTP, 40 mM magnesium acetate, 0.1% Triton X-100, and 0.2 mg/ml poly(Glu,Tyr) (4:1). Reactions were allowed to proceed for 10 min at 22°C and were terminated by spotting 80 µl onto Whatman No. 3MM filter papers that were extensively washed in 10% trichloroacetic acid, rinsed in ethanol, dried, and counted by liquid scintillation in a Betamatic counter.
Treatment of Intact Cells with Tyrphostins-Cells, grown in 60-mm plates, were deprived of serum for 16 h before each experiment. 2 h before treatment, the medium was replaced with fresh medium supplemented with 1 mg/ml bovine serum albumin (Sigma radioimmunoassay-grade). Cells were then incubated with or without tyrphostins for an additional 4 h. Tyrphostins were supplemented as 100 ϫ concentrated solutions dissolved in 50 mM Hepes, pH 7.6, and 10% Me 2 SO. Final Me 2 SO concentrations during the incubation did not exceed 0.1%. In some of the experiments, when tyrphostins were present for periods of 16 h, they were added at the time of serum deprivation and were kept throughout the incubations. Following stimulation with the appropriate ligands, cells were washed three times with ice-cold PBS and frozen in liquid nitrogen. Preliminary experiments indicated that this concentration of Me 2 SO did not have any adverse effects on the biological activities under study.
Western Blotting-Samples were boiled in Laemmli sample buffer (34) containing 20 mM dithiothreitol, resolved by SDS-PAGE, and transferred to nitrocellulose. Electrophoretic transfer from the gels to the nitrocellulose papers and incubation with anti-Tyr(P) antibodies were carried out as described previously (33). Detection of bound antibodies was carried out with an ECL kit (Amersham) according to the manufacturer's instructions.
S6 Kinase Assay-The 40 S ribosomal subunit of Artemia salina was prepared as described previously (35). CHO.T cells, grown in 60-mm plates, were incubated for 16 h in serum-free medium in the absence or presence of AG-18. Insulin (0.1 µM) or TPA (0.4 µg/ml) was added to the cells, and incubation was continued for an additional 10-60 min as indicated. Following treatment, cells were lysed in 1 ml of buffer II (50 mM Hepes, pH 7.6, 30 mM β-glycerophosphate, 15 mM MgCl2, 10 mM EGTA, 2 mM sodium orthovanadate, 1 mM phenylmethylsulfonyl fluoride, and 25 µg/ml aprotinin) by freezing and thawing (three times). The extracts were centrifuged at 12,000 × g for 15 min at 4°C, and the supernatants were assayed for S6 kinase activity using the 40 S ribosomal protein as substrate (35).
Determination of MAPK and MAPKK Activities-Determination of these activities was carried out as described previously (36). Briefly, cells were harvested in buffer H (50 mM β-glycerophosphate, pH 7.3, 1.5 mM EGTA, 1 mM EDTA, 1 mM dithiothreitol, 0.1 mM sodium vanadate, 1 mM benzamidine, 10 µg/ml aprotinin, 10 µg/ml leupeptin, and 2 µg/ml pepstatin A) and disrupted by 2 × 7-s sonication (50 watts) on ice, followed by centrifugation at 20,000 × g for 15 min at 4°C. The supernatant contained the cytosolic extracts to be examined. All subsequent steps were performed at 4°C. Cytosolic extracts so obtained (0.5 ml) were fractionated on 0.35-ml DEAE-cellulose minicolumns. The flowthrough and wash in 0.02 M NaCl in buffer A (50 mM β-glycerophosphate, pH 7.3, 1.5 mM EGTA, 1 mM EDTA, 1 mM dithiothreitol, and 0.1 mM sodium vanadate) were collected and measured for MAPKK activity toward extracellular signal-regulated kinase-2 in a double coupled assay (36). Elution with 0.75 ml of buffer A (including 0.22 M NaCl) contained >85% of MAPK activity measured against myelin basic protein as described (36).
Intracellular ATP Content-The intracellular ATP content was determined as described (37). Briefly, CHO.T cells, grown in 10-cm plates, were incubated for 2-16 h in serum-free medium in the absence or presence of AG-18 (0 -200 mM). Following treatment, cells were lysed in 1 ml of buffer, and the ATP content of the 12,000 ϫ g supernatant was determined by the luciferin/luciferase assay (37). The chemiluminesence was monitored using a Lumac 3M Luminometer (Model M2010A). Results are the means of duplicate measurements that did not vary by Ͼ5%. Each experiment was carried out at least three times with essentially similar results.
Effects of AG-18 on Insulin Receptor Kinase Activity in Vitro and Protein Tyrosine Phosphorylation in Intact Cells-It has been previously shown that several tyrphostins inhibit the kinase activity of the insulin receptor in vitro. Among various tyrphostins studied in the present as well as previous work (27), AG-18 was found to be the most potent inhibitor. Although AG-18 (0-150 µM) failed to inhibit the autophosphorylation of the insulin receptor kinase under our in vitro assay conditions (data not shown), it inhibited insulin receptor kinase activity toward exogenous substrates with a K_i of 0.5 mM (Fig. 1) and was therefore employed to study the effects of tyrphostins on insulin- and TPA-induced tyrosine phosphorylations and biological responses in CHO.T cells.
As shown in Fig. 2 (A and B) and consistent with previous studies (38,39), incubation of CHO.T cells with insulin induced tyrosine phosphorylation of two major proteins: the 95-kDa -subunit of the insulin receptor (insulin receptor kinase) and one of its major cellular targets, IRS-1 (4,18,38,40,41). Additional proteins that underwent enhanced Tyr phosphorylation in response to insulin were pp60/pp62 (42)(43)(44) and two (out of the three) isoforms of Shc (Shc64 and Shc54), whose identity was verified by immunoprecipitation from cell extracts with Shc antibodies (data not shown).
Incubation of the cells for 16 h with increasing concentrations of AG-18 resulted in a dose-dependent inhibition of insulin-stimulated phosphorylation of several proteins (Fig. 2, A and B). This inhibition could not be attributed to an inhibitory effect of AG-18 on insulin binding since incubation of CHO.T cells with this drug had no effect on the number of insulin receptors expressed on the cell surface nor did it affect the affinity of insulin for its receptors in these cells (data not shown). AG-18 inhibited insulin-induced Tyr phosphorylation of IRS-1, with a half-maximal effect at ϳ50 M. Inhibition of pp60/pp62 phosphorylation was also readily detected (Fig. 2, A and B). In contrast, autophosphorylation of pp95, the -subunit of the insulin receptor, as well as Tyr phosphorylation of Shc64 were largely unaffected, whereas phosphorylation of Shc54 was only partially inhibited. Failure of AG-18 to inhibit Shc phosphorylation was also reflected by the inability of the drug to inhibit insulin-induced complex formation of Shc and Grb2. This was demonstrated when insulin-treated cells were preincubated for 16 h either in the presence or absence of 150 M AG-18. Under these conditions, similar amounts of Grb2 (2.3 versus 1.9% of the total) were found in Shc immunoprecipitates.
Interestingly, AG-18 was a more potent inhibitor when applied to intact cells (Fig. 2) compared with its inhibitory effects on substrate phosphorylation of the insulin receptor kinase in a cell-free system (Fig. 1). These differences could be accounted for by AG-18 accumulation within the cell, making its effective intracellular concentration higher than that applied extracellularly. Alternatively, the native conformation of the insulin receptor kinase maintained in vivo could be more susceptible to the inhibitory effects of the drug.
Effect of AG-18 on Intracellular ATP Content-Compounds with structural similarity to AG-18, such as SF 6847, could act as inhibitors of oxidative phosphorylation (45). To rule out the possibility that the inhibitory effects of AG-18 on Tyr phosphorylation are simply due to a marked reduction in intracellular energy charge, its effects on intracellular ATP content were studied, incubating CHO.T cells for 16 h with increasing concentrations of the drug (see "Intracellular ATP Content" above).
Inhibition of Insulin- and TPA-induced S6 Phosphorylation by AG-18-Insulin induces S6 phosphorylation in a time-dependent manner. A maximal effect is attained by 10 min and is persistent for at least 1 h (data not shown). As shown in Fig. 3 (left), a 16-h incubation of CHO.T cells with increasing concentrations of AG-18 yielded a dose-dependent inhibition of insulin-stimulated phosphorylation of S6. A similar extent of inhibition was obtained whether the cells were stimulated with insulin for 10, 20, or 60 min. The effects of AG-18 were rather specific (Fig. 3, right) since only insulin-mediated phosphorylation was inhibited, whereas insulin-independent phosphorylation of several other proteins remained unaffected. Half-maximal inhibition of S6 phosphorylation required incubation with 50 µM AG-18, a concentration that was similar to that required for half-maximal inhibition of IRS-1 phosphorylation, while nearly maximal inhibition was obtained at 200 µM. S6 phosphorylation could also be induced upon stimulation of the cells with TPA (Fig. 4). Here again, the stimulatory effect of TPA was inhibited upon preincubation of the cells with increasing concentrations of AG-18, with a half-maximal effect being obtained at 75 µM.
Effects of AG-18 on Insulin-and TPA-induced Activation of the MAPK Cascade-Phosphorylation of the ribosomal protein S6 is mediated by at least two different protein kinases known as p90 rsk and p70 s6k (2). Since p90 rsk is activated by MAPKs (10,15,16,46,47), we studied the effects of AG-18 on insulinand TPA-stimulated MAPKs. Consistent with previous studies (48), insulin added to CHO.T cells markedly enhanced MAPK activity; however, we failed to detect significant inhibition of this activity when the cells were preincubated with AG-18 (Fig. 5A). In contrast, AG-18 at 100 M inhibited by 50% the maximal stimulatory effect of TPA on MAPK activity (Fig. 5B). This concentration is similar to that required to inhibit TPA-stimulated phosphorylation of S6 by 50%. Interestingly, even in the presence of 200 M AG-18, no more than 50% of the TPAstimulated MAPK activity was inhibited. This suggests the presence of both AG-18-sensitive and AG-18-insensitive pathways leading to MAPK activation upon TPA stimulation. Similar results were obtained when we studied the activity of MAPKK, the dual specificity Tyr/Thr kinase that phosphorylates and activates MAPK (11,13). While TPA-stimulated MAPKK was partially inhibited by AG-18 (Fig. 6B), no such inhibitory effect was observed in insulin-stimulated cells (Fig. 6A).
AG-18 Inhibits TPA-stimulated Protein Tyrosine Phosphorylation-Since AG-18 is thought to be a selective inhibitor of protein-tyrosine kinases (26), the above results suggest the involvement of a TPA-activated and an AG-18-inhibited protein-tyrosine kinase in mediating the activation of the MAPK cascade. To directly address this possibility, the effects of TPA and AG-18 on protein tyrosine phosphorylation were evaluated. Incubation of CHO.T cells with TPA resulted in a time-dependent enhancement of tyrosine phosphorylation of several proteins, the most prominent ones having molecular masses of 114 and 120 kDa (Fig. 7, upper). Tyrosine phosphorylation of these proteins was inhibited in a dose-dependent manner by AG-18 in vivo. Furthermore, there was a good correlation between the concentrations of AG-18 required to inhibit TPA-induced tyrosine phosphorylation (of pp114 and pp120), activation of the MAPK cascade, and phosphorylation of ribosomal S6 protein (Fig. 7, lower). These results are compatible with a model in which protein-tyrosine kinases mediate at least part of the effects of TPA on MAPK activation and S6 phosphorylation.
DISCUSSION
A protein-tyrosine kinase inhibitor from the tyrphostin family (AG-18) was used to distinguish between the pathways leading to the activation of p70 s6k and the activation of the MAPK/p90 rsk cascade. AG-18 effectively inhibits insulin-induced activation of S6 kinase while having no inhibitory effect on insulin-induced activation of the MAPK cascade. These results indicate that activation of MAPK per se is not sufficient for stimulation of S6 phosphorylation and suggest that insulin-induced activation of S6 kinase(s) may occur through an alternative pathway. In this respect, our results complement studies demonstrating that MAPK activation by the insulin receptor is not required for insulin-induced metabolic processes such as glucose transport or glycogen synthase in 3T3-L1 adipocytes (49).
Inhibition of S6 phosphorylation by AG-18 largely parallels the inhibitory effects of AG-18 on insulin-induced phosphorylation of IRS-1 and is compatible with the notion that IRS-1 phosphorylation mediates many insulin responses (41,50,51), including the stimulation of S6 phosphorylation. The latter presumably involves activation of PI3K (17,18) and subsequent activation of p70 s6k (20,21,52), which occurs independent of activation of p21 ras and the MAPK cascade (53). Conversely, we have shown that inhibition of IRS-1 and S6 phosphorylation occurs without inhibition of the MAPK cascade. Although we cannot rule out the possibility that AG-18 fails to inhibit phosphorylation of IRS-1 at Tyr 895 , which is part of the Grb2-binding site (54), our findings support the view that there are alternative pathways for insulin-induced activation of the MAPK cascade, independent of IRS-1 (55,56). This conclusion is supported by the observation that association of Grb2/Sos with IRS-1 plays little if any role in MAPK activation in L6 myoblasts (57). A likely candidate to stimulate the MAPK cascade is Shc, which serves as a downstream effector of the insulin receptor (5) and acts to activate the Ras/MAPK pathway (8, 55, 58 -60). Indeed, AG-18 failed to inhibit insulininduced Tyr phosphorylation of Shc64, and phosphorylation of Shc54 was only partially inhibited. Similarly, AG-18 failed to inhibit insulin-induced complex formation between Shc and Grb2. Hence, although our results clearly support the involvement of Shc in MAPK stimulation, different Shc isoforms might play different roles in insulin signal transduction, and further studies are required to address this possibility.
Activation of p90 rsk as a result of Shc phosphorylation, together with activation of the MAPK cascade, could account for the residual S6 phosphorylation observed in the presence of 150 M AG-18 in insulin-treated cells. The fact that this residual S6 phosphorylation is rather low (ϳ20%) suggests, however, that the predominant mode of insulin-activated S6 phosphorylation (at least in CHO cells) occurs through the IRS-1/ PI3K/p70 s6k signaling pathway. Taken together, our findings are consistent with a model (Scheme 1) in which IRS-1 mediates insulin-induced activation of p70 s6k , whereas the signals leading to the activation of the MAPK/p90 rsk cascade are transmitted via the adaptor molecule Shc.
Tyrphostins were previously shown to inhibit insulin-stimulated lipogenesis in fat cells, while they failed to inhibit the anti-lipolytic effect of the hormone (28). These differences could be accounted for by the different potency of tyrphostins to inhibit phosphorylation of insulin receptor kinase substrates that could mediate these processes (IRS-1 and Shc). Accordingly, we suggest that IRS-1 is more prone to inhibition by these competitive inhibitors because it is less abundant and/or has a lower affinity toward insulin receptor kinase when compared with other insulin receptor substrates (e.g. Shc) that mediate activation of the MAPK cascade. This assumption is supported by recent findings (56), where cells expressing insulin receptor mutants (of Tyr autophosphorylation sites within the kinase region) maintained insulin-induced phosphorylation of Shc, whereas phosphorylation of IRS-1 was largely reduced. Alternatively, some of the insulin-induced Tyr-phosphorylated proteins (e.g. Shc64) could serve as substrates for intermediary protein-tyrosine kinases rather than as substrates for the insulin receptor kinase itself. These intermediary protein-tyrosine kinases could undergo activation upon insulin receptor autophosphorylation, which is not inhibited by AG-18 (see above). Activation could involve, for example, binding of SH2 domains of these putative intermediary protein-tyrosine kinases to unique Tyr(P) residues within the cytoplasmic portion of the insulin receptor.
A different picture emerges when the effects of TPA on S6 phosphorylation are studied. A good concordance exists between the inhibitory effects of AG-18 on TPA-induced MAPKK and MAPK activity and S6 phosphorylation, which suggests that protein kinase C induces S6 phosphorylation preferentially through the MAPK signaling pathway. Moreover, the difference in the effects of AG-18 on insulin-versus TPA-activated MAPK suggests that, in these cells, insulin-induced activation of the MAPK cascade occurs via a protein kinase Cindependent pathway. Hence, p70 s6k appears to be mainly responsible for insulin-induced S6 phosphorylation (20), while p90 rsk could mediate the effects of TPA (61).
Since AG-18 is a very poor inhibitor of Ser/Thr protein kinases, including protein kinase C (62), the inhibitory effects of AG-18 suggest that one or more AG-18-sensitive protein-tyrosine kinases mediate the effects of protein kinase C on the MAPK pathway and S6 phosphorylation. This conclusion is supported by the facts that (i) TPA induces protein tyrosine phosphorylation in CHO.T cells, and (ii) AG-18 inhibits both TPA-stimulated tyrosine phosphorylation and TPA-stimulated MAPK activity with a similar dose-response curve. Although we cannot rule out the possibility that insulin and protein kinase C activate different isoforms of the dual specificity MAPKK, our results are most consistent with the hypothesis that the TPA-activated protein-tyrosine kinase presumably differs from the dual specificity kinase, MAPKK. This conclusion is primarily based on the fact that insulin-induced activation of MAPKK is insensitive to the presence of AG-18.
FIG. 7. Effect of AG-18 on TPA-induced protein tyrosine phosphorylation. Upper, confluent CHO.T cells were incubated for 16 h in serum-free medium with the indicated concentrations of AG-18. At the end of incubation, 0.4 µg/ml TPA was added for a 30-min incubation period. Cells were then washed three times with ice-cold PBS, and cell extracts were prepared, resolved by 10% SDS-PAGE, and immunoblotted with anti-Tyr(P) antibodies. Lower, the intensity of the bands corresponding to pp120 and pp114 was quantitated by densitometry, and the percent inhibition induced by AG-18 was calculated. The effects of AG-18 on TPA-induced S6 phosphorylation and MAPK activity are also presented for comparison.
SCHEME 1. Tentative model illustrating the signaling pathways induced by insulin and TPA that stimulate S6 phosphorylation. Many proteins known to take part in other aspects of insulin and TPA-mediated responses were eliminated for simplicity. The expected sites of action of AG-18 are indicated. PM, plasma membrane; IR, insulin receptor; PKC, protein kinase C; PTK, protein-tyrosine kinase; Erk, extracellular signal-regulated kinase.
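As a rough illustration of the quantitation step described in the legend to Fig. 7 (band intensities measured by densitometry and converted to percent inhibition), the sketch below fits a simple logistic (Hill-type) inhibition curve to hypothetical densitometry readings to extract a half-maximal concentration. The numerical values and the starting guesses for the fit are assumptions for illustration only and are not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0, 150.0, 200.0])   # [AG-18], microM (hypothetical design)
band = np.array([100.0, 92.0, 78.0, 55.0, 33.0, 22.0, 15.0])    # hypothetical band intensity, % of control

inhibition = 100.0 * (band[0] - band) / band[0]                  # percent inhibition relative to no-drug control

def hill(c, ic50, n, top):
    """Simple Hill-type inhibition curve."""
    return top * c**n / (ic50**n + c**n)

popt, _ = curve_fit(hill, conc, inhibition, p0=[50.0, 1.0, 90.0],
                    bounds=([1.0, 0.2, 10.0], [1000.0, 5.0, 110.0]))
print("fitted IC50 ~ %.1f microM, Hill n ~ %.2f, max inhibition ~ %.0f%%" % tuple(popt))
```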
Several studies implicate protein kinase C, the direct effector of TPA, as a mediator of protein tyrosine phosphorylation events. In rat basophilic leukemia cells, Tyr phosphorylation of a 110-kDa protein occurs secondary to calcium influx and protein kinase C activation (63). Activation of protein kinase C and/or the induction of calcium influx was implicated in immunoglobulin E receptor-induced Tyr phosphorylation of focal adhesion-associated tyrosine kinase (pp125 FAK ) in fibronectin-adherent rat basophilic leukemia cells (64,65). Similarly, protein kinase C was shown to mediate carbachol-stimulated tyrosine phosphorylation in human SH-SY5Y neuroblastoma cells (66). Our results suggest that a protein kinase C-stimulated protein-tyrosine kinase should be present upstream of MAPKK in the protein kinase C signaling pathway, leading to the activation of p90 rsk .
The nature of the TPA-activated protein-tyrosine kinase is presently unknown, but among its potential substrates, we find pp114 and pp120, whose inhibited phosphorylation correlates with inhibition of MAPK activity. Hence, we can formulate a tentative signaling cascade (Scheme 1) in which a TPA-activated protein-tyrosine kinase stimulates the common Grb2/Sos and the Ras signaling pathway (8, 55, 58 -60) and in such a way leads to activation of MAPK and S6 phosphorylation (13). Further studies are required, however, to figure out the role of pp114/pp120 and to determine whether the TPA-activated protein-tyrosine kinase indeed utilizes the Grb2/Sos/Ras signaling elements to induce activation of the MAPK cascade. | 2018-04-03T05:54:25.866Z | 1995-11-24T00:00:00.000 | {
"year": 1995,
"sha1": "b515079b090ac3cf013d04d8de27f777d623a4b0",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/270/47/28325.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "906b607890783de85e3643f4b466375a60b14632",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
25089033 | pes2o/s2orc | v3-fos-license | Collinear optical weak measurements with photonic crystals
We present a theoretical and experimental study of a photonic crystal based optical system in terms of weak values that map polarization states onto longitudinal spatial position and show fast and slow light behavior.
It is well known that the expectation value of a quantum mechanical operator lies within the range of its eigenvalues. However, certain measurements of a physical observable can produce values far outside this range. Aharonov, Albert, and Vaidman first described the results of such unusual kinds of measurements as "weak values" [1]. These weak values have been theoretically discussed and experimentally demonstrated in optical systems involving the angular deflections of polarized beams passing through birefringent prisms [2].
In this work, we present a theoretical and experimental study of a frequency-dependent, polarization-sensitive, collinear optical system based on a two-dimensional, birefringent photonic crystal. The group delay or time-of-flight of an optical pulse through this system is given by the eigenvalues if the system is analyzed with pure polarization basis states or eigenstates; however, unbounded values result for general superposition states. Using the theoretical framework introduced by Aharonov, Albert, and Vaidman, we show that these phenomena can be understood in terms of quantum mechanical weak measurements.
The system we consider is based on a transparent, birefringent photonic crystal which imparts a polarizationdependent phase to electromagnetic (EM) waves. The critical aspect of the photonic crystal is the large frequencydependent birefringence in transparent spectral regions [3]. These polarization-dependent phases are functions of frequency defined as φ TM (ω) and φ TE (ω) where the labels TM and TE refer to transverse magnetic and transverse electric, respectively. We introduce an EM wave propagating along the y axis, normally incident on this crystal, and define the independent vertical and horizontal polarization states |1> and |2> as parallel to the z and x axes, respectively. The crystal is free to rotate in the xz-plane by an angle β with respect to the z-axis.
If we postselect a particular transmitted state, the complex response of this system is given by T(ω,β) = <ψ_f|exp[iΓ(ω,β)]|ψ_in>, where |ψ_in> and |ψ_f> are normalized vectors describing the polarizations of the incident and detected fields, respectively, and exp[iΓ(ω,β)] is a complex operator which describes the action of the birefringent medium on the initial state. Since we are dealing with two-dimensional polarization rotations, we can construct this operator in terms of the usual Pauli spin matrices and I, the identity operator. For example, it is not difficult to verify that for β=0, it takes the simple form exp[iΓ(ω,0)] = (1/2){exp[iφ_TE(ω)](I+σ_z) + exp[iφ_TM(ω)](I−σ_z)}. In Fig. 1, we display a theoretical contour plot of arg{T(ω,β)} generated using simple linear models for the birefringence. We can see singular points at β equal to π/4 and 3π/4 and normalized frequency equal to one.
In Fig. 2, we display experimental results for the phase delay vs. frequency for the light transmitted through a photonic crystal positioned at different angles. As expected, the phase delay shows unusual effects near the singularity. By taking numerical derivatives of the phase delay with respect to frequency, we can generate experimental data for the weak values <A ω > W . In Fig. 3 we show experimental measurements of the group delay at the half-waveplate frequency as a function of the waveplate angle. Clearly, the group delay assumes extreme values ("fast and slow light") of opposite sign on either side of β=π/4, and takes positive, non-degenerate values at β=0 and β=π/2.
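A minimal numerical sketch of the transmission model described above is given below. It assumes a vertical preselected and postselected polarization (|ψ_in> = |ψ_f> = |1>), a rotated-crystal operator obtained by the usual Jones-matrix conjugation R(β)exp[iΓ(ω,0)]R(−β), and simple linear models for the two birefringent phases; these choices are consistent with, but not explicitly stated in, the text, and the numerical values are illustrative.

```python
import numpy as np

a_TM, a_TE = 10.0, 10.0 + np.pi        # illustrative linear phase slopes; phi_TE - phi_TM = pi at w = 1

def T(w, beta):
    """Postselected transmission amplitude <psi_f| R(beta) J0(w) R(-beta) |psi_in>."""
    J0 = np.diag([np.exp(1j * a_TE * w), np.exp(1j * a_TM * w)])   # crystal at beta = 0
    c, s = np.cos(beta), np.sin(beta)
    R = np.array([[c, -s], [s, c]])                                # rotation of the crystal in the xz-plane
    e = np.array([1.0, 0.0])                                       # vertical polarization |1> (pre = post)
    return e @ (R @ J0 @ R.T) @ e

def group_delay(w, beta, dw=1e-6):
    """Weak value of the time-of-flight: numerical derivative of arg T with respect to frequency."""
    return (np.angle(T(w + dw, beta)) - np.angle(T(w - dw, beta))) / (2 * dw)

for beta in (0.0, np.pi / 4 - 0.01, np.pi / 4 + 0.01, np.pi / 2):
    print("beta = %.3f   group delay = %+8.1f" % (beta, group_delay(1.0, beta)))
# Moderate positive delays at beta = 0 and pi/2; large delays of opposite sign just
# below and just above beta = pi/4 at the half-waveplate frequency ("slow"/"fast" light).
```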
In conclusion, we have described and experimentally demonstrated optical weak values that manifest as a longitudinal observable, i.e., the longitudinal position or time-of-flight of a wavepacket. The weak measurements of this observable may assume values corresponding to either superluminal propagation or slow light, with a very sharp transition between the two regimes. Although our experiments were performed using classical signals, the results apply equally well to quantum-mechanical experiments. Since the weak values in our experiment are extremely sensitive to the angular position near the singularity, a system of this type can be used to make precise measurements of angular position. | 2017-02-10T04:27:56.707Z | 2003-06-06T00:00:00.000 | {
"year": 2003,
"sha1": "d12353cc7a0ba3bae8e1cf0f2c1ca02f6ff6c009",
"oa_license": null,
"oa_url": "http://arxiv.org/abs/quant-ph/0309032",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0bbfae962c4d9d5a3b8cbd56d01ed790c574089d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
225374414 | pes2o/s2orc | v3-fos-license | Discussion on the Combination of Sports and New Media in the New Era
With the development of society, the sports economy system is constantly improving and the sports industry is developing rapidly on the basis of social development. In recent years, the growth rate of the economic output value of the sports industry has obvious advantages compared with the growth rate of other industries at the same stage. It can be said that the development of the sports industry is also a component of social and economic development. And the rapid development of the modern sports industry is inseparable from the efficient combination of media technology. In the context of the rapid development of new media, new media and TV (television) media are indispensable ways for the communication of sports culture to achieve an effective combination of sports and media. And the organic combination and application of the two is an important trend in the development of sports media.
I. INTRODUCTION
The rapid development of the sports industry is inseparable from the efficient integration of new media technology. With the rapid updating and popularization of advanced Internet technology and new media equipment, society is entering a new period of information resource interaction. New media can bring a large amount of information resources to the sports industry and provide a medium for the rapid dissemination of sports information.
With the rapid popularization of Internet technology, new media has penetrated into people's everyday lives. The International Olympic Committee's website, http://www.olympic.org/, provides high-resolution photos, Video News Releases (VNRs), press releases, information on accreditation, a calendar of events, and more. In today's new media era, the living space of traditional media such as television and newspapers has been squeezed by Internet media. For example, the development and promotion of the sports industry on mobile terminals has continued to reduce the total number of TV users. In addition, the Internet is changing the channels of information dissemination: people can now obtain information through WeChat and microblogs, which further drains traditional media users.
Nevertheless, the dissemination of sports culture and the construction of the sports industry still cannot do without the support of traditional media. How to efficiently combine traditional television media with new media, so as to maximize their effect in promoting the development of the sports industry, is therefore a major problem that needs to be resolved.
II. WAYS TO COMBINE SPORTS AND MEDIA IN THE NEW MEDIA ERA: EFFECTIVE INTEGRATION OF NEW MEDIA AND TRADITIONAL MEDIA
The difference between today's sports media and that of the 20th century is that the interactivity of online social media has been greatly enhanced; the Internet gives people more freedom and more ways of expressing themselves when watching sports events. On the one hand, people can learn about the latest sports information through emerging online media; on the other hand, they can break away from the limitation of unilaterally accepting information from traditional media, express their opinions and standpoints on their own initiative, and interact with others in the online community in the process. At the 2016 Rio Olympics, mobility, video and interactivity became the biggest features of Olympic reporting.
In the era of new media, in addition to professional media and reporters, anyone on the scene can become a producer and disseminator of primary information. The content of self-media may be no less original or rich than that of professional media. Social media allows everyone to participate in discussion of the Olympics and generates more user interaction. Surveys show that the share of viewers following the Olympics through WeChat, search engines and video sites is approaching that of TV. New media such as social networking sites have become the main channels through which young people follow the Olympics.
However, the receivers of sports information include not only the new media audience but also traditional television audiences. Although today's young people are more willing to obtain sports information from mobile and computer terminals, there is still a considerable number of older viewers who keep the habit of watching sports events and following sports information on TV. This shows that traditional media still has value for the development of the sports industry. Therefore, while making use of new media in the construction of the sports industry, the role of traditional media cannot be ignored. Instead, it is necessary to combine the two through effective means, give full play to the role of the media in the sports industry, and better integrate media with sports.
However, as mentioned above, the living space of television media is now being squeezed by online media. Therefore, if sports and media are to be better integrated, it is first necessary to effectively integrate traditional sports media and new media in many respects.
A. The information fusion between TV media and new media
The biggest advantage of new media is that information is released in a timely manner and audiences can easily obtain it through PC and mobile terminals. For example, the public can see news updates from various sports teams on the Internet as soon as they appear, and obtain timely information and commentary on many Chinese and foreign athletes. The multiple channels of the Internet give users more choices in obtaining information. However, technology is always a double-edged sword: while creating value, it also exposes its own shortcomings. New media has brought a vast amount of sports information to the audience, but this information is mixed, and the authenticity of some unofficial information cannot be ensured. Therefore, in order to effectively integrate traditional sports media and new media, it is necessary to make their resources complementary at the information level. In practice, many traditional TV media outlets and new media platforms already cooperate commercially, with television media providing real and effective information to new media to make up for the latter's lack of authenticity.
For example, the sports client of Internet TV offers multiple columns, such as program lists for major sports games, live or on-demand events, and exclusive commentary by hosts. In addition, coverage of various sports events complements and integrates sports information resources from multiple angles, giving traditional television media brand-new vitality through a new carrier.
B. The complementation and integration of the communication forms and content of TV media and new media
Another outstanding advantage of new media over TV media is that it is highly interactive. Internet users can participate in various activities related to sports events on new media, and they can communicate and discuss in online communities with other users whom they may not know in real life, which is difficult for television media to achieve.
Judging from its development history, TV media has also run prize-winning quiz activities through telephone calls and text messages, but this form of interaction has found it difficult to attract users since the advent of network technology. Therefore, TV media needs to make good use of its brand influence to further innovate its forms of interaction and communication. For example, the CCTV Sports Channel has opened its own columns on microblog, its WeChat official account and sports forums to create distinctive and open programs, increase audience participation, and further increase the influence of its own brand. [3]
C. The complementation and integration of the communication channels of TV media and new media
New media can make use of the brand influence, credibility and resources of traditional sports television media, while traditional sports television media can use the communication technology of new media, so the two complement each other. For example, a new media platform and Guangdong Sports Channel once cooperated to let viewers see more international competitions. Not only did this increase the ratings of Guangdong Sports Channel, but the new media platform also stood out among similar platforms and gained more attention.
In general, in the context of the rapid development of new media, both new media and television media are indispensable channels for the spread of sports culture. The organic combination and application of the two is an important trend in the development of sports media. Since new media and traditional media each have their own advantages and disadvantages, exploiting their complementary strengths can realize the sustainable, long-term development of sports media.
III. ECONOMIC AND SOCIAL BENEFITS OF COMBINING SPORTS WITH MEDIA IN THE NEW MEDIA ERA
The further combination of sports and new media can dig deeper into the potential market of the sports industry.
The development of media can bring more added value to the sports industry.For example, the global sports information integration technology launched by Internet technology has brought more market and economic benefits to the sports industry.
The new media carnival triggered by the 2016 Rio Olympics is leading the Internet industry to use sports competitions to explore a new model for the layout of its entire industry chain. In the future, some Internet giants will continue to acquire high-quality event resources, and they may also try to move into event-operation copyrights and team shareholdings. It is certain, however, that a new industrial chain is being created and integrated by new media, and new media will surely play an important role in the formation of this industrial chain. Now, entering the 2020s, the number of media that can be combined with sports has further increased. For example, Hupu and other sports information apps, which cover a variety of sports forms and event content, provide smartphone users with more effective and practical information services. In recent years, the number of mobile phone users has kept growing. Compared with the PC client, consumers are more willing to get the latest sports information from mobile media anytime and anywhere. From another perspective, mobile media allows consumers to purchase tickets for events more conveniently and brings more revenue to the hosting of sports events. In addition, the development of media technology can give information audiences a better experience. For example, Alisports and the CCTV Sports Channel not only sponsor the Hangzhou International Marathon, but also introduce high technologies such as traffic intelligence brains, driverless technology and face-scan payment to help the 2022 Hangzhou Asian Games "go to the cloud", break through space constraints, give play to the superimposed effect of technology and industry, and bring a brand-new experience to audiences. [4] Therefore, while mobile media brings more audiences to the sports industry, it also gains more means for its own profitability.
Looking back at the entire process of combining media and sports, from newspapers to TV and from computer websites to mobile phone software, the media has kept evolving. Meanwhile, consumers' spiritual, cultural and material needs related to the sports industry have not decreased and will even continue to increase with the development of sports. While the development of the media has met the needs of consumers, it has also brought more and more economic and social benefits to the development of the entire sports industry. It can therefore be seen that the advancement of media technology and its effective integration with the sports industry can bring more business opportunities to society, allowing the potential of the growing sports market to be continuously tapped.
IV. CONCLUSION
In summary, the efficient combination of increasingly improved media technology and sports has promoted the rapid development of the sports industry.
To combine the two, traditional TV media and new media must be effectively integrated at the three levels of information, communication forms and content, and communication channels.
From an impact perspective, in the context of the new media era, the combination of sports and media will bring positive feedback to the society.It is mainly reflected in the role of the media in disseminating the unique culture of the sports industry, bringing huge social benefits to the society, further integrating sports and media, digging into the potential market of the sports industry and creating economic benefits.
The times will continue to progress, and new media is also in a process of continuous development. In order to achieve long-term development of the sports industry, it is necessary to combine new media and sports in accordance with the development trend of the times, to keep experimenting and to actively put new approaches into practice. | 2020-08-20T10:03:53.347Z | 2020-08-05T00:00:00.000 | {
"year": 2020,
"sha1": "89a41dbe49b88aa6b1705fa2073086aadf74f3d5",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125942956.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ba9f520d88b49b172a697ae470c105ed8a8f5859",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
12435626 | pes2o/s2orc | v3-fos-license | The Canadian contribution to the US physician workforce
Background: A physician shortage has been declared in both Canada and the United States. We sought to examine the migration pattern of Canadian-trained physicians to the United States, the contribution of this migration to the Canadian physician shortage and policy options in light of competing shortages in both countries. Methods: We performed a cross-sectional analysis of the 2004 and 2006 American Medical Association Physician Masterfiles, the 2002 Area Resource File and data from the Canadian Institute for Health Information, the Canadian Medical Association and the Association of Faculties of Medicine of Canada. We describe the migration pattern of Canadian medical school graduates to the United States, the number of Canadian-trained physicians in the United States in 2006, the proportion who were in active practice, the proportion who were practising in rural or underserved areas and the annual contribution of Canadian-trained physicians to the US physician workforce. Results: Two-thirds of the 12 040 Canadian-educated physicians living in the United States in 2006 were practising in direct patient care, 1023 in rural areas. About 186, or 1 in 9, Canadian-educated physicians from each graduating class joined the US physician workforce providing direct patient care. Canadian-educated physicians are more likely than US-educated physicians to practise in rural areas. Interpretation: Minimizing emigration, and perhaps recruiting physicians to return to Canada, could reduce physician shortages, particularly in subspecialties and rural areas. In light of competing physician shortages, it will be important to consider policy options that reduce emigration, improve access to care and reduce reliance on physicians from developing countries.
This study describes the migration of Canadian-trained physicians to the United States, particularly those who chose to remain in the United States, in relation to their school of training, specialty, rural status and practice type.
Methods
We performed a cross-sectional secondary analysis of the 2006 American Medical Association (AMA) Physician Masterfile to identify and locate all graduates of Canadian medical schools who had immigrated to and were working in the United States. 9 The AMA Physician Masterfile includes data on all physicians who reside in the United States, including AMA members and nonmembers and graduates of foreign medical schools. The AMA Physician Masterfile data include physician name, medical school and year of graduation, sex, place and date of birth, geographic location and address, type of practice, present employment and practice specialty. 10 We obtained data about the Canadian physician workforce from the Canadian Medical Association Physician Masterfile (2003) and the Scott's Medical Database (2005; reported by the Canadian Institute for Health Information). Summary data about the number of graduates from Canadian medical schools and their residencies were obtained from the Association of Faculties of Medicine of Canada and the Canadian Post-MD Education Registry (2005); Canadian medical school and residency graduation volumes were averaged over the most recent available decade (1996-2005). 11,12 Summary data from the Scott's Medical Database about practising physicians who annually emigrate from and return to Canada were available from the Canadian Institute for Health Information. 13,14 The numbers of physicians emigrating from and returning to Canada were averaged from 1995 through 2004 (physicians in residency training were excluded from these data in 1995 so that only physicians eligible to practise were included). Data on the number of graduates who specifically immigrated to and emigrated from the United States were received from the Canadian Institute for Health Information. Data on the number of graduates of US medical schools practising in Canada were available for 2004 from the Canadian Institute for Health Information. 1 To determine the number of Canadian-educated physicians who were practising in rural or Health Professional Shortage Areas in the United States, we matched county of practice with the 2004 Area Resource File and the 2003 Rural-Urban Continuum Codes from the United States Department of Agriculture. 15 To assess the degree of data lag in the AMA Physician Masterfile, we examined the longitudinal migration patterns in the 2004 and 2006 AMA Physician Masterfiles. We limited the assessment of net annual migration patterns to physicians who graduated from medical school before or during 2000 to avoid counting graduates who were still in residency training in 2006. We performed all other assessments using data on physicians who graduated before or during 2006. We performed simple frequency analysis by birth country, Canadian medical school of graduation, rural versus nonrural address and whole- or partial-county Health Professional Shortage Area status. We performed χ2 analysis to test for significant differences between US- and Canadian-educated physician practice locations. The number of graduates of Canadian medical schools who practised in direct patient care in the United States by graduation year was obtained from both the 2004 and 2006 AMA Physician Masterfiles and was compared with Canadian physician migration data.
To quantify longitudinal effects, patterns were averaged between 1960 and 2000 to account for possible attrition due to retirement (lower bound) and for graduates who might still be in residency training (upper bound).
Despite delays in data reporting and other inaccuracies such as confusion about work versus practice address, the AMA Physician Masterfile is the most complete and authoritative source of information on physicians in the United States, particularly at a national level of analysis. 16,17 Previous studies that have compared AMA Physician Masterfile data with physician census surveys have found that AMA Physician Masterfile data were reliable and adequate for work-force projections and policy studies when aggregated to the state level. 17 We have previously shown that the AMA Physician Masterfile is valid for rural and whole-county Health Professional Shortage Areas where the accuracy of the data is nearly 90% for county-of-practice classification. 18 Canadian physician databases suffer from many of the same lags and accuracy problems as the AMA Physician Masterfile for similar reasons, particularly the administrative sources. US and Canadian databases were not directly linked but contemporary databases were used for temporal comparisons and to fill in gaps in each about the physicians who train in Canada and migrate to or from the United States.
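For readers unfamiliar with the χ2 comparison described in the methods above (rural versus nonrural practice location for Canadian- versus US-educated physicians), the sketch below shows how such a test of independence can be run on a 2×2 contingency table. The counts are hypothetical placeholders, not the study's data, and the availability of NumPy and SciPy is assumed.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (NOT the study's data):
# rows = physician education (Canadian, US), columns = practice location (rural, nonrural)
table = np.array([
    [1000, 7000],     # Canadian-educated: rural, nonrural (illustrative counts)
    [40000, 560000],  # US-educated: rural, nonrural (illustrative counts)
])

# Chi-squared test of independence between education group and rural practice
chi2, p, dof, expected = chi2_contingency(table)

# Rural share within each education group, for descriptive comparison
rural_share = table[:, 0] / table.sum(axis=1)
print(f"rural share (Canadian-educated): {rural_share[0]:.3f}")
print(f"rural share (US-educated):       {rural_share[1]:.3f}")
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```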
Results
In 2006, there were 32 241 family physicians and general practice physicians and 30 656 medical and surgical specialists in Canada. 19 On average each year since 1996, 1642 physicians have graduated from medical school and 1683 physicians have graduated from residency training programs in Canada. 11 Physicians have also been leaving and returning to the country: 1 262 physicians who completed their residency training in Canada left the country and 317 returned. We found that Canada contributed about 186 active, direct-patient care physicians to the US health care system annually (range 37-268); however, we found a lag of 5 or more years in our ability to monitor this from the US data. Americans who graduated from Canadian medical schools accounted for 14
Interpretation
In 2006, 1 in 9 Canadian-educated physicians practised in the United States. If physicians who were born in the United States are excluded, this number is reduced to 1 in 12. This accounts for just over half of the net loss of physicians from the Canadian-trained physician workforce. Collectively, this is equivalent to having 2 average-sized Canadian medical schools dedicated to producing physicians for the United States. Canada is the second largest source of immigrant physicians to the United States, second only to India.
The number of emigrant physicians approximates the current physician shortage in all Canadian provinces. In addition, graduates of Canadian medical schools who practise in the United States are more likely to choose to practise in a rural area compared to US graduates. If these physicians were to choose to stay and practise in rural Canada, this would dramatically alleviate physician shortages in rural areas of the country. 22 The migration of US-trained physicians to work in Canada, only 400-500 physicians, is minuscule in comparison; this was substantiated by another recent study using similar data sources. 23 Immigration of Canadian-trained physicians to the United States may be slowing, as there was a net gain in the number of physicians who returned to Canada in 2004. Our findings are in contrast to pronouncements that emigration is not a major contributor to physician shortages in Canada. 6,7 The annual net migration of Canadian-educated physicians has been sustained until recently. Although this migration has been well documented, its aggregate contribution to the physician shortage in Canada has not been. 1,4 There may be many reasons that Canadian-educated physicians immigrate to the United States, and exploring these reasons will be an important step in designing policies that support the decision to stay in, or to return to, Canada. In the 1990s, there were Canada-wide and municipal policies associated with peak emigration, such as geographic and billing restrictions, but it remains unclear how these policies affected emigration trends. 24 Highly specialized physicians may have a greater opportunity to develop their skills, and the earning potential can also be much greater for some specialties in the United States compared with Canada. Lower taxes in the United States and rapidly rising educational debts for Canadian-educated physicians may also increase their desire to immigrate to the United States. Canadian-educated physicians may also be responding to a rigidly controlled residency training system. Whatever the reasons for physician emigration, a lack of awareness, or a lack of response, has contributed to Canada's physician shortage. In response to physician shortages in Canada, the number of spaces in publicly funded medical schools has been increased by 15% to 30%, the first new medical school in more than 30 years has been opened, 25 satellite campuses of existing medical schools have been created, 26,27 the number of post-graduate positions has been increased and restrictions on international medical graduates have been loosened. 28-30 There are inherent limitations of the AMA Physician Masterfile and in the cross-sectional design of our study. Because of these limitations, there is a risk of over-counting Canadian medical school graduates who train or practise in the United States and then return to Canada and a risk of undercounting physicians who have finished residency training but who are not yet counted in the physician workforce. Both the Canadian and US physician data have similar limitations in measuring migration patterns, especially for nonrespondents and in the years closest to graduation from residency training. Reliability appears to be poorest for physician data in the United States and Canada in the 3-5 years immediately after completion of residency training.
Our comparisons of 2004 and 2006 AMA Physician Masterfile data suggest that this data lag may underestimate the number of Canadian-trained physicians practising in the United States by 10% or more. It also prevents a clear picture of how migration has changed over the most recent three or more years. There is also evidence of some lag time in accounting for physicians who have migrated. We believe that the evidence points to an underestimation of migration to the United States, with a lag time of 5 or more years.
Our findings suggest that physician migration to the United States may be decreasing, but that efforts to further stem this loss would be beneficial. Understanding which policies would be most potent in this regard may require further study; however, past research has suggested that reducing debt loads and salary differentials between Canada and the United States, using incentives to encourage physicians to practise in specific locations or providing liberal training options may help to alleviate shortages. 31 Provincial governments could consider incentives to attract Canadian-educated physicians back to Canada. Encouraging migration offers some degree of control over the physician-specialty mix, and policy options to stem migration risk loosening these controls. Given the cumulative loss and physician shortages in Canada, relaxing controls on migration may be timely. Canada also benefits from the US post-graduate training system, but this benefit carries risk. Of the nearly 500 graduates of Canadian medical schools who are in US residency training programs in any given year, more than two-thirds will leave the United States and presumably return to Canada. Many physicians take advantage of training in the United States that is unavailable in Canada and do so at a cost of as much as US $48 000 000 to the US Medicare program per year (the median Medicare payment per resident was US $121 169 in 2001). This training exchange benefits Canada's physician workforce, both offering and financing broader training opportunities for physicians. However, Canadian-educated physicians who complete their residency training in the United States are less likely to return to Canada and are as much as 9 times more likely than Canadian-educated physicians who completed their residency training in Canada to later immigrate to the United States. 31 It may be desirable to respect this risk and permit the exchange, but to create incentives for returning to Canada. The United States is a major beneficiary of the Canadian medical education system, and Canada is a beneficiary of US post-graduate training programs. These trade-offs may represent a mutually beneficial exchange that is not typical of most physician-donor nations. Canada and other developed countries could promote these beneficial exchanges while avoiding the "pillage" of physicians from developing countries. 31 | 2018-04-03T00:31:03.519Z | 2007-04-10T00:00:00.000 | {
"year": 2007,
"sha1": "386351822fd431a494771500a9ea614134fb1568",
"oa_license": null,
"oa_url": "http://www.cmaj.ca/content/cmaj/176/8/1083.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "6adf8a747b7091c6a38374e39aecc7e9abf3ac6a",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
214704972 | pes2o/s2orc | v3-fos-license | Innovative Approaches of Political Education in Ukraine: On the Example of Developed Western Democracies
This article discusses the particular traits of the contemporary political education of students in Ukraine. A system of political education is introduced to develop the fundamental political knowledge and skill-set of students, as well as to aid in the formation of their civic consciousness. Innovative modernization approaches are illustrated by the work of the American Political Science Association. The main purpose of the research is to show modernization methods of political education, which is the basis for the formation of democratic values in Ukrainian society. Much attention is paid to the correlation between civic and political education. The need for value content of political education adds to the urgency of the study. This involves making greater use of the tools of political education to support the dialogue of cultures in the modern world. Ensuring that society is prepared to solve global problems by developing the active position and individual responsibility of each young person should be an equally important result of political education in the 21st century. Political education now requires a walkthrough approach to implementation (through project initiatives, socio-cultural activities, networking and various types of cooperation among those receiving education in the socio-political sector). The authors emphasize that it is worth rethinking the new opportunities of social networks and the Internet era for building youth initiatives and teams for project work. However, one of the challenges of the information society is the dominance of different types of data ("information noise"), which requires young people to have a higher level of media literacy and more developed critical thinking. Systematization of literary sources and approaches for solving the problem indicates that new systems of political values have been formed, which are essential for democratization. Two groups of mutually perceived values were identified. The first group is dominant; it is associated with values such as security, sovereignty, patriotism, freedom, human rights, justice and political stability. The second group of established values is evolving: legality, responsibility, equality, political pluralism and legitimacy. This article reveals Ukrainian citizens' understanding of these values on the basis of Ukrainian and European sociological research.
Introduction
The relevance of this subject is defined by the acuteness of the problem of students' lack of political knowledge and competence. Over the decades of Ukraine's independence, more than half of its citizens have admitted that they do not have enough political knowledge or experience of political activity. The need for political education is therefore becoming increasingly relevant in Ukraine: it can become an additional tool for the assimilation of democratic values, norms and models of behavior. The USA can serve as an example, where since the middle of the 19th century the teaching of political science has had the purpose of forming a democratic political culture among youth and a commitment to democratic values.
Today, the younger generation in Ukraine has to become a full-fledged subject of political activity and political relations, without whose participation cardinal social and political changes are impossible. It is youth that concentrates the main tendencies of social development. Our analysis highlights an unfavorable trend in Ukrainian society: a rise in spontaneous protest alongside an inability to establish organized forms of political participation. This tendency can be identified throughout modern Ukrainian history and is especially evident in the events of 2004 and 2013-2014. The problem of young citizens' conscious participation in governance, and of their perception of politics as a sphere of personal interest and responsibility, therefore becomes especially acute.
Literature Review
Research on the interrelation of political education and the political participation of youth began within the cognitive school in psychology (Shestopal, 1988), with the process of thinking as its object. The very idea of linking the age-related capacities of the personality with its perception of political meanings is fruitful, as it allows the formation of political consciousness to be traced from the standpoint of the development of the personality itself and the internal regularities of the formation of its thinking (Przeworski, 1991).
Another direction of research on political education was introduced by cognitive psychologists such as Jehuda and Morrison (Shestopal, 1988). They analyzed the purposeful influence of factors (agents) of socialization, among which school institutions were identified as the most influential, through which the political values of the younger generation are formed and the norms of the political system are assimilated.
Within the cognitive school, the active personality is interpreted as possessing a certain type of political thinking, which acts as the basis for the choice of certain political behavior. In the course of forming political consciousness, the individual develops a type of political thinking, a picture of political reality through which he or she perceives politics, makes decisions and acts. This direction of research continues in a number of concepts of political training.
Modern research in the field of political education began within the West German school of political didactics. The work of Heitmeyer and Jacobi (1991), which combined the problems of political socialization and education, was published in this vein. Shcherbinin (1996) noted that works in the field of political didactics represent a transition from the theory of political socialization to the practice of political education oriented toward the political process.
From the 1960s onward, practical programs for the assimilation of political knowledge by youth were developed in Germany. Practical orientation became the main feature of the system of political education, because the formation of the political consciousness of youth is closely connected with the social and political experience of young people.
The American scholars Almond and Verba (1992) emphasized that training acts as a component of the process of political socialization that shapes the political orientations of the individual. The assimilation of political knowledge forms the models of political behavior of the younger generation (Almond & Verba, 1992).
The problem of definition of ways of assimilation and transfer of the systematized political knowledge through an education system becomes more and more urgent in domestic scientific literature. The purpose of this article is to define the ways of modernization of political education of Ukraine on the basis of the international innovative approaches.
Modern scholars view the problems of political education in the light of the threats and challenges posed by the information society. Youth is a mobile social group open to newly designed IT products and applications. However, this raises the problem of media literacy, critical thinking and the proper interpretation of information and news by a young person. There is a need to analyze an event from different sources in order to assess it objectively; this is time-consuming, but it demonstrates the responsible position of a consciously thinking person. Thus, developing the civic culture of young people, and their prudent readiness to engage in social and political life not through protest but in a constructive way, to build democracy and to protect human rights and the principles of social justice, becomes more relevant. For example, the researchers Zijun Hu and Lia Li agree that the Internet era creates many opportunities for project work, team building and collaboration, yet they note that even as young people acquire the skills of working with computers, they remain, in a sense, natives of the information society (Hu & Li, 2018). A young person becomes dependent on the evaluative judgments of many strangers. On the other hand, the Internet also offers an opportunity for certain destructive forces in the public sphere to unite and manipulate public opinion. Therefore, political education should include a value component that fosters the development of a culture of civic consciousness among young people, an understanding of the importance of diversity, and the building of a society based on consensus polylogue.
According to the theorists Zafer Kus and Orge Tarhan, modern political education of youth should be walkthrough in character, i.e. not limited to a single subject or module but implemented through a variety of teacher initiatives outside the educational institution: support for youth project activities, educational activities in social and cultural institutions, and so on (Kus & Tarhan, 2016). It is therefore important that political education should have a value-based rather than a politicized colouring.
The Polish scholar T. Szkudlarek discusses whether the current political education of young people can support the "depoliticized" dimension of political realities (Szkudlarek, 2013). The theorist emphasizes that most global problems of the modern world require responsible individual solutions and self-management. This trend, teaching young people to be responsible for themselves and, therefore, for society in general, also has a clear value colouring. So we can say that the basis of political education is the values of responsibility, social justice and integrity. Civic education (a slightly broader concept than political education) in fact contains patterns of political education. Thus, the development of political education depends on the success of civic education programs, and vice versa. We will develop this understanding further in this publication.
The relationship between modern education and politics requires new articulations. Education is a practice that reconfigures the relationship between the individual and the public. Political education, among other things, forms a discourse that leads to either social conflict or consensus. Modern approaches to political education should ensure maximum use of the educational potential of the society for the development of political processes, civil society, democratic development of all social structures.
Methodology
The study has a broadly interdisciplinary character, which is reflected in the choice of a broad set of methodological tools. The theoretical and methodological basis of the study is a set of philosophical, general scientific and specific political science research methods that provide a systemic analysis of the phenomenon of political education in the context of a democracy. In the course of the research, a content analysis of current domestic and foreign publications on the topic of political education was carried out. The structural and functional approach was used for the comprehension of political education as a system, process and result. The study also used a historical method, which made it possible to analyze in retrospect the background of the formation of the democratic political culture of Ukrainian youth and the impact of political education on this process. A substantive analysis of the concept of "political education" in classical and modern theories of politics was accomplished. The findings of the study are substantiated by empirical data, in particular the results of mass sociological surveys in Ukraine conducted by reputable international organizations and projects on the development of civic education and/or education for a democratic society.
Results
Two interconnected concepts, civic education and political education, appear in the research devoted to this question.
Civic education is a complex of teaching and educational work combining elements of political, economic, legal and ethical education. The purpose of these teaching and educational activities is the formation of a citizen who, guided by the existing legislative framework, is capable of protecting his or her rights and freedoms and of supporting political forces that can genuinely represent his or her interests at the level of local and supreme bodies of government (Ivanov, 2003).
The formation of public consciousness includes political education as a component of civic education: the preparation of young citizens for life in the conditions of the modern democratic state (Zhadan et al., 2004). A sense of responsibility, duty and patriotism (a feeling of solidarity and participation in the historical fate of the homeland and its people), an understanding of oneself as a full member of the social community and a citizen of the country, a maturity of political and legal consciousness, and respect for rights and duties lie at the base of civic consciousness. Compliance with the law, upholding one's own rights, and satisfying social needs and interests by democratic methods have to be the basic practical skills of the citizen. Political education is directed at forming these civic qualities in the individual.
As a result, the establishment of a citizen's knowledge, duties and rights, as well as of the constitutional and legal processes of direct and representative democracy (including the principles of suffrage, political assembly, referendums and demonstrations, among others), embodies the main aspect of political education. The primary goal of political education is to use critical political thinking and the proper evaluation of political events to foster a rational selection of political stances and positions.
Political education incorporates many stages. These educational phases include personal familiarization of political figures and symbols of the state, as well as the recognition of behavioral norms in the political process. Also incorporated is a thorough study of the structure and function of political institutes, analyzing the concept of political value and the application of rational thinking to the establishment of policy. (Iskhakova, 2018a;2018b).
The child develops gradually in the course of education, and important psychological changes take place in intellectual development during the school years: a transition from empirical to theoretical knowledge, and from the direct sensory perception of political reality to generalized and abstract concepts. A deeper level of assimilation of political and legal knowledge occurs in higher educational institutions.
The purpose of introducing a system of political education at the university is to form in students a system of basic knowledge, to develop their command of the theoretical and practical foundations of politics as a public phenomenon, and to assist in the formation of students' civic consciousness.
The task of studying the corresponding disciplines is to give students knowledge of the features of politics as a public phenomenon and of the structure and typology of the political system of society. An image of the state as the main political institution of society has to be formed in the consciousness of young citizens. Despite the rapidity and sharpness of political processes in modern Ukraine, educated youth should have firm knowledge of the main kinds of political regime and of the stages of the electoral process in a democratic country. Students should understand the nature of political parties and interest groups in society, as well as the role of the individual in the political process, in order to grasp the ways of realizing their own interests in politics. Firm knowledge of the main spheres of Ukraine's foreign policy activity will give young people the chance to understand their country's development strategy on the international scene.
Political education has to "arm" young citizens with knowledge of a wide range of modern political phenomena and processes in their own country and the world: the origin, nature and social conditionality of politics; the specifics of political power, political relations and political processes, in particular in modern Ukraine; the structure and typology of the political system of society and the main forms of the state as its main institution; the main directions of state policy and the system of executive power in Ukraine; the parameters for analyzing a political regime, its varieties and the political principles of modern democracy; the features of elections as a form of direct democracy in Ukraine, the varieties of electoral systems and the stages of the electoral process in modern Ukraine; the types of political parties and party systems, and the functions of interest groups in politics; the types of political behavior and forms of political participation; and the nature and main directions of world politics, international relations and the global problems of the present. The system of political education and upbringing is an effective and specific institution of primary political socialization, providing new generations with entry into the difficult world of politics. Its content is the organic inclusion of the individual in the complex system of political relations and institutions through the formation of an everyday idea of the state and power in society, the formation of models of political behavior and attitudes toward future political roles, preparation for political life, and the establishment of the relation between norms and deviations in political consciousness and behavior. Therefore, the knowledge gained has to be supported by the following abilities of young citizens: to define their own position in the social structure of society; to analyze ways of realizing their own social needs and interests in the political sphere; to identify the form of government and state structure of modern Ukraine and the countries of the world; to know the powers of the supreme bodies of state power of Ukraine (being guided by the Constitution of Ukraine); to substantiate the principles of the constitutional state and to assess the level of realization of the constitutional rights and freedoms of the person and the citizen in Ukraine; to take conscious part in the electoral process of Ukraine and to apply the norms of electoral law during elections; to analyze the structure of civil society in Ukraine and to form their own political position on the basis of awareness of their own needs and interests; and to resist manipulative influences on the political consciousness of citizens. American political scientists have presented a fundamental study of the trends in the development of political science in the 21st century, conducted within the American Political Science Association (2011).
APSA Teaching and Learning Conferences consistently highlight the crucial requirement to associate political science with actual real-world events. The implication of this thinking is that political scientists should more extensively incorporate current events into their educational presentations. Examples of courses that could easily be made more relevant to students by tailoring them to include current events are those that focus on civic engagement, international issues and policy development. The inclusion of current events within the structure of classes increases the chances that students and citizens remain engaged in the learning process.
Numerous studies exemplify the crucial role that political scientists contribute to the development of a politically engaged population. One reason examined by the summaries is that U.S. society is undergoing a vast change in demographics while at the same time an increasingly connected world results in the events of other nations directly influencing students in the United States. If our political discipline successfully incorporates current events and contemporary issues, we have a much greater ability to aid our students with a greater comprehension of a diverse world domestically as well as internationally (American Political Science Association, 2011).
Students' self-monitoring and self-assessment of their own activity are formed in the course of political education and will become an important quality of the person in the future. The foundations of independence, responsibility and initiative in life lie in a person's internal reflexive processes. Reflection matters for personal development in that it gives an idea of the purposes, content and means of one's own activity and allows one to treat activity, including public activity, critically.
In the course of university training, students need to be able to reflect on their own actions, and this presupposes a whole complex of abilities.
-The ability to exercise control over one's own intellectual and practical actions.
-The ability to control the logic of developing one's own opinions (judgments).
-To define the sequence and hierarchy of stages of activity.
-The ability to take a dialectical approach to the analysis of a situation and to adopt the positions of various "observers".
-The ability to explain the political processes of the modern world and to analyze them in relation to the interests of one's own state.
Reflexive processes are an obligatory component of students' educational activity; therefore, reflexive abilities need to be formed in them purposefully (Novikov, 2005).
In its fundamental research, the American Political Science Association (2011) has proposed modernizing political science in the following directions.
Through fundamental research, the American Political Science Association (2011) has proposed several ways to enhance the modernization of political science. To begin, a higher emphasis on open dialogue about critical world issues amongst people of diverse cultures and backgrounds should be sought to help enhance a move to internationalization in order to better align political science with other disciplines. The net effect of this process will be to give our students a greater awareness of the complexity and unique nature of foreign political systems and cultures and to ease cross-border communication. This in turn will serve to create a bridge between local and global views and move away from a solely "Westernized" global view. Additionally, our students should be given guidance on the application of classroom theories to real-life contemporary situations. This process could include removing students from familiar surroundings and providing alternate perspectives from all standpoints to include issues like racism, prejudice and nationalism. At a practical level, international internships would function to intertwine academic theory with real-world experience. Finally, the aforementioned concepts should be promoted while taking care not to overuse or misuse applicable terminology. Words such as "tolerance" or "multi-cultural,' for example, tend to be so over-circulated in contemporary media that they almost lose their meaning entirely and become nothing more than buzzwords.
Besides, the acquisition of experience of political activity has to be supported by the institutions of civil society, in particular youth organizations, since information is transformed into political knowledge through its relation to the personal experience of the young person. A system of public youth associations has been created in Ukraine, and their number is constantly growing. Despite this, most of them remain little known to young people and are not regarded as authoritative or prestigious. Stimulating independent, organized participation in politics therefore remains a challenge for youth associations, since it is through activity that the political consciousness and culture of the young person are formed.
The effective functioning of youth formations and associations is important, as it is there that norms and rules of behavior are formed and duties and responsibilities are established. The acquisition of practical political experience and of the skills of democratic activity by youth has to be ensured by consolidating the efforts of youth organizations.
In this regard, the report of the Task Force on Democracy, Economic Security, and Social Justice in a Volatile World prepared by the American Political Science Association (2012) is of particular interest.
This report has put forth the idea that the different strategies used to improve democracy and to enhance economic security and social justice are closely interwoven in the political landscape. Democracy cannot exist without effective citizenship, of which economic and political citizenship are essential components. Preconditions for economic citizenship include a guarantee of economic rights that ensure public services, as well as a public means to finance these services while overall serving to moderate economic inequality. To attain political citizenship, it is not only imperative to provide guaranteed political and civil rights, but also to achieve legitimate and accountable participation of citizens in the governing process, where government responds directly to the will of the people. Hence, the common thread woven throughout the foundation of democracy is the protection and preservation of fundamental rights, which is, of course, a central component of democracy in general.
In our case, the discussion of the underlying importance of "rights" as a foundation of democracy is no mere coincidence. Human rights-based approaches (HRBA), participatory governance (PG) and economic citizenship have been highlighted to underscore their interwoven nature and to identify new potential areas of research. In a world that is becoming increasingly volatile, the future of democracy is perhaps growing somewhat ambiguous. Serious issues ranging from inequality and poverty to economic mismanagement can all be considered threats to the very future of democracy as a practical and effective form of government. It is essential that the interconnections and innovations of democratic societies be openly shared and discussed across cultures and borders to ensure the health of the democratic process.
We have definitively identified human rights to be a crucial component to the universal foundation of democratic government, therefore it has become essential to gain a more thorough understanding of how these rights can be established and protected. This process can be achieved through a variety of methodologies with the goal of enhancing economic security as well as social justice. It has been our goal to highlight these innovative approaches as well as to examine how the different concepts are interrelated. Additionally, we seek to stimulate further research into the innovations we have discussed with the desired objective of better enabling our discipline to practically utilize these concepts to broaden our field of study (American Political Science Association, 2012).
The creation of conditions for personal development as a citizen of Ukraine is the main goal of the Ukrainian education system (both civic and political). It presupposes the formation in children and youth of a democratic outlook and political culture, and of an active civic and professional position. In addition, fostering respect for the language, culture and history of the peoples living in Ukraine and forming a culture of interethnic relations is important. This will become a basis for the consolidation of the Ukrainian people into a Ukrainian political nation, understood as the totality of the country's citizens who, irrespective of their social-group distinctions, have not only equal rights and duties but also a common political culture.
Ukrainian youth must have feelings of solidarity and patriotism instilled within them, as well as knowledge of what a normal political process involves, in order to embark on the path of self-government and the realization of a sovereign system. A primary characteristic of political socialization is the establishment of a national identity in young citizens and a desire to belong to their country. If the value and normative system of society is weakened, political socialization is not successful, and the destructive potential accumulated in society is reproduced.
The importance of a uniform system of political education is underscored by the fact that political socialization in Ukrainian society takes place under different social, economic and sociocultural circumstances and proceeds differently for children belonging to different social groups and communities. The situation is further complicated by the fact that modern institutions of education and upbringing in Ukraine are in a state of constant reform.
In investigating which values are key for Ukrainians, it should be noted that the National Institute for Strategic Studies conducted a sociological survey on this subject. The methodological tools of the research were described in the analytical work of the Ukrainian researchers Chupriy and Gai-Nizhnyk (2014). The main values are presented in Table 1 (source: Chupriy & Gai-Nizhnyk, 2014). The main typical traits of Ukrainians are presented in studies of the values of the different peoples of Europe carried out by European sociological institutes (European Social Survey, n.d.). These studies show that the main typical traits of Ukrainians are concern for one's own safety, an inability to make decisions independently, an aspiration to self-affirmation (status, wealth, power), a wary attitude toward change, and a weakly expressed aspiration to enjoy life (European Social Survey, n.d.). Analysts note that the majority of these traits arose from a lack of confidence in the future. Researchers also note that the typical Ukrainian differs radically from the typical inhabitant of most European countries with long-standing democracy, a stable economy and a low index of corruption of governing bodies, for example the Scandinavian states of Denmark and Sweden and the countries of the northwest of the continent, such as Holland, Belgium, France and Germany (Chupriy & Gai-Nizhnyk, 2014).
The staff of the Institute of Sociology of the National Academy of Sciences of Ukraine actively research values. Ruchka (2010), investigating the value priorities of the population of Ukraine, identifies five value syndromes. The first covers vital values: health (4.74 points on a 5-point scale), family (4.72), children (4.67), welfare (4.67). The second covers social values: the creation of various opportunities for all in society, a favorable moral and psychological climate, social equality (4.06). The third covers traditionalist values: national and cultural revival, participation in religious life (3.47). The fourth covers self-realization values: interesting work, public recognition, increase in educational level, broadening of cultural horizons (3.70). The fifth covers political and civil values: the state independence of the country, democratic development of the country, participation in the activity of political parties and public organizations (3.51). Thus, vital and social values are the priority according to the Institute of Sociology of the National Academy of Sciences of Ukraine. The results of this research are shown in Table 2 (Gorbulin & Kachinsky, 2009; Ruchka, 2010). According to the above-mentioned researchers, the further development and existence of the state and the nation should proceed from its value core, which includes spiritual heritage, welfare, the system of international relations, social justice, patriotism, and national security, which consolidates society.
For this reason, the problems of political and civic education in Ukraine demand further detailed study. In particular, special attention is required by the question of including political knowledge in educational programs and developing manuals on political education that correspond to the requirements of a democratic society and promote the formation of civic knowledge and skills.
This process can be aided by recommendations provided by the American Political Science Association. Information garnered from the APSA's Teaching and Learning conferences can be employed to produce new educational techniques that incorporate more focus on current events and greater inclusiveness in curriculum. The political science discipline can be at the forefront of an effort to spearhead an investigation of issues associated with vast and complex demographical and political changes in the United States and numerous other countries. One of the core missions of political science is the analysis of the victors and losers in the political arena. As such, political science is ideally positioned to aid citizens in recognizing the direct consequences of their own choices as well as that of their government. A political science classroom is the ideal venue from which to communicate information that can truly enable citizens to empower themselves to make changes that will directly influence their futures (American Political Science Association, 2011).
Discussion
Due to the significance of the aforementioned dialogue, it becomes necessary to evaluate the controversy induced by the use of the terms "civic (citizenship) education" and "political education" in the European context. It is noteworthy that citizenship education in developed democracies is a compulsory school subject that has been taught for several years (in France, throughout all 12 years of schooling; in Spain, for 9). The Concept of Citizenship Education Development in Ukraine (Protasova & Poltorak, 2018) provides for an integrated (embedded in the content of several disciplines) and cross-sectoral approach to the implementation of citizenship education at the higher education level. The question of what the main category of civic education is, and what its content is with regard to the development of citizens' political culture, remains open.
Civic education, as noted by Napiontek (2013), was referred to as political education at the beginning of the 90s, since the field of knowledge it covered was politics, participation in elections, citizens' awareness of the activities of supreme authorities. However, the modern development of democracy significantly expands the scope of civic education because, in countries with a culture of participation, civic education aims to create an active citizen who would take part in policy-making on his or her own (Napiontek, 2013).
According to other scholars, the term "political education" was widely used in communist countries before 1989, and is therefore controversial today because it is associated with political propaganda. The Council of Europe was one of the first on the continent to feel the urgency of the problem and to launch the Education for Democratic Citizenship programme (Butt-Pośnik, Butt-Pośnik & Widmaier, 2013). In Germany, for example, the scientific lexicon does not contain the term "civic education" at all, but instead uses the term "political education" (Germ. Politische Bildung) (Butt-Pośnik, Butt-Pośnik & Widmaier, 2013, p. 206).
The importance of training educators in civic education was highlighted by the British researchers White (2006) and Volpp (2007). Other British researchers, Bergamini (2018) and Carter (2016), insisted on compulsory civic education. In his work, Carter (2016) writes about the lack of serious perception of civic education and the disregard for political education. Turkey has witnessed a sharp rise in the number of studies relating to citizenship and political education (Kuş & Tarhan, 2016).
Returning to Ukraine, civic education here is a relatively new sphere of scientific and public interest. The current state of affairs generates uncertainty and a lack of a settled position regarding the place and role of civic education in the system of formal and non-formal education. Nevertheless, there is a rapid increase in the number of studies concerning citizenship education (Bondar & Ishakova, 2015; Semenets-Orlova, 2018; Semenets-Orlova & Kyselova, 2018; Trapeznikova, 2016).
The Ukrainian scholar Ivanov (2003) makes a clear distinction between civic and political education, bringing politology (political science) into the discussion. In his opinion, these terms cannot be equated, although he does not deny that they have much in common. If we consider political and civic education, their common feature lies at the political and functional level, when the entire resource of both systems is mobilized to solve certain tasks of society. But at the level of knowledge of the political world, civic and political education have very different objectives. As for political education and politology, Ivanov (2003) defines the first as "a system of transferring knowledge about political life by means of all social institutions that disseminate knowledge about it to a greater or lesser degree" (p. 43), and the second as "a system of professional education of specialists in the field of political process" (p. 43).
The need for advanced citizenship is growing with the strengthening of Ukrainian statehood. The processes of decentralization, the self-improvement of politics, a new quality of democracy, and a new social reality (the need to cooperate effectively in communities for the collective good) require a new model of patriotism for modern Ukraine. A living culture requires the creation of new values, although all of them should be discussed according to the criterion of respect for human dignity (Teaching and learning for a sustainable future, n./d.). For example, the countries that lead in the academic performance of young people are therefore reconfiguring their educational systems toward a value-based approach, aware of the growing demand for a value core in the individual for peaceful coexistence in a complex world. An important characteristic of the outlook of people who have devoted themselves entirely to social activity is social service, which is associated with a sense of duty towards others. Trust and belief in justice involve the voluntary commitment of members of society to exercise public functions.
New educational programs in the field of citizenship education in democratic countries are based on the theory of the well-known researchers Almond and Verba (1989), who highlighted the following signs of civil culture: a sense of pride in one's nation, an expectation of fair treatment of society on the part of the authorities, a tolerant attitude toward opposition parties, active participation of the community in local self-government, confidence in one's ability to participate in politics, civil cooperation and trust, and citizens' membership in autonomous associations. In this context, the scientists' view of the meaning of the term "civil culture" can be described as "enlightened patriotism".
The well-known philosopher Habermas (1996) argues that the normative meaning of democratic citizenship can be determined without the formation of an individual in the context of a "national state". The theorist Starkey (2002) takes a similar position, claiming that the concept of "citizenship" always has a political and legal dimension. Although citizenship is in some way linked to a national concept, it is an autonomous and independent theory. In this context, Starkey (2002) observes that in the new concepts, citizenship also exists at supranational levels. Unlike Starkey (2002), the Irish researcher Craith (2004) argues that although the basis of modern citizenship is the focus on civil responsibility, it is cultural forces (the value attitude of the individual to the state, the country, and its citizens) that implicitly bind together the components of modern citizenship. Our position correlates with the Irish researcher's point of view.
Today, the link between political and civic education and the term patriotism is widely discussed. The Stanford Philosophical Encyclopedia gives the standard definition of "patriotism" as follows: it is love for one's country (The Stanford Encyclopedia of Philosophy Archive, 2017). Other scholars point out that patriotism must be understood as a commitment and a sense of belonging to one's country (Françoise, 2013). Its meaning is usually related to its role in supporting national cohesion on behalf of the state to the extent that the state encourages members of society to respect their civil responsibilities. The consideration of this question varies greatly from one context to another, so theorists suggest talking about "patriotism" in plural, stating the diversity of its manifestations (Françoise, 2013).
Patriotism as the main category of political education can be defined as a system of views (cultural, consciousness attitudes) that reflect the inflexible attachment of a person to a particular country, characterized by an indisputable positive assessment of that country, persistent loyalty and intolerance to critics. In Western political philosophy, there is a debate about the type of patriotism that can provide an effective alternative to nationalism, as a meta idea for a stable statehood (The Stanford Encyclopedia of Philosophy Archive, 2017).
In this respect, a meaningful assessment of morality as a universal regulator of the world of the future is important, as substantiated, in particular, by Taleb (2014), the author of the "Black Swan" theory. Methodologically valuable is the theorist Acton's (1949) view of the distinction between the notions of "nationalism" and "patriotism" (as main categories of political education): "patriotism, unlike nationalism, is the awareness, first of all, of our moral responsibilities to the socio-political community" (p. 163).
The results of sociological research (based on data for 2018) indicate a low level of awareness among Ukrainian youth of political issues (Figures 1-4). The results of the study showed that pupils have a vague idea of their civil identity. Thus, 49% believe that the Ukrainian people are citizens of Ukraine of all nationalities. At the same time, 20.5% consider citizens to be only Ukrainians by nationality, and 22.3% all those who adhere to Ukrainian national customs and traditions. Only 38.1% of pupils know that the source of power in Ukraine is the people. At the same time, 28% erroneously believe that it is the President of Ukraine, and 21.5% that it is the Verkhovna Rada of Ukraine (Protasova & Poltorak, 2018).
Following the above study, at the heart of modern citizenship is the focus on civil responsibility. However, the value attitude of the individual towards the state, the country and its citizens contributes to the consolidation of society and the strengthening of the components of modern citizenship. Today it is a positive phenomenon that the Ukrainian state has standardized the need for the development of citizenship competencies (documented in the law).
To succeed in modern society, it is not enough for an individual to be a narrow specialist in a particular field. A developed democracy implies that all members of society, whatever their professional daily activities, must have the necessary knowledge in the field of democratic citizenship.
Conclusions
According to the above-mentioned researchers, the further development and existence of the state and the nation should proceed from its value core, which includes spiritual heritage, welfare, the system of international relations, social justice, patriotism, and national security, which consolidates society. The further introduction of state programs of civil and political education in educational institutions is the leading direction for harmonizing the political socialization of youth in Ukrainian society. These programs have to include the formation of civil identity and of feelings of solidarity and patriotism in children and youth. The educational component of political socialization has to be complemented by practical experience. The acquisition of skills of political activity by youth can be ensured by consolidating the efforts of youth organizations and political parties.
According to the authors of the article, civic education should be implemented by civil society institutions, with the state playing the leading role in the educational process. The state, as the main institution of political education, provides a single educational space by creating a system of education and upbringing. This system ensures the entry of new generations into the complicated world of politics. Its content is the organic inclusion of the individual in the complex system of political relations and institutions through the formation of models of political behavior, the presentation of political roles, and preparation for political life. Unlike civic education, political education is based on the application of advances in Political Science and has a phased nature. Therefore, among the main tasks of the educational process is the assimilation of political and legal knowledge.
The reorientation of society toward new democratic values has already begun. Today, a number of concepts of civic education have been developed in Ukraine, and the Ministry of Education and Science of Ukraine and the Academy of Pedagogical Sciences of Ukraine take part in this process. Civic education should be implemented in the content of the entire education system, as stated in the National Doctrine on the Development of Education in Ukraine.
In addition, civic education should be implemented by civil society institutions, in particular youth organizations, because the transformation of information into political knowledge depends on its correlation with the personal experience of a young person. A number of effective steps have already been taken in Ukraine in this direction: systems of public youth associations have been created. In spite of this, most of them remain small in membership, little known to young people, and neither authoritative nor prestigious for them. This is despite the fact that the task of youth associations is to stimulate young people's initiative to engage actively and independently in political participation. After all, it is through activity that the political consciousness and culture of a young person are formed.
Support for youth initiative and for young people's desire to participate in society and the state in an organized way, toward the independent realization of their needs and interests, should be a priority in the activity of state bodies. What is important is the effective functioning of youth organizations and associations in which governing rules, rules of behavior, obligations, and responsibilities are established. By consolidating the efforts of youth organizations, it is necessary to ensure that young people acquire practical political experience and skills of political activity.
The primary focus of Ukrainian civil and political education is the establishment of conditions that permit individuals to develop into Ukrainian citizens. The result of these conditions should include the formation of a democratic outlook and political culture in the youth, an active civil and professional position, the preservation and continuation of the cultural and historical tradition, the upbringing of respect for state symbols and institutions as well as for the languages, cultures, and histories of the peoples living in Ukraine, the formation of a culture of interethnic relations, and an orientation towards the consolidation of the Ukrainian people into a single political nation as a community of citizens of the country. These Ukrainian citizens should, regardless of their socio-group differences, possess not only equal rights and responsibilities, but also a common political culture based on a sense of solidarity and patriotism. Furthermore, they should engage in democratic governance based upon constitutionally defined procedures and processes to ultimately achieve a state of political sovereignty. | 2020-03-19T10:52:52.187Z | 2020-03-10T00:00:00.000 | {
"year": 2020,
"sha1": "8df169cb62ea9ee5f881ae9d57baf9f948f27ee0",
"oa_license": "CCBYNC",
"oa_url": "https://www.richtmann.org/journal/index.php/ajis/article/download/10712/10333",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3c244056aee108fd09898d2741ddfd26d33e49b0",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
5678591 | pes2o/s2orc | v3-fos-license | Tuberculous Lymphadenitis at Penang General Hospital, Malaysia
Objective: To evaluate the incidence, treatment and clinical outcomes of tuberculous (TB) lymphadenitis at Penang General Hospital, Malaysia. Materials and Methods: Penang General Hospital is the referral center for all tuberculosis patients in the state of Penang. Patient records were reviewed to identify patients with a confirmed diagnosis of TB lymphadenitis between January 2006 and December 2008. Data were analyzed using SPSS version 15. Results: Of 1,548 tuberculosis cases, 109 (7.0%) patients had TB lymphadenitis. The mean age was 36.4 ± 12.87 years, and of the 109 patients with TB lymphadenitis, 35 (33.0%), 37 (34.0%) and 36 (33.0%) were diagnosed in 2006, 2007 and 2008, respectively. Ethnically, 45 (41.3%) were Malay, followed by 37 (33.9%) Chinese. Among risk factors for TB lymphadenitis, HIV and diabetes mellitus were seen in 17 (15.6%) and 11 (10.0%) patients, respectively. Cough and fever were the most frequently reported symptoms. In a majority of cases (n = 90, 82.5%) positive results were obtained by fine-needle aspiration (FNA). Directly observed therapy was given to all patients. Sixty-two (56.9%) patients were successfully treated, and 5 (4.6%) patients died during treatment. Conclusion: There was no increase in the incidence of TB lymphadenitis over the 3-year study period. The incidence was slightly higher in males than in females and in the Malay ethnic group. Diabetes mellitus and HIV were the most commonly reported risk factors. FNA is the most reliable diagnostic test.
Introduction
Tuberculosis, a deadly bacterial infection, can spread to other body tissues and organs through the blood stream and the lymphatic system [1] . With the global increase in the incidence of human immunodeficiency virus (HIV) there has been a steady increase in extrapulmonary tuberculosis as exemplified in the United States [2] and Malaysia [3] , where 21% of extrapulmonary tuberculosis cases were associated with HIV infection [3] . Tuberculous (TB) lymphadenitis also known as scrofula (King's Evil) [4] was first described 3,000 years ago and is one of the common forms of extrapulmonary tuberculosis. In areas where tuberculosis is endemic, TB adenitis is a common cause of lymphadenopathy [5] . The cervix is the most common site of TB adenitis, while other sites include intrathoracic, intra-abdominal, and occasionally, axillary, inguinal and intramammary [6,7] . TB lymphadenitis comprises 30-50% of extrapulmonary tuberculosis cases in the US [5] , and 33.5% in Malaysia [8] . A wide excision and prolonged antituberculosis therapy were the only available treatment options in the 20th century [9] , but now a short course of rifampicin has been used successfully [10] .
Malaysia, a multicultural and multiethnic country, has a population of slightly more than 26 million [11] and an area of approximately 330,000 km². In Malaysia, tuberculosis is among the top five communicable diseases, with an incidence rate of 62.56 per 100,000 and a mortality rate of 5.37 per 100,000 [11]. The state of Penang is the 8th most populous state in Malaysia and among the top five states with the highest tuberculosis burden [12]. Pulmonary tuberculosis has been the focus of research for a number of years [13]. However, only a handful of studies [14] have been reported on extrapulmonary tuberculosis, especially TB lymphadenitis. The current study was designed to expand our current knowledge and to gather baseline data on the incidence, diagnosis, complications, management and treatment outcomes of tuberculous lymphadenitis in a teaching hospital.
Subjects and Methods
The study protocol was approved by the Clinical Research Center, Penang General Hospital and Ministry of Health, Malaysia. Informed consent was taken from the subjects before performing needle biopsy test.
Study Location
Penang is one of the 13 states and is geographically situated in northern Malaysia. Penang is a multicultural state comprising Malay (42.5%), Chinese (46.5%), Indian (10.6%) and other minorities (0.4%), with an estimated population of 1.5 million [12] .
Subjects
A total of 1,548 cases were registered for tuberculosis treatment from 1st January 2006 to December 2008 at the Respiratory Clinic of Penang GH. This center is a tertiary level reference center for respiratory diseases in the state of Penang, Malaysia. Any person with a respiratory problem in the state of Penang can attend this center without a physician referral.
All patients with a confirmed diagnosis of TB lymphadenitis were included in the study. For those who had completed treatment (January 2006 to May 2007), data were obtained from medical records, while for those undergoing treatment (June 2007 to December 2008) data were collected during the course of treatment. From each medical case file, the patient history, physical findings, chest radiographs and laboratory investigations were reviewed in order to obtain maximum information about the type and severity of TB. In addition, demographic factors, lifestyle (smoking habit and alcohol use) and clinical characteristics were recorded. The clinical characteristics recorded included comorbid medical conditions such as diabetes mellitus and HIV, medications used for therapy, and therapeutic outcomes.
Categories of Patients for Registration on Diagnosis
New patient: a patient who has never had treatment for tuberculosis or has taken antituberculosis drugs for less than 1 month; relapse: a patient who was previously treated for tuberculosis and was declared cured or treatment completed, and later diagnosed with bacteriologically positive (smear or culture) tuberculosis; failure: a patient who, while on treatment, had positive sputum smear at 5 months or later during the course of treatment; return after default: a patient who returned to treatment with positive bacteriology, following interruption of treatment for 2 months or more; transfer in: a patient who was transferred from another tuberculosis registry to continue treatment.
Diagnosis
Diagnosis of tuberculosis and extrapulmonary tuberculosis was based on the World Health Organization definitions [15,16]. For tuberculosis, the patient was either bacteriologically confirmed or diagnosed by a clinician. For extrapulmonary tuberculosis (e.g., of the pleura, lymph nodes, abdomen, genitourinary tract, skin, joints and bones, or meninges), diagnosis was based on one culture-positive specimen, or histological or strong clinical evidence consistent with active extrapulmonary tuberculosis, followed by a decision by a clinician to treat with a full course of antituberculosis chemotherapy.
The diagnosis of TB lymphadenitis was based on fine-needle aspiration (FNA) biopsy. The diagnosis was also supported by tuberculin skin testing and sputum culture for acid-fast bacilli. Furthermore, to confirm the diagnoses of HIV-positive patients and those with comorbid diabetes mellitus, records from the medical, infectious and venereal diseases clinics were traced and reviewed.
Treatment Outcome
A patient who was sputum smear-negative in the last month of treatment and on at least one previous occasion was assumed to be cured. A patient who was sputum smear-positive at 5 months or later during treatment was categorized as treatment failure. A patient who had completed treatment but who did not meet the criteria to be classified as cured or failure was categorized as treatment completed. A patient who died for any reason during the course of treatment was categorized as dead. A patient whose treatment was interrupted for 2 consecutive months or more was categorized as defaulter. A patient who was transferred to another recording and reporting unit and for whom the treatment outcome was not known was categorized as transferred out. As a whole the sum of patients cured and those who completed treatment was categorized as treatment success.
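To make the outcome definitions above concrete, the following is a minimal sketch in Python of how a patient record could be mapped to these outcome categories. The record field names (`died`, `transferred_out`, `interrupted_months`, and so on) are hypothetical and chosen only for illustration; they are not taken from the study's dataset.

```python
def classify_outcome(p):
    """Map a patient record (dict) to a treatment outcome category.

    Fields are illustrative placeholders, not the study's actual variables:
      died, transferred_out, interrupted_months,
      smear_positive_at_5_months_or_later, completed_treatment,
      last_smear_negative, earlier_smear_negative
    """
    if p.get("died"):
        return "dead"
    if p.get("transferred_out"):
        return "transferred out"
    if p.get("interrupted_months", 0) >= 2:
        return "defaulter"
    if p.get("smear_positive_at_5_months_or_later"):
        return "treatment failure"
    if p.get("completed_treatment"):
        # "Cured" requires a negative smear in the last month and on at
        # least one previous occasion; otherwise the patient is
        # "treatment completed".
        if p.get("last_smear_negative") and p.get("earlier_smear_negative"):
            return "cured"
        return "treatment completed"
    return "still on treatment"


def treatment_success(patients):
    """Treatment success = number of patients cured or treatment completed."""
    return sum(
        classify_outcome(p) in ("cured", "treatment completed") for p in patients
    )
```

Under these definitions, the study's reported treatment success corresponds to the sum of the "cured" and "treatment completed" groups.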
Data Analysis
The data were analyzed using the statistical software SPSS version 15. Quantitative variables are expressed as mean (± SD) and range, while qualitative variables are summarized by frequency and percentage. Furthermore, non-parametric statistics (i.e., the χ² test and Fisher's exact test) were used to assess associations among the variables. p values less than 0.05 were considered significant.
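As an illustration of the kind of 2 × 2 association test described above, the sketch below reruns a chi-square and Fisher's exact test in Python on a contingency table reconstructed from counts reported later in the results (14 of the 17 HIV-positive patients were male, against 58 males and 51 females overall). Because the table is partly inferred from the published totals, the resulting p-values are illustrative and need not match the published value exactly.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Rows: HIV-positive, HIV-negative; columns: male, female.
# Counts inferred from the reported totals (17 HIV-positive, 14 of them male;
# 58 males and 51 females among the 109 TB lymphadenitis patients).
table = [[14, 3],
         [58 - 14, 51 - 3]]

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square = {chi2:.2f}, p = {p_chi2:.3f} (dof = {dof})")
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
```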
Patient Demographics
The demographics of the patients are given in Table 1. Of the 1,548 patients, 109 (7.04%) had TB lymphadenitis, of whom 58 (53.2%) were males and 51 (46.8%) females. The mean age of patients with TB lymphadenitis was 36.4 ± 12.87 years (range 7-72). The incidence of TB lymphadenitis was highest in the age group of 21-30 years (29.4%), and the difference was statistically significant (p = 0.007). Moreover, a high prevalence of TB lymphadenitis was observed among Malays (41.3%), followed by Chinese (33.9%), then Indians and others.
The most common comorbid condition reported in the present study was HIV (n = 17, 15.6%). Of these, 14 (82.35%) were males. Within the HIV-infected group of patients, the incidence of TB lymphadenitis was significantly higher among male patients (p = 0.006).
Clinical Symptoms
A majority of patients (n = 83, 76.15%) reported cough, 80 (73.39%) reported fever, and 58 (53.21%) reported night sweats. However, loss of weight and loss of appetite were reported significantly more often by female than by male patients. Details of the reported symptoms are listed in Table 2.
Management and Outcomes
All patients received directly observed therapy for a minimum of 9 months (range 9-14 months). 2EHRZ + 6H2R2 was the therapeutic combination used most frequently (n = 22, 20.2%). Details of the therapeutic combinations used are given in Table 4. Based on the clinical outcome, the duration of the intensive phase was increased to 3 months in 29 (26.6%) patients. Sixty-two (56.9%) of the patients were successfully treated. In 9 (8.2%) patients treatment failed, and 5 (4.6%) patients died during the course of therapy.
Discussion
TB lymphadenitis is the most common form of extrapulmonary tuberculosis especially among young adult males [7] . Tuberculosis is responsible for 30-52% of diseases causing lymphadenopathy in developing countries, whereas in developed countries it is only 1.6% [17] .
In terms of age, the incidence of TB lymphadenitis was significantly higher in the age group 21-30 years, followed by 51-60, 31-40, and 41-50 years, confirming a previous report [18]. The reasons for this high incidence in the 21-30 years age group may include weakening of the immune system due to smoking, which may indirectly increase susceptibility to opportunistic infections [19,20], and environmental pollution [18,21]. However, the findings of the current study contradict the findings reported by Koffi et al. [19] and Gajalakshmi [20], because a majority (19.3%) of the patients in this age group were nonsmokers.
Concerning ethnicity, a high incidence of TB lymphadenitis was observed among Malays (41.3%); these findings contradict those of the previous study that had reported a higher incidence among Chinese [22] . Overall, a high risk of TB lymphadenitis was observed among the patients with HIV/AIDS (15.6%), thereby confirming the findings of Polesky et al. [23] and Shafer et al. [3] , who had reported HIV/AIDS to be the most common underlying condition. The majority of patients in our study presented with typical symptoms and signs, which were similar to the findings reported in other studies [24,25] . However, there are other reports that suggest atypical symptoms and signs to be more common among TB lymphadenitis patients [23] .
In agreement with the findings of other studies [23,26] , FNA of the lymph nodes was the most consistent method to identify the bacteriologic agent responsible for lymphadenopathy. In addition, tuberculin skin test was a basic tool in the diagnosis of tuberculosis infection among 72.5% of patients [27] . On the other hand, sputum culture was found to be the least reliable test for the diagnosis of TB lymphadenitis.
Conclusions
There was no increase in the incidence of TB lymphadenitis over the 3-year study period at Penang General Hospital. The incidence of TB lymphadenitis was slightly higher in males than in females and higher in Malays than in other ethnic groups. 2EHRZ (ethambutol, isoniazid, rifampicin, pyrazinamide) + 6H2R2 (isoniazid, rifampicin) combination therapy showed a better treatment outcome than the others. Diabetes mellitus and HIV were the most common risk factors. FNA was the most reliable diagnostic test. | 2018-04-03T06:23:16.166Z | 2010-12-01T00:00:00.000 | {
"year": 2010,
"sha1": "3ca5ab8d293d2e450211cb530aa7276d7e236db3",
"oa_license": null,
"oa_url": "https://www.karger.com/Article/Pdf/319764",
"oa_status": "GOLD",
"pdf_src": "Karger",
"pdf_hash": "3b5aa823544b5c8914c39223d19ca1986dba68cd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260921642 | pes2o/s2orc | v3-fos-license | Improved Optical Efficiency of 850-nm Infrared Light-Emitting Diode with Reflective Transparent Structure
This study investigated a reflective transparent structure to improve the optical efficiency of 850 nm infrared light-emitting diodes (IR-LEDs), by effectively enhancing the number of extracted photons emitted from the active region. The reflective transparent structure was fabricated by combining transparent epitaxial and reflective bonding structures. The transparent epitaxial structure was grown by the liquid-phase epitaxy method, which efficiently extracted photons emitted from the active area in IR-LEDs, both in the vertical and horizontal directions. Furthermore, a reflective bonding structure was fabricated using an omnidirectional reflector and a eutectic metal, which efficiently reflected the photons emitted downwards from the active area in an upward direction. To evaluate reflective transparent IR-LED efficiency, a conventional absorbing substrate infrared light-emitting diode (AS IR-LED) and a transparent substrate infrared light-emitting diode (TS IR-LED) were fabricated, and their characteristics were analyzed. Based on the power–current (L-I) evaluation results, the output power (212 mW) of the 850 nm IR-LED with the reflective transparent structure increased by 76% and 26%, relative to those of the AS IR-LED (121 mW) and TS IR-LED (169 mW), respectively. Furthermore, the reflective transparent structure possesses both transparent and reflective properties, as confirmed by photometric and radial theta measurements. Therefore, light photons emitted from the active area of the 850 nm IR-LED were efficiently extracted upward and sideways, because of the reflective transparent structure.
Introduction
Near-infrared light-emitting diodes (NIR-LEDs) are commonly utilized as emitters for photo-couplers, automobile sensors, and closed-circuit television [1,2]. In recent years, they have been applied on a wide scale, including in time-of-flight sensors, optical sensors used in wearable devices, small vehicles, and flying drones [3,4]. Smaller NIR-LEDs with higher output power at large injection currents are required for certain applications. The output power for NIR-LEDs can be increased by using multiple quantum wells (MQWs), window layers, distributed Bragg reflectors (DBRs), omnidirectional reflectors (ODRs), and current-spreading layers. MQWs are used to maximize the internal quantum efficiency of the active region in NIR-LEDs [5]. To improve the optical efficiency of NIR-LEDs with an absorbing substrate, a DBR must be used because it reflects the photons emitted from the active area in an upward direction [6]. To obtain improved reflectivity and thermal dissipation efficiency, a reflective single metal and eutectic metal have been used. These metals serve to upwardly reflect photons emitted downward from the active area or to dissipate the significant heat formed in the active area [7]. Additionally, photons absorbed by the top electrode can easily escape from the LED through thick sideway paths produced by using a current-spreading layer [8]. These studies emphasize that the optical efficiency must be improved by reflecting or moving photons emitted from the active areas in a light-emitting diode (LED). However, an alternative solution for a sharp decrease in chip size has not yet been proposed. Several studies indicate high size-dependent efficiency, in which smaller devices exhibit lower maximum efficiency, attributed to the degradation of electrical injection [9,10]. A sidewall passivation treatment and Si substrate were employed to overcome the size-drop effect for several microdevices. High light output power, high size-independent leakage current density, and low ideality factor were observed by employing sidewall treatments [11]. In smaller devices, the silicon substrate was more effective than the GaAs substrate, owing to the former's thermal dissipation effect [12]. However, conventional studies do not address the sudden decrease in the surface and sideway emission areas of these devices caused by a reduction in the chip size.
In this study, we focused on improving the optical path capability of the surface and side emissions, which sharply deteriorated because of the reduction in the chip size. Here, transparent and reflective structures are selected and investigated as candidates for solving the abovementioned problems because of their proven success in enhancing the output power of LEDs [13,14]. Owing to the transparent structure of the IR-LED, a significant number of light photons emitted from the active region may be induced to emit sideways. Conversely, the use of a reflective structure fabricated by the wafer-bonding process may effectively increase the number of light photons emitted upward from the active region.
Furthermore, exploiting the advantages of both approaches is an effective solution to improve the optical path capability of both surface and sideway emissions. Therefore, we verified the applicability of a combination of the transparent epitaxial and reflective bonding structures toward addressing the aforementioned problems.
The transparent epitaxial structure in the IR-LED was obtained using thick p- and n-AlGaAs layers, grown using the liquid-phase epitaxy (LPE) method. A reflective bonding structure was applied using a reflector/eutectic bonding process for an IR-LED with a transparent structure. As a result, a remarkably improved output power was observed for the IR-LED chip with this developed combined structure, in relation to those of other IR-LED chips. Therefore, this study verified that a combined structure is crucial for improving the output power of IR-LEDs.
Materials and Methods
Using the metalorganic chemical vapor deposition (MOCVD) or LPE method, epitaxial wafers with wavelengths of 850 nm were created to produce the developed samples. For the conventional 850 nm epitaxial wafer (LED A) fabricated via MOCVD, five pairs of MQWs, each with 5 nm thick GaAs wells and 12 nm thick Al0.05Ga0.95As barriers, were used as the active region. The n- and p-type confinement layers, made of n- and p-doped Al0.3Ga0.7As, respectively, were on each side of the active area. The n- and p-doped Al0.3Ga0.7As layers were doped at 2.5 × 10^18 atoms/cm^3 and 1.5 × 10^18 atoms/cm^3, respectively. The n-doped Al0.12Ga0.88As/n-Al0.9Ga0.1As material (high refractive index: 60 nm/low refractive index: 70 nm) was inserted between the n-doped Al0.3Ga0.7As and the n-type GaAs substrate in the 20-pair DBR of the LED A structure. For the LED B and LED C structures fabricated using the LPE method, a 120 µm thick 2nd n-Al0.6Ga0.4As layer and a 20 µm thick 1st n-Al0.18Ga0.82As layer, grown sequentially on the GaAs substrate, were employed as the etching stop layer and n-confinement, respectively. Moreover, 1-µm-thick p-Al0.08Ga0.92As and 20-µm-thick 1st n-Al0.18Ga0.82As layers were grown on the n-confinement as the active layer and p-confinement, respectively. Additionally, LEDs with transparent and reflective structures must be fabricated through LPE at an epitaxial growth rate of 1 µm/min.
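As a rough consistency check on the DBR layer thicknesses quoted above, the quarter-wave condition t = λ/(4n) can be evaluated at 850 nm. The refractive indices used in the sketch below (about 3.5 for Al0.12Ga0.88As and about 3.0 for Al0.9Ga0.1As near 850 nm) are assumed literature-typical values, not numbers taken from the paper.

```python
# Quarter-wave thickness t = lambda / (4 * n) for each DBR layer at 850 nm.
wavelength_nm = 850.0

# Assumed refractive indices near 850 nm (not stated in the paper):
n_high = 3.5   # Al0.12Ga0.88As (high-index layer)
n_low = 3.0    # Al0.9Ga0.1As (low-index layer)

for label, n in [("Al0.12Ga0.88As", n_high), ("Al0.9Ga0.1As", n_low)]:
    t = wavelength_nm / (4.0 * n)
    print(f"{label}: quarter-wave thickness ~ {t:.0f} nm")

# Gives roughly 61 nm and 71 nm, consistent with the quoted 60 nm / 70 nm pair.
```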
Before the wafer-bonding process, the absorbing GaAs substrate was selectively removed in the H2O2:NH3 solution until the appearance of the 2nd n-Al0.6Ga0.4As layer, which was attached to a p-Si carrier by a paraffin solid. After removing the n-GaAs substrate, the p-Si carrier was selectively removed by eliminating the paraffin. Thus, the transparent substrate LED (TS LED) structure was obtained (LED B).
An epitaxial wafer (LED B) without a GaAs substrate was wafer-bonded to the p-Si substrate. A 3000 nm thick Ti/Au/In/Ti structure was employed as the eutectic structure, while 500 nm thick Ag was used as the reflector to bond the wafers. A pressing force of 4500 N at 230 °C was applied to conduct the wafer-bonding process for 60 min. Therefore, a reflective transparent substrate LED (reflective TS IR-LED) was finally obtained (LED C). Figure 1 shows the fabrication process for three types of infrared (IR) LEDs: the absorbing substrate 850 nm IR-LED (LED A), the transparent substrate 850 nm IR-LED (LED B), and the reflective transparent 850 nm IR-LED (LED C). The absorbing substrate 850 nm IR-LED (LED A) was grown in situ on an n-GaAs absorbing substrate using the MOCVD system. The 850 nm IR-LEDs with the transparent substrate were grown on an n-GaAs absorbing substrate using the liquid-phase epitaxy system. The TS 850 nm IR-LED was obtained simply by removing the n-GaAs absorbing substrate. The reflective TS 850 nm IR-LED was fabricated by adding a reflective structure (reflector/eutectic/p-Si) to the TS 850 nm IR-LED. It is important to note that the reflective TS 850 nm IR-LED should have a reverse structure.
An epitaxial wafer (LED B) without a GaAs substrate was wafer-bonded to the p-Si substrate.A 3000 nm thick Ti/Au/In/Ti structure was employed as the eutectic structure, while a reflector made of 500 nm thick Ag was used for the reflector to bond the wafers.A pressing force of 4500 N at 230 °C was used to conduct the wafer-bonding process for 60 min.Therefore, a reflective transparent substrate LED (reflective TS IR-LED) was finally obtained (LED C). Figure 1 shows the fabrication process for three types of infrared (IR) LEDs: absorbing substrate 850 nm IR-LED (LED A), transparent substrate 850 nm IR-LED (LED B), and reflective transparent 850 nm IR-LED (LED C).The absorbing substrate 850 nm IR-LED (LED A) was grown in situ on an n-GaAs absorbing substrate, using the MOCVD system.The 850 nm IR-LEDs with the transparent substrate were grown on an n-GaAs absorbing substrate using the liquid-phase epitaxy system.The TS 850 nm IR-LED was obtained simply by removing the n-GaAs absorbing substrate.The reflective TS 850 nm IR-LED was fabricated by adding a reflective structure (reflector/eutectic/p-Si) to the TS 850 nm IR-LED.It is important to note that the reflective TS 850 nm IR-LED should have a reverse structure.Bonded IR-LED wafers were sequentially cleaned with acetone and methanol to remove organic contamination, followed by removing the surface oxidation of the 2nd n-Al0.6Ga0.4Astop window (front) and p-Si substrate (back) in an HF:deionized water (10:1) solution.After cleaning, bonding pads were placed on the front and back of the wafers using a combination of photolithography and selective etching.AuGeNi (1000 nm/50 nm /20 nm) was deposited on the n-type substrate using a thermal evaporator, and AuBe (500 nm) was deposited on the p-type substrate using an electron beam evaporator.Figure 2 shows the schematic of the structure, and provides the compositional information of the reflective TS 850 nm IR-LED chip.Bonded IR-LED wafers were sequentially cleaned with acetone and methanol to remove organic contamination, followed by removing the surface oxidation of the 2nd n-Al 0.6 Ga 0.4 As top window (front) and p-Si substrate (back) in an HF:deionized water (10:1) solution.After cleaning, bonding pads were placed on the front and back of the wafers using a combination of photolithography and selective etching.AuGeNi (1000 nm/ 50 nm/20 nm) was deposited on the n-type substrate using a thermal evaporator, and AuBe (500 nm) was deposited on the p-type substrate using an electron beam evaporator.Figure 2 shows the schematic of the structure, and provides the compositional information of the reflective TS 850 nm IR-LED chip.
Results and Discussion
Previous studies showed that the emission area must be increased to improve the efficiency of the IR-LEDs. In this study, we demonstrate that the combined structure of the transparent and reflective layers can serve as an effective light-emitting factor for IR-LEDs. Conventional IR-LEDs have been fabricated through MOCVD with a low growth rate (1 µm/h). Therefore, we conducted LPE to obtain IR-LEDs with transparent structures. Through LPE, tens-of-micrometer-thick transparent structures could be grown in the IR-LED with a noticeable growth rate (1 µm/min). Figure 3a,b show the epitaxial scanning electron microscopy (SEM) images of the conventional IR-LED and TS LED. The conventional IR-LED in Figure 3a exhibits total epitaxial layers of approximately 6 µm thickness, which include the DBR used as the reflector. Based on existing research, the number of light photons emitted from the active region decreases upon the reduction of the emission area caused by a thin epitaxial layer. However, the conventional IR-LED could not have a thick transparent layer, owing to its inefficient growth rate. In the SEM image in Figure 3b, a significantly thick transparent layer (p-, n-AlGaAs) is observed around the active region (p-Al0.08Ga0.92As). The n-Al0.6Ga0.4As layer used for the 2nd n-confinement was approximately 120 µm thick. The p-AlGaAs and n-AlGaAs layers at the interface of the active region were approximately 20 µm thick. Furthermore, a reflective bonding structure was used to increase the emission area of the developed TS IR-LED. Figure 3c shows the SEM image of the epitaxial layer of the reflective TS IR-LED, fabricated using the wafer-bonding process. The SEM image in Figure 3c shows the reversed TS structure, reflective bonding structure, and the p-Si layer. Figure 3d-f show the schematics of the chip structures fabricated from the epitaxial structures in Figure 3a-c. In the conventional IR-LED chip shown in Figure 3d, the active region between the p- and n-confinements is located on the DBR and absorbing GaAs substrate. Contrarily, the TS IR-LED chip has no DBR or absorbing GaAs substrate, only the transparent p- and n-AlGaAs layers. The reflective TS IR-LED chip in Figure 3f has a reversed TS epitaxial layer on the reflective bonding structure and a p-Si substrate applied by the wafer-bonding process. Based on the optical analysis, the optical-emission efficiency of the reflective TS IR-LED chip will be higher than that of either the AS IR-LED chip or the TS IR-LED chip, as shown in Figure 3c,f.
Figure 4 shows the photon paths for the AS IR-LED chip with the DBR, the TS IR-LED chip with the transparent layer, and the reflective TS IR-LED chip with both transparent and reflective layers. From the photon paths shown for the AS IR-LED chip in Figure 4a, most of the light emitted from the active region escaped from the LED chip through the surface. In addition, some light photons emitted downward from the active region may have escaped to the surface via the DBR. Despite these efforts, the AS IR-LED had a relatively low optical efficiency, owing to either an insignificant sideway emitting angle or an insignificant DBR reflective angle. The TS IR-LED structure in Figure 4b shows that a significant number of light photons could effectively escape from the LED chip by increasing the sideway emission area of the transparent layer. This remarkable improvement was reasonable because the emission area was much larger than that of the surface in the IR-LED. However, even with the use of the TS IR-LED chip, most light photons emitted downward from the active region did not escape from the LED chip, because either Ag paste or bonding metal was used for the assembly. Therefore, the reflective TS IR-LED chip in Figure 3c would exhibit a higher optical efficiency than the TS IR-LED chip. Here, Figure 4c shows that most of the light emitted downward from the active region can effectively escape from the LED chip when both a transparent layer and an ODR are used, owing to the significant reduction in the intensity of light moving downward. A significant reduction was achieved by enhancing the sideway-directed light and upwardly reflected light generated by the transparent and reflective layers, respectively.
The results of the light-emitting paths in Figure 4 verified that the use of transparent and reflective layers was a more attractive method for improving the light extraction efficiency of the IR-LED chip, because the light-emitting path in the LED chip was limited by the intrinsic problems of an insignificant sideway emission area and the low reflectivity of the specific wavelength [15]. The results in Figures 3 and 4 demonstrate that the light extraction efficiency of IR-LEDs can be improved using either the transparent epitaxial layer or the reflective bonding layer.
To obtain more detailed information, the current-voltage (I-V) and light output power-current (L-I) characteristics of the AS IR-LED, TS IR-LED, and reflective TS IR-LED chips were evaluated (Figure 5). Here, an integrating sphere was used to measure the output power-current-voltage (L-I-V) characteristics of the developed LEDs. An integrating sphere is designed to collect light scattered and emitted from a sample in the form of a hollow sphere with a highly reflective inner surface (Model OPI-100 LED Electrical and Optical Test System, Withlight company, Yeoju-si, Republic of Korea).
As shown in Figure 5a, a marginally higher turn-on voltage (0.1 V) of the AS IR-LED chip is induced by the resistance of the undoped active region in the AS IR-LED. The TS IR-LED and reflective TS IR-LED chips exhibited similar turn-on voltage properties because the series resistance of the device was not significantly influenced by the metals (used as a reflector or eutectic structure). However, the current-voltage curve exhibits a different trend with increasing current. When the current was increased, the reflective TS IR-LED chips showed a relatively lower rate of increase than that of the TS IR-LED chips. This may have been owing to the heat-dissipation effect caused by the metal (reflector/eutectic metal) used in the reflective TS IR-LED [16]. The output powers of the developed IR-LED chips exhibit different properties in Figure 5b. At an injection current of 300 mA, the TS IR-LED chip with a transparent layer exhibited a higher output power (169 mW) than the AS IR-LED chip (121 mW). Furthermore, an improved output power (202 mW) was obtained from the reflective TS IR-LED chip with both the transparent and reflective layers. This result confirmed that the output power of the IR-LED chips was strongly dependent on the use of either a transparent or reflective layer, because light photons emitted from the active region can effectively escape from the LED upward or sideways. Moreover, the results of the output power exhibited a trend similar to those of the light photon path illustrated in Figure 4. Therefore, the IR-LED chip with the combined structure would exhibit a considerably higher output power than those with either a transparent or reflective layer.
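As a quick check on the relative improvements implied by the output powers quoted above (121, 169, and 202 mW at 300 mA), the short Python snippet below computes the percentage enhancement of each structure over the others; it simply restates the reported numbers and adds no new measurement data.

```python
# Output powers at 300 mA injection current, as reported in the text (mW).
powers = {"AS IR-LED": 121.0, "TS IR-LED": 169.0, "Reflective TS IR-LED": 202.0}

def enhancement(new, ref):
    """Percentage increase of `new` over `ref`."""
    return 100.0 * (new - ref) / ref

print(f"TS vs AS:            {enhancement(powers['TS IR-LED'], powers['AS IR-LED']):.0f}%")
print(f"Reflective TS vs AS: {enhancement(powers['Reflective TS IR-LED'], powers['AS IR-LED']):.0f}%")
print(f"Reflective TS vs TS: {enhancement(powers['Reflective TS IR-LED'], powers['TS IR-LED']):.0f}%")
# The reflective TS vs AS figure is ~67%, matching the value quoted in the
# following paragraph.
```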
In particular, the IR-LED chip with the combined reflector had a 67% higher output power than the conventional LED A with a DBR. Furthermore, the radial theta (half angle) of the photometric values was investigated for the AS IR-LED, TS IR-LED, and reflective TS IR-LED chips; the results are shown in Figure 6. Here, a light distributor (goniophotometer) was used to measure the radial theta of the photometric values for the developed LEDs. A light distributor is a piece of equipment that measures the intensity of light reflected from the surface of an object at various angles, as well as analyzing the direction and distribution characteristics of light from a light source, lighting fixture, medium, and surface (Model OPI-305 Gonio-Photometer System, Withlight company, Republic of Korea). In the case of the conventional IR-LED chip (LED A) with DBRs, the radial theta was a relatively narrow angle, and the photometric value was low. A relatively high photometric value (68-70) was observed between 0° and ±10°. Above 20°, the photometric value exhibited a remarkably decreasing trend. Therefore, the DBRs were more effective in reflecting photons toward the surface. However, the TS IR-LED chip with a transparent layer had a wider radial theta and a higher photometric value. A higher photometric value (80-81) was observed between 0° and ±40°. Therefore, photons escaped from the IR-LED sideways because of the transparent layer. In the case of the reflective TS IR-LED chip, a higher photometric value (~100) was observed at similar angles (0°-38°). As a result, the different light-current (L-I) curves of the AS IR-LED, TS IR-LED, and reflective TS IR-LED chips are consistent with the measured radial theta and photometric values.
These results demonstrate that reflective and transparent structures are essential when a sharp reduction in chip size decreases the surface and sideway emission areas. The transparent structure exhibited high efficiency in extracting photons emitted sideways from the active area, whereas the reflective structure efficiently reflected photons emitted from the active area in the upward direction. Therefore, the transparent structure is especially useful for recovering the optical efficiency lost to chip shrinking, because it enables emission from the four side faces, and the reflective structure is a further factor in improving optical efficiency. Consequently, sufficiently thick transparent and reflector structures are crucial for improving the optical efficiency of an extremely small chip. The mutual complementarity between the combined structure and the emission wavelength must also be considered when developing LEDs with shorter or longer wavelengths.
Figure 1. Fabrication process for developed AS, TS, and reflective TS IR-LEDs.
Figure 2. (a) Schematic of the structure. (b) Composition of reflective TS 850 nm IR-LED chip.
Figure 3. SEM images of the epitaxial layers (a-c) and schematic (d-f) of the structures of the AS IR-LED chip, TS IR-LED chip, and reflective TS IR-LED chip.
Figure 4. Photon paths of the (a) AS IR-LED chip, (b) TS IR-LED chip, and (c) reflective TS IR-LED chip.
Figure 5. L-I-V curve for AS IR-LED chip, TS IR-LED chip, and reflective TS IR-LED chip: (a) I-V curve and (b) L-I curve. | 2023-08-16T15:05:50.775Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "c5cd1c0fe38d53dacfb4518638bc3e67f63aedcb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-666X/14/8/1586/pdf?version=1691817176",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f2fe191676ab42588c3e3f9c6cb243aa39d0260",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
80772057 | pes2o/s2orc | v3-fos-license | Refined exposure assessment of extracts of rosemary (E 392) from its use as food additive
Abstract The EFSA Panel on Food Additives and Nutrient Sources added to Food (ANS) provides a scientific opinion on the refined exposure assessment of extracts of rosemary (E 392) when used as a food additive. Extracts of rosemary (E 392) was evaluated by the AFC Panel in 2008. Following this EFSA evaluation, extracts of rosemary (E 392) was authorised for use as a food additive in the EU in several food categories with maximum levels. In 2015, the ANS Panel provided a scientific opinion on the safety of the proposed extensions of use for extracts of rosemary (E 392) in fat‐based spreads. In 2016, the Joint FAO/WHO Expert Committee on Food Additives (JECFA) has evaluated this food additive and established a temporary acceptable daily intake (ADI) of 0–0.3 mg/kg body weight (bw) for rosemary extract, expressed as carnosic acid plus carnosol. Based on the data provided by food industry, the Panel was able to refine the exposure estimates of extracts of rosemary (E 392). The highest mean refined exposure estimate (non‐brand loyal scenario) was 0.09 mg/kg bw per day in children (3–9 years) and the highest 95th percentile of exposure was 0.20 mg/kg bw per day in children. Taking uncertainties into account, the Panel concluded that these exposure estimates very likely overestimate the real exposure to extracts of rosemary (E 392) from its use as a food additive according to Annex II. Margins of safety were estimated for children and adults using the refined exposure estimate; these are higher than the ones calculated in 2015. Intake of carnosic acid and carnosol from natural diet (herbs) was estimated. It was maximally 1.66 mg/kg bw per day (p95).
Summary
Following a request from the European Commission, the EFSA Panel on Food Additives and Nutrient Sources added to Food (ANS) performed a refined exposure assessment of extracts of rosemary (E 392) when used as a food additive. The Panel was not provided with a newly submitted dossier and based this assessment on concentration data available following a public call for data.
Extracts of rosemary (E 392) was evaluated in 2008 for its safety, by the EFSA Panel on Food Additives, Flavourings, Processing Aids and Materials in contact with Food (EFSA AFC Panel, 2008), for its use as a food additive. Following this EFSA evaluation, extracts of rosemary (E 392) was authorised for use as a food additive in the European Union (EU) according to Annexes II and III to Regulation (EC) No 1333/2008. The AFC Panel estimated that the toxicological data on the rosemary extracts are insufficient to establish an acceptable daily intake (ADI) but that the existing data, including the absence of effects in the 90-day studies on reproductive organs and negative genotoxicity data, did not give reason for concern. In 2015, the EFSA Panel on Food Additives and Nutrient Sources added to Food (ANS) provided a scientific opinion on the safety of the proposed extensions of use for extracts of rosemary (E 392) in fat-based spreads at 30 mg/kg and 100 mg/kg. In 2016, the Joint FAO/WHO Expert Committee on Food Additives (JECFA) has evaluated this food additive and concluded that there are sufficient data to establish an ADI for rosemary extract prepared according to the specifications established at this meeting. Thus, JECFA established a temporary ADI of 0-0.3 mg/kg body weight (bw) for rosemary extract, expressed as carnosic acid plus carnosol.
In 2017, the European Food Safety Authority (EFSA) launched a public call for data aiming at collecting reported use levels from industry or analytical data on several food additives, including extracts of rosemary (E 392). Use levels were reported by industry. Added to these new data, information on the presence of food additives on the label of foods was retrieved from the Mintel's Global New Products Database (GNPD), an online database monitoring new introductions of packaged goods in the market worldwide. Consumption data were available through the EFSA Comprehensive Database.
Dietary exposure to extracts of rosemary (E 392) from its use as a food additive according to Annex II was calculated for different exposure scenarios based on the provided use levels. If actual practice changes, these refined estimates may no longer be representative and should be updated. The Panel also noted that the exposure to extracts of rosemary (E 392) from its use according to Annex III (Parts 2, 4 and 5A) was not considered in the exposure assessment.
Extracts of rosemary (E 392) is authorised in 33 food categories, of which none was identified as a food category to which consumers may be brand loyal. Therefore, the Panel selected the refined non-brand-loyal scenario as the most relevant exposure scenario for the safety evaluation of this food additive.
Food subcategories from the Mintel's GNPD included in the exposure assessment represented approximately 83% of the food products labelled with extracts of rosemary (E 392) in the database. In all exposure scenarios, it was assumed that 100% of food products contained extracts of rosemary (E 392), whereas information from the Mintel's GNPD showed that the additive was used in only a small percentage of food products.
Based on the data provided by food industry, the Panel was able to refine the exposure estimates of extracts of rosemary (E 392). The highest mean refined exposure estimate (non-brand loyal scenario) was 0.09 mg/kg bw per day in children (3-9 years) and the highest 95th percentile of exposure was 0.20 mg/kg bw per day in children. Taking uncertainties into account, the Panel concluded that these exposure estimates very likely overestimate the real exposure to extracts of rosemary (E 392) from its use as a food additive according to Annex II.
A range of margins of safety (MOS) values was calculated by the Panel by dividing the lowest value of the range of NOAELs of 20-60 mg carnosol plus carnosic acid/kg bw per day identified by the AFC Panel (EFSA AFC Panel, 2008) by the highest p95 exposure level in each population, and the highest value of the range of NOAELs by the lowest p95 exposure level. Using this approach, the range of MOS was 100-2,000 and 200-3,000 for children and adults, respectively. These new MOS estimates are higher than the ones calculated in 2015 (25-240 for children and 60-600 for adults) using a maximum permitted level (MPL) scenario.
Intake of carnosic acid and carnosol from natural diet was estimated at the maximum up to 1.66 mg/kg bw per day (p95) in toddlers.
Introduction
The present opinion deals with the refined exposure estimation of extracts of rosemary (E 392) when used as a food additive.
1.1. Background and Terms of Reference as provided by the European Commission
1.1.1. Background
Regulation (EC) No 1333/2008 1 of the European Parliament and of the Council on food additives requires that food additives are subject to a safety evaluation by the European Food Safety Authority (EFSA) before they are permitted for use in the European Union. In addition, it is foreseen that food additives must be kept under continuous observation and must be re-evaluated by EFSA.
For this purpose, a programme for the re-evaluation of food additives that were already permitted in the European Union before 20 January 2009 has been set up under the Regulation (EU) No 257/2010. 2 This Regulation also foresees that food additives are re-evaluated whenever necessary in the light of changing conditions of use and new scientific information. For efficiency and practical purposes, the re-evaluation should, as far as possible, be conducted by group of food additives according to the main functional class to which they belong.
The order of priorities for the re-evaluation of the currently approved food additives should be set on the basis of the following criteria: the time since the last evaluation of a food additive by the Scientific Committee on Food (SCF) or by EFSA, the availability of new scientific evidence, the extent of use of a food additive in food and the human exposure to the food additive taking also into account the outcome of the Report from the Commission on Dietary Food Additive Intake in the EU 3 of 2001. The report "Food additives in Europe 2000 4 " submitted by the Nordic Council of Ministers to the Commission, provides additional information for the prioritisation of additives for re-evaluation. As colours were among the first additives to be evaluated, these food additives should be re-evaluated with a highest priority.
In 2003, the Commission already requested EFSA to start a systematic re-evaluation of authorised food additives. However, as a result of adoption of Regulation (EU) 257/2010, the 2003 Terms of References are replaced by those below.
Terms of Reference
The Commission asks the European Food Safety Authority to re-evaluate the safety of food additives already permitted in the Union before 2009 and to issue scientific opinions on these additives, taking especially into account the priorities, procedures and deadlines that are enshrined in the Regulation (EU) No 257/2010 of 25 March 2010 setting up a programme for the re-evaluation of approved food additives in accordance with the Regulation (EC) No 1333/2008 of the European Parliament and of the Council on food additives.
Interpretation of Terms of Reference
In 2013, EFSA received a communication from the European Commission suggesting that this evaluation be limited to a refined exposure assessment of extracts of rosemary (E 392) by 2018 instead of a full re-evaluation. 5 Therefore, this opinion provides only a refined exposure assessment.
Information on existing authorisations and evaluations
Extracts of rosemary (E 392) is derived from Rosmarinus officinalis L. and contains several compounds which have been proven to exert antioxidative functions. These compounds belong mainly to the classes of phenolic acids, flavonoids, diterpenoids (carnosol and carnosic acid) and triterpenes.
Extracts of rosemary (E 392) was evaluated in 2008 for its safety, by the EFSA Panel on Food Additives, Flavourings, Processing Aids and Materials in contact with Food (AFC) Panel (EFSA AFC Panel, 2008), for its use as a food additive. Following this EFSA evaluation, extracts of rosemary (E 392) was authorised for use as a food additive in the European Union (EU) in several food categories with maximum levels, in accordance with Annexes II and III to Regulation (EC) No 1333/2008 on food additives. The AFC Panel estimated that the toxicological data on the rosemary extracts are insufficient to establish an acceptable daily intake (ADI), because the toxicity data set did not provide reproductive and developmental toxicity studies or a long-term study. On the other hand, the existing data, including the absence of effects in the 90-day studies on reproductive organs and negative genotoxicity data, did not give reason for concern.
Following a request by the European Commission in 2014, EFSA Panel on Food Additives and Nutrient Sources added to Food (ANS) provided a scientific opinion on the safety of the proposed extensions of use for extracts of rosemary (E 392) in fat-based spreads at 30 mg/kg and 100 mg/kg in 2015 (EFSA ANS Panel, 2015). In this opinion, only the scenarios based on the maximum permitted levels (MPLs) at that time and on the MPLs and proposed new use levels at that time were performed; no refined exposure scenario was done. The Panel concluded that these two additional extensions of use for extracts of rosemary (E 392) would not change the estimated exposure to the food additive, compared with the exposure based on the already approved permitted uses, in any part of the population. The Panel also considered that the conclusions of the EFSA AFC Panel in 2008 on the safety of rosemary extracts (E 392) would remain valid and that there was no need to reconsider the available toxicological assessment to address the Terms of Reference. Thus, the Panel considered at that time that it was unlikely that there was a safety concern with the already permitted uses together with the additional proposed extension of uses compared with the already permitted uses alone. Overall, the Panel noted that the use of wider food consumption surveys led to a lower Margin of Safety (MOS) with the upper end of the range at a level similar to the MOS previously identified and used in the EFSA 2008 opinion for the safety assessment of the uses of rosemary extracts as food additives. The Panel acknowledged that there are also limitations to the available toxicity database; however, the current no observed adverse effect levels (NOAELs) were the highest doses tested in sub-chronic studies.
The EFSA ANS Panel noted that since the publication of this scientific opinion on the extension of use for extracts of rosemary (E 392) in 2015, the Joint FAO/WHO Expert Committee on Food Additives (JECFA) has evaluated this food additive (JECFA, 2016). This Committee concluded that there are sufficient data to establish an ADI for rosemary extract prepared according to the specifications established at this meeting. JECFA established a temporary ADI of 0-0.3 mg/kg bw for rosemary extract, expressed as carnosic acid plus carnosol, on the basis of a NOAEL of 64 mg/kg bw per day, expressed as carnosic acid plus carnosol, the highest dose tested in a short-term toxicity study in rats, with the application of a 200-fold uncertainty factor. The ADI was made temporary pending the submission of studies to elucidate the potential developmental and reproductive toxicity. The ANS Panel considers that the toxicological data should also be reviewed by EFSA when available.
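As a worked check of the arithmetic behind the temporary ADI quoted above (a sketch only, not part of the JECFA evaluation):

```python
# Temporary ADI = NOAEL / uncertainty factor, using the values quoted above.
noael = 64.0              # mg carnosic acid plus carnosol/kg bw per day
uncertainty_factor = 200  # 200-fold uncertainty factor applied by JECFA
print(noael / uncertainty_factor)  # 0.32 -> rounded down to a temporary ADI of 0-0.3 mg/kg bw
```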
2. Data and methodologies
Data
The ANS Panel was not provided with a newly submitted dossier. EFSA launched a public call for data. 11 The Panel based its refined dietary exposure assessment of extracts of rosemary (E 392) on information submitted to EFSA following the public call for data.
The EFSA Comprehensive European Food Consumption Database (Comprehensive Database 12 ) was used to estimate the dietary exposure.
The Mintel's Global New Products Database (GNPD) is an online resource listing food products and compulsory ingredient information that should be included in labelling. This database was used to verify the use of extracts of rosemary (E 392) in food products.
Methodologies
This opinion was formulated following the principles described in the EFSA Guidance on transparency with regard to scientific aspects of risk assessment (EFSA Scientific Committee, 2009) and following the relevant existing guidance documents from the EFSA Scientific Committee.
The ANS Panel assessed the dietary refined exposure to extracts of rosemary (E 392) as a food additive in line with the principles laid down in Regulation (EU) 257/2010 and in the EFSA Statement on the approach followed for the refined exposure assessment as part of the safety assessment of food additives under re-evaluation (EFSA ANS Panel, 2017).
Specifications
The specifications for extracts of rosemary (E 392) as defined in the Commission Regulation (EU) No 231/2012 and by JECFA (2016-tentative) are listed in Table 1.
Definition
Extracts of rosemary contain several components which have been proven to exert antioxidative functions. These components belong mainly to the classes of phenolic acids, flavonoids and diterpenoids. Besides the antioxidant compounds, the extracts can also contain triterpenes and organic solvent extractable material, as specifically defined in the following specification. Rosemary extract is obtained from ground dried leaves of Rosmarinus officinalis L. using food-grade solvents, namely acetone or ethanol. Solvent extraction is followed by filtration, solvent evaporation, drying and sieving to obtain a fine powder. Additional concentration and/or precipitation steps followed by deodorisation, decolourisation and standardisation using diluents and carriers of food grade quality may be included to produce the final product. Rosemary extract is characterised by its content of phenolic diterpenes, carnosic acid and carnosol, the principal antioxidative agents. Other antioxidant components present include triterpenes and triterpenic acids. Rosemary extract is identified by the total content of carnosol and carnosic acid and by its ratio to the reference volatile compounds which are responsible for flavour.
The product of commerce can be standardised to a total carnosic acid and carnosol content of up to 33%.
Residual solvents: Acetone: not more than 50 mg/kg; Ethanol: not more than 500 mg/kg
Arsenic: not more than 3 mg/kg (EU and JECFA specifications)
Lead: not more than 2 mg/kg (EU and JECFA specifications)
1 - Extracts of rosemary produced from dried rosemary leaves by acetone extraction (generic specifications applicable)
Description
Extracts of rosemary are produced from dried rosemary leaves by acetone extraction, filtration, purification and solvent evaporation, followed by drying and sieving to obtain a fine powder or a liquid.
Residual solvents
Acetone: not more than 500 mg/kg
2 - Extracts of rosemary prepared by extraction of dried rosemary leaves by means of supercritical carbon dioxide (not in JECFA specifications)
Description
Extracts of rosemary produced from dried rosemary leaves extracted by means of supercritical carbon dioxide with a small amount of ethanol as entrainer.
Identification: content of reference antioxidative compounds ≥ 13% w/w, expressed as the total of carnosic acid and carnosol.
Antioxidant/volatiles ratio: (total % w/w of carnosic acid and carnosol)/(% w/w of reference key volatiles*) ≥ 15 (*as a percentage of total volatiles in the extract, measured by gas chromatography-mass spectrometry detection, 'GC-MSD').
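For illustration only, the two identification criteria above (total carnosic acid plus carnosol content and the antioxidant/volatiles ratio) can be expressed as a simple check; the batch values used below are hypothetical, and this sketch is not part of the legal specification.

```python
# Illustrative check of the identification criteria quoted above for a hypothetical batch.

def meets_identification(carnosic_acid_pct: float,
                         carnosol_pct: float,
                         key_volatiles_pct: float,
                         min_content_pct: float = 13.0,
                         min_ratio: float = 15.0) -> bool:
    """True if total antioxidant content and the antioxidant/volatiles ratio meet the thresholds."""
    total_antioxidants = carnosic_acid_pct + carnosol_pct   # % w/w
    ratio = total_antioxidants / key_volatiles_pct          # dimensionless
    return total_antioxidants >= min_content_pct and ratio >= min_ratio

# Hypothetical batch: 14.2% carnosic acid, 1.1% carnosol, 0.9% reference key volatiles
print(meets_identification(14.2, 1.1, 0.9))  # True (content 15.3% w/w, ratio ~17)
```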
Authorised uses and use levels
Maximum levels of extracts of rosemary (E 392) have been defined in Annex II to Regulation (EC) No 1333/2008 on food additives, as amended. In this document, these levels are named MPLs.
Currently, extracts of rosemary (E 392) is an authorised food additive in the EU with MPLs ranging from 15 to 400 mg/kg in 33 food categories listed in Table 2. All MPLs are expressed as the sum of carnosol and carnosic acid, and some are expressed on the fat basis of the food. Purity
Residual solvents
Ethanol: not more than 500 mg/kg
4 - Extracts of rosemary decolourised and deodorised, obtained by a two-step extraction using hexane and ethanol (not in JECFA specifications)
Description
Extracts of rosemary which are prepared from a deodorised ethanolic extract of rosemary that has undergone a hexane extraction. The extract may be further purified, for example by treatment with active carbon and/or molecular distillation. It may be suspended in suitable and approved carriers or spray-dried.
Identification: content of reference antioxidative compounds ≥ 5% w/w, expressed as the total of carnosic acid and carnosol.
Antioxidant/volatiles ratio: (total % w/w of carnosic acid and carnosol)/(% w/w of reference key volatiles*) ≥ 15 (*as a percentage of total volatiles in the extract, measured by gas chromatography-mass spectrometry detection, 'GC-MSD').
Residual solvents
Hexane: not more than 25 mg/kg
Ethanol: not more than 500 mg/kg
(a) Only fish and fishery products including molluscs and crustaceans with a fat content higher than 10%
According to Annex III, Part 2 of Regulation (EC) No 1333/2008, extracts of rosemary (E 392) is authorised as a food additive other than carrier in food colour preparations with a maximum level of 1,000 mg/kg in the preparation, and 5 mg/kg in the final product expressed as the sum of carnosic acid and carnosol.
According to Annex III, Part 4, extracts of rosemary (E 392) is also authorised as a food additive in all food flavourings at the maximum level of 1,000 mg/kg (expressed as the sum of carnosol and carnosic acid).
In addition, according to Annex III, Part 5, Section A of Regulation (EC) No 1333/2008, extracts of rosemary (E 392) is also authorised at the level of 1,000 mg/kg in the preparation of b-carotene and lycopene and 5 mg/kg in the final product expressed as the sum of carnosol and carnosic acid.
3.3. Exposure data
3.3.1. Reported use levels or data on analytical levels of extracts of rosemary (E 392)
Most food additives in the EU are authorised at a specific MPL. However, a food additive may be used at a lower level than the MPL. Therefore, information on actual use levels is required for performing a more realistic exposure assessment.
In the framework of Regulation (EC) No 1333/2008 on food additives and of Commission Regulation (EU) No 257/2010 regarding the re-evaluation of approved food additives, EFSA issued a public call 15 for occurrence data (usage level and/or concentration data) on extracts of rosemary (E 392). In response to this public call, updated information on the actual use levels of extracts of rosemary (E 392) in foods was made available to EFSA by industry. No analytical data on the concentration of extracts of rosemary (E 392) in foods were made available by the Member States.
Summarised data on reported use levels in foods provided by industry
Industry provided EFSA with data on use levels (n = 44) of extracts of rosemary (E 392) in foods for 12 out of the 33 food categories in which extracts of rosemary (E 392) is authorised. The Panel noted that data were submitted for one food category which is not authorised to contain the food additive as such: noodles (FC 06.5). For this food category, levels for their seasoning were provided. These levels were not taken into account as seasoning as an ingredient is already covered by the FC 12.2.2.
Updated information on the actual use levels of extracts of rosemary (E 392) in foods was made available to EFSA by FoodDrinkEurope (FDE), the International Chewing Gum Association (ICGA), the Association of the European Self-Medication Industry (AESGP), l'Alliance 7, IMACE -European Margarine Association, European Potato Processors' Association (EUPPA), EU Fish Processors and Traders Association -European Federation of National Organizations of Importers and Exporters of Fish (AIPCE-CEP) and Intersnack.
The Panel noted that two use levels for a niche product were provided: one on other fat (FC 02.2.2) and one for sauce (FC 12.6). Since other use levels were available for sauces, the Panel did not consider the niche level for sauce in the analysis. The level provided for the FC 02.2.2 was used in the refined exposure assessment scenario as no other data were available.
MPLs for extracts of rosemary (E 392) are expressed as the sum of carnosol and carnosic acid. Exposure estimates should then also be expressed in mg of carnosol and carnosic acid/kg bw per day. However, some data providers buy a preparation from suppliers, e.g. seasonings in which rosemary extract is a component. Therefore, only the specifications for the seasoning are known by food industry, but not for the rosemary extract itself, because it is only a subcomponent. Thus, some levels reported to EFSA are expressed as extracts of rosemary in total and not as the sum of carnosol and carnosic acid. However, by doing so, food industry is on the safe side, since as long as the levels of rosemary extract are lower than the MPLs, the amounts of carnosol and carnosic acid are also below these maximum limits. This introduces an uncertainty into the exposure assessment, as it may lead to an overestimation of the intake of carnosol and carnosic acid.
Appendix A provides data on the use levels of extracts of rosemary (E 392) in foods as reported by industry.
Summarised data extracted from the Mintel's Global New Products Database
The Mintel's GNPD is an online database which monitors new introductions of packaged goods in the market worldwide. It contains information on over 2.5 million food and beverage products, of which more than 1,000,000 are or have been available on the European food market. Mintel started covering EU food markets in 1996, and currently 20 out of the 28 EU Member States and Norway are represented in the Mintel GNPD. 16 For the purpose of this Scientific Opinion, the Mintel's GNPD 17 was used for checking the labelling of food and beverage products and food supplements for extracts of rosemary (E 392) within the EU's food market, as the database contains the compulsory ingredient information on the label.
According to the Mintel's GNPD, extracts of rosemary (E 392) was labelled on more than 4,700 products between January 2013 and February 2018. The main Mintel's GNPD food subcategories containing the food additive were 'dry soup', 'pizzas' and 'stocks'. Most food subcategories are covered in the current assessment, and food subcategories from the Mintel GNPD included in the exposure assessment represented approximately 83% of the food products labelled with extracts of rosemary (E 392) in the database. The Mintel categories which could not be included in the present assessment are meat substitutes, spreads (nut spread, spreadable cheese, chocolate spread, etc.), confectionery, meat pastes and pâtés, and rice. Appendix B lists the percentage of the food products labelled with extracts of rosemary (E 392) out of the total number of food products per food subcategory according to the Mintel's GNPD food classification. The percentages ranged from less than 0.1% in many food subcategories to 17.8% in the Mintel's GNPD food subcategory 'dry soup'. The average percentage of foods labelled to contain extracts of rosemary (E 392) was 1%.
Food consumption data used for exposure assessment
EFSA Comprehensive European Food Consumption Database
Since 2010, the EFSA Comprehensive European Food Consumption Database (Comprehensive Database) has been populated with national data on food consumption at a detailed level. Competent authorities in the European countries provide EFSA with data on the level of food consumption by the individual consumer from the most recent national dietary survey in their country (cf. Guidance of EFSA on the 'Use of the EFSA Comprehensive European Food Consumption Database in Exposure Assessment' (EFSA, 2011a). Consumption surveys added in the Comprehensive database in 2015 were also taken into account in this assessment. 18 The food consumption data gathered by EFSA were collected by different methodologies and thus direct country-to-country comparisons should be interpreted with caution. Depending on the food category and the level of detail used for exposure calculations, uncertainties could be introduced owing to possible subjects' underreporting and/or misreporting of the consumption amounts. Nevertheless, the EFSA Comprehensive Database includes the currently best available food consumption data across Europe.
Food consumption data from the following population groups were used for the exposure assessment: infants, toddlers, children, adolescents, adults and the elderly. For the present assessment, food consumption data were available from 33 different dietary surveys carried out in 19 European countries (Table 3).
Consumption records were codified according to the FoodEx classification system (EFSA, 2011b). Nomenclature from the FoodEx classification system has been linked to the food categorisation system (FCS) as presented in Annex II of Regulation (EC) No 1333/2008, part D, to perform exposure estimates. In practice, the FoodEx food codes were matched to the FCS food categories.
Food categories considered for the exposure assessment of extracts of rosemary (E 392)
The food categories in which the use of extracts of rosemary (E 392) is authorised were selected from the nomenclature of the EFSA Comprehensive Database (FoodEx classification system), at the most detailed level possible (up to FoodEx Level 4) (EFSA, 2011b). Some food categories (or their restrictions/exceptions) for which MPLs were available and/or use levels were submitted are not referenced in the EFSA Comprehensive Database and could therefore not be taken into account in the present estimate. This was the case for four food categories (Appendix C) and may have resulted in an underestimation of the exposure. The food categories which were not taken into account are described below (in ascending order of the FCS codes):
• 01.5 Dehydrated milk as defined by Council Directive 2001/114/EC 19, only milk powder for vending machines. The FoodEx codes do not allow restricting only to foods sold in vending machines. The whole food category was not taken into account because the restriction represents only a very small part of the food category;
• 01.5 Dehydrated milk as defined by Council Directive 2001/114/EC, 19 only dried milk for manufacturing of ice-cream. The restriction indicates that the dried milk referred to in this food category is not sold directly to the consumer. In order to take this food category into account, the edible ices (FC 03) made of dried milk should be taken into account. However, no information on the consumption of this type of ice-cream is available in the EFSA Comprehensive database. To avoid overestimation of the exposure, the whole FC 03 was not considered in the exposure assessment;
• 02.3 Vegetable oil pan spray;
• 04.2.4.1 Fruit and vegetable preparations excluding compote, only seaweed-based fish roe analogues.
No foods correspond to the two above food categories in the EFSA Comprehensive database; therefore, they cannot be taken into account.
For the following food categories, the restrictions/exceptions which apply to the use of extracts of rosemary (E 392) could not be taken into account, and therefore the whole food category was considered in the exposure assessment. This also applies to four food categories (Appendix D) and may have resulted in an overestimation of the exposure:
• 02.1 Fats and oils essentially free from water (excluding anhydrous milkfat), only vegetable oils (excluding virgin oils and olive oils) and fat where the content of polyunsaturated fatty acids is higher than 15% w/w of the total fatty acids, for use in non-heat-treated food products: the polyunsaturated fatty acid content could not be checked, nor could their use in non-heat-treated food products. Thus, vegetable oils and fat, with the exception of olive oil, were taken into account.
• 02.1 Fats and oils essentially free from water (excluding anhydrous milkfat),
– only fish oil and algal oil; lard, beef, poultry, sheep and porcine fat: no algal oil is available in the EFSA Comprehensive database; all fish oil and lard, beef, poultry, sheep and porcine fat available in the EFSA Comprehensive database were taken into account.
– fat and oils for the professional manufacture of heat-treated foods: this information is not available in the EFSA Comprehensive database and this restriction was not taken into account.
– frying oils and frying fat, excluding olive oil and pomace oil: oils and fat that can be used for frying were all taken into account.
• 02.2 Other fat and oil emulsions, including spreads as defined by Council Regulation (EC) No 1234/2007 20 and liquid emulsions, only spreadable fats with a fat content less than 80%: low-fat butter and margarine were taken into account.
• 06.4.5 Fillings of stuffed pasta (ravioli and similar), only in fillings of stuffed dry pasta: all filled pasta were taken into account.
Furthermore, for the FCs 08.3.1 Non-heat-treated meat products and 08.3.2 Heat-treated meat products, it is not possible to distinguish heat-treated from non-heat-treated meat products. Meat products were separated into dried sausages, dehydrated meat and other meat. For each of these subcategories, for the regulatory scenario, the MPLs in Table 2 were applied depending on the fat content of the foods. It has to be mentioned that use levels were reported for cooked smoked sausage only; thus, only these kinds of meat products were taken into account in the refined exposure assessment. Thus, dried sausages, dehydrated meat and other meat products were included in the regulatory scenario but not in the refined exposure scenarios.
For the FCs 17.1/17.2/17.3 Food supplements, in solid, liquid, syrup-type or chewable form, the form consumed cannot be differentiated in the EFSA Comprehensive database and therefore the same use level was applied to the whole FC 17.
For the refined scenario, 13 additional food categories were not taken into account because no use levels were provided for these food categories to EFSA (Appendix A). For the remaining food categories, the refinements considering the restrictions/exceptions as set in Annex II to Regulation No 1333/2008 were applied.
Overall, 26 food categories were included in the regulatory maximum level exposure scenario, and 12, in the refined scenarios of the exposure assessment to extracts of rosemary (E 392) (Appendix C).
3.4. Exposure estimates
3.4.1. Exposure to extracts of rosemary (E 392), expressed as the sum of carnosol and carnosic acid, from its use as a food additive
The Panel estimated the chronic dietary exposure to extracts of rosemary (E 392) for the following population groups: infants, toddlers, children, adolescents, adults and the elderly. Dietary exposure to extracts of rosemary (E 392) was calculated by multiplying concentrations of extracts of rosemary (E 392) per food category (Appendix C) with their respective consumption amount per kilogram body weight for each individual in the Comprehensive Database. The exposure per food category was subsequently added to derive an individual total exposure per day. These exposure estimates were averaged over the number of survey days, resulting in an individual average exposure per day for the survey period. Dietary surveys with only one day per subject were excluded as they are considered as not adequate to assess repeated exposure.
This was carried out for all individuals per survey and per population group, resulting in distributions of individual exposure per survey and population group (Table 3). On the basis of these distributions, the mean and 95th percentile of exposure were calculated per survey and per population group. The 95th percentile of exposure was only calculated for those population groups with a sufficiently large sample size (EFSA, 2011a). Therefore, in the present assessment, the 95th percentile of exposure for infants from Italy and for toddlers from Belgium, Italy and Spain were not estimated.
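The calculation described above can be summarised in the short sketch below. It is illustrative only: the food categories, use levels and consumption records are toy values, and the actual EFSA computation operates on the full Comprehensive Database.

```python
import numpy as np

# Sketch of the chronic exposure calculation described above: per individual,
# multiply consumption (g food per kg bw per day) by the additive concentration
# per food category (mg/kg food), sum over categories for each day, average over
# survey days, then derive the population mean and 95th percentile.
# All values below are toy data.

conc_mg_per_kg_food = {"FC 07.2": 150.0, "FC 12.5": 50.0}  # illustrative use levels

# consumption[subject] = one list of (food category, g consumed per kg bw) per survey day
consumption = {
    "subject_1": [[("FC 07.2", 1.2), ("FC 12.5", 3.0)], [("FC 07.2", 0.8)]],
    "subject_2": [[("FC 12.5", 5.0)], []],
}

individual_exposure = []
for subject, days in consumption.items():
    daily = [sum(grams / 1000.0 * conc_mg_per_kg_food.get(fc, 0.0)  # g -> kg of food
                 for fc, grams in day)
             for day in days]
    individual_exposure.append(np.mean(daily))  # mg/kg bw per day over the survey period

print("mean exposure:", round(np.mean(individual_exposure), 3))
print("95th percentile:", round(np.percentile(individual_exposure, 95), 3))
```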
Exposure assessment to extracts of rosemary (E 392) was carried out by the ANS Panel based on two different sets of concentration data: (1) MPLs as set down in the EU legislation (defined as the regulatory maximum level exposure assessment scenario); and (2) reported use levels (defined as the refined exposure assessment scenario). These two scenarios are discussed in detail below.
These scenarios do not consider the consumption of food supplements, which are covered in an additional scenario detailed below (food supplements consumers only scenario).
A possible additional exposure from the use of extracts of rosemary (E 392) as a food additive in food additives (Part 2), flavourings (Part 4) and nutrients (Part 5, Section A) in accordance with Annex III to Regulation (EC) No 1333/2008 was not considered in any of the exposure assessment scenarios, as no concentration data were available reflecting this use of the food additive.
Regulatory maximum level exposure assessment scenario
The regulatory maximum level exposure assessment scenario is based on the MPLs as set in Annex II to Regulation (EC) No 1333/2008. For extracts of rosemary (E 392), the MPLs as listed in Table 2 were used to assess the exposure according to this scenario.
MPLs expressed for extracts of rosemary (E 392) on fat basis were converted to whole weight based on fat content information obtained from the EFSA Comprehensive Database.
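The fat-basis conversion mentioned above amounts to scaling the maximum level by the food's fat fraction; the snippet below is a minimal illustration with assumed numbers, not the Panel's implementation.

```python
# Converting a maximum level expressed on fat basis to a whole-weight level:
# multiply by the food's fat fraction (fat content taken from the consumption database).

def fat_basis_to_whole_weight(level_mg_per_kg_fat: float, fat_fraction: float) -> float:
    """Return the level in mg per kg of whole food."""
    return level_mg_per_kg_fat * fat_fraction

# e.g. a level of 150 mg/kg expressed on fat basis, applied to a food containing 20% fat
print(fat_basis_to_whole_weight(150.0, 0.20))  # 30.0 mg/kg whole weight
```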
The Panel considers the exposure estimates derived following this scenario as the most conservative, since it is assumed that the population will be exposed to the food additive present in food at the MPLs over a longer period of time.
Refined exposure assessment scenario
The refined exposure assessment scenario is based on use levels reported by food industry and analytical results reported by Member States. For extracts of rosemary (E 392), the refined exposure assessment scenario was only based on use levels reported by food industry. This exposure scenario can consider only food categories for which these data were available to the Panel.
Reported use levels expressed for extracts of rosemary (E 392) on fat basis were converted to whole weight based on fat content information per food obtained from the EFSA Comprehensive Database.
Based on the available data set, the Panel calculated two refined exposure estimates based on two model populations:
• The brand-loyal consumer scenario: It was assumed that a consumer is exposed long-term to extracts of rosemary (E 392) present at the maximum reported use level for one food category. This exposure estimate is calculated as follows:
– combining food consumption with the maximum of the reported use levels for the main contributing food category at the individual level;
– using the mean of the typical reported use levels for the remaining food categories.
• The non-brand-loyal consumer scenario: It was assumed that a consumer is exposed long-term to extracts of rosemary (E 392) present at the mean reported use levels in food. This exposure estimate is calculated using the mean of the typical reported use levels for all food categories.
Appendix C summarises the concentration levels of extracts of rosemary (E 392) used in the refined exposure scenarios.
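A minimal sketch of the two refined scenarios described above is given below; the food categories, use levels and the single consumption record are hypothetical, and the real calculation is performed per individual across the Comprehensive Database.

```python
# Sketch of the two refined exposure models described above, for one individual.
# Use levels (mean typical, maximum reported) and the consumption record are illustrative.

use_levels = {                  # mg additive per kg food
    "FC 07.2": (100.0, 200.0),  # (mean typical, maximum reported)
    "FC 12.5": (40.0, 60.0),
}
individual = {"FC 07.2": 2.0, "FC 12.5": 4.0}  # g food per kg bw per day

def exposure(consumption_g_per_kg_bw, levels):
    """Total exposure in mg/kg bw per day for one set of concentration levels."""
    return sum(g / 1000.0 * levels[fc] for fc, g in consumption_g_per_kg_bw.items())

mean_levels = {fc: lv[0] for fc, lv in use_levels.items()}
non_brand_loyal = exposure(individual, mean_levels)

# Brand-loyal: maximum reported level for the individual's main contributing category,
# mean typical levels for the remaining categories.
contributions = {fc: g / 1000.0 * mean_levels[fc] for fc, g in individual.items()}
main_fc = max(contributions, key=contributions.get)
brand_loyal = exposure(individual, {**mean_levels, main_fc: use_levels[main_fc][1]})

print(f"non-brand-loyal: {non_brand_loyal:.3f} mg/kg bw per day")  # 0.360
print(f"brand-loyal:     {brand_loyal:.3f} mg/kg bw per day")      # 0.560
```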
'Food supplement consumers only' scenario
Extracts of rosemary (E 392) is authorised in the FC 17 Food supplements as defined in Directive 2002/46/EC, excluding food supplements for infants and young children. As exposure via food supplements may deviate largely from that via food, and the number of food supplement consumers may be low depending on populations and surveys, an extra scenario was calculated in order to reflect additional exposure to food additives from the intake of food supplements. This additional exposure was estimated assuming that consumers of food supplements were exposed to extracts of rosemary (E 392) present at the maximum reported use levels in these supplements on a daily basis. For the remaining food categories (12/33 categories), the mean of the typical reported use levels of extracts of rosemary (E 392) was used.
As FC 17 does not consider food supplements for infants and toddlers as defined in the legislation, exposure to extracts of rosemary (E 392) via food supplements was not estimated for these two population groups.
Dietary exposure to extracts of rosemary (E 392), expressed as the sum of carnosol and carnosic acid
Table 4 summarises the estimated exposure to extracts of rosemary (E 392) from its use as a food additive in six population groups (Table 3). In the regulatory maximum level exposure assessment scenario, the mean exposure to extracts of rosemary (E 392) from its use as a food additive ranged from 0.03 mg/kg bw per day in infants to 0.44 mg/kg bw per day in toddlers. The 95th percentile of exposure to extracts of rosemary (E 392) ranged from 0.08 mg/kg bw per day in infants to 0.85 mg/kg bw per day in children.
In the brand-loyal refined exposure scenario, the mean exposure to extracts of rosemary (E 392) from its use as a food additive ranged from below 0.01 mg/kg bw per day in infants to 0.16 mg/kg bw per day in children. The high exposure (p95) to extracts of rosemary (E 392) ranged from below 0.01 mg/kg bw per day in infants to 0.35 mg/kg bw per day in children. In the non-brand-loyal refined exposure scenario, the mean exposure to extracts of rosemary (E 392) from its use as a food additive ranged from below 0.01 mg/kg bw per day in almost all population groups to 0.09 mg/kg bw per day in children. The 95th percentile of exposure ranged from below 0.01 mg/kg bw per day in infants to 0.20 mg/kg bw per day in children.
In the food supplements consumers only scenario, the mean exposure to extracts of rosemary (E 392) from its use as a food additive ranged from 0.01 mg/kg bw per day for adolescents and adults to 0.09 mg/kg bw per day for children. The 95th percentile of exposure to extracts of rosemary (E 392) ranged from 0.02 mg/kg bw per day for adolescents to 0.13 mg/kg bw per day for children. These exposure levels did not exceed the exposure levels calculated for the refined exposure scenario.
Main food categories contributing to exposure to extracts of rosemary (E 392) using the regulatory maximum level exposure assessment scenario
In the regulatory maximum level exposure assessment scenario, the main contributing food categories to the total mean exposure estimates for infants were fats and oils essentially free from water (FC 02.1), soups and broths (FC 12.5), meat products (FC 08.3) and fine bakery wares (FC 07.2). For all other population groups (toddlers, children, adolescents, adults and the elderly), the main contributing food categories were meat products (FC 08.3), fine bakery wares (FC 07.2) and soups and broths (FC 12.5).
Main food categories contributing to exposure to extracts of rosemary (E 392) using the refined exposure assessment scenario
The main contributing food categories in the refined exposure scenarios, both brand-loyal and non-brand-loyal, were fine bakery wares (FC 07.2) and soups and broths (FC 12.5) for infants, and fine bakery wares (FC 07.2) only for the other population groups.
Appendix E summarises the contributing food categories for the regulatory maximum level and the refined exposure assessment scenario.
Uncertainty analysis
Uncertainties in the exposure assessment of extracts of rosemary (E 392) have been discussed above. In accordance with the guidance provided in the EFSA opinion related to uncertainties in dietary exposure assessment (EFSA, 2007), the following sources of uncertainties have been considered and summarised in Table 5. Extracts of rosemary (E 392) is authorised in 33 food categories. Use levels of the additive were made available by industry for 12 food categories.
The Panel calculated that out of the foods authorised to contain extracts of rosemary (E 392) according to Annex II to Regulation (EC) No 1333/2008, 3% (for infants) to 86% (for children) of the amount of food consumed (by weight) per population group was taken into account in the current assessment.
The Panel also noted that information from the Mintel's GNPD (Appendix B) indicated that many food sub-categories, as categorised according to the Mintel's GNPD nomenclature, were labelled with the food additive. The main ones were included in the current exposure assessment: soups, fine bakery wares, sauces and snacks. Food subcategories from the Mintel GNPD included in the exposure assessment represented approximately 83% of the food products labelled with extracts of rosemary (E 392) in the database.
Furthermore, the percentage of foods per subcategory labelled to contain extracts of rosemary (E 392) was maximally up to 18% (Appendix B); in the assessment, it was assumed that the additive was present in 100% of the foods belonging to the food categories included in the different exposure scenarios.
As mentioned above, the refined exposure assessment scenario is based on use levels reported by food industry. This exposure scenario can consider only food categories for which these data were available to the Panel. Regarding extracts of rosemary (E 392), the main contributing food categories in the regulatory maximum exposure assessment scenario were fine bakery wares, meat products and soups and broth. These three food categories were also included in the refined exposure scenario as reported use levels were made available to EFSA.
Given these observations, the Panel considered overall that the uncertainties identified would, in general, result in an overestimation of the exposure to extracts of rosemary (E 392) from its use as a food additive according to Annex II, in European countries considered in the EFSA European database for all exposure scenarios.
Sources of uncertainty summarised in Table 5 ('+' indicating a potential overestimation and '–' a potential underestimation of exposure):
• Uncertainty in possible national differences in use levels of food categories (+/–)
• Concentration data: use levels considered applicable to all foods within the entire food category, whereas on average 1% of the foods, belonging to food categories with foods labelled with extracts of rosemary (E 392), was labelled with the additive (+)
• Food categories selected for the exposure assessment: exclusion of food categories due to missing FoodEx linkage (n = 4/33 food categories) (–)
• Food categories selected for the exposure assessment: inclusion of food categories without considering the restriction/exception (n = 4 for the MPL scenario/n = 1 for the refined scenarios out of 33 food categories) (+)
• Food categories selected in the exposure assessment: no concentration data for certain food categories (n = 14/33 food categories for the refined scenarios)
The Panel noted that food categories which may contain extracts of rosemary (E 392) due to carryover (Annex III, Parts 2, 4, 5A) were not considered in the current exposure assessment.
Exposure via the regular diet
Carnosic acid and carnosol are substances naturally present in foods. Their main sources are rosemary (Rosmarinus officinalis) and sage (Salvia officinalis and other Salvia species).
Natural content of carnosic acid and carnosol was retrieved from publications and databases to estimate their intake from natural sources. Levels of carnosic acid and carnosol were available for dried rosemary leaves (112 mg/kg (Loussouarn et al., 2017)), fresh rosemary leaves (12.18 mg/kg (Luis and Johnson, 2005)) and dried sage (5.3 mg/kg (FooDB version 1.0 database 21 )). In other herbs and spices and tea, carnosic acid and carnosol were detected but not quantified.
The foods taken into account to estimate the natural intake of both compounds are rosemary (dried and fresh), sage (dried, infusion leaves), as well as undefined aromatic herbs (dried and fresh). The level of carnosic acid and carnosol applied to the latter was that of dried sage (5.3 mg/kg).
Dietary intake of carnosic acid and carnosol
At the end of April 2018, EFSA published a new release of the Comprehensive European Food Consumption Database. 22 This database now includes new surveys. This updated database was used to estimate the intake of carnosic acid and carnosol from the natural diet. Table 6 summarises the estimated intake of carnosic acid and carnosol in six population groups. Detailed results per population group and survey are presented in Appendix F.
The mean intake of carnosic acid and carnosol from natural diet was up to 0.34 mg/kg bw per day in toddlers, while the high intake (p95) reached 1.66 mg/kg bw per day also in toddlers.
Exposure from both dietary sources (as a food additive and from natural diet)
The range of exposure from both the food additive and the natural diet is indicated in Table 7. The percentage of carnosol and carnosic acid coming from the food additive is at most 35%, for the population of adolescents.
Exposure via other sources
Extracts of rosemary (E 392) is also permitted as an antimicrobial, refreshing and tonic agent in cosmetic products. According to Regulation (EC) No 1223/2009 23 on cosmetic products, there is no limit for this use.
Rosemary extracts have been registered under the REACH Regulation and are used in the following products: washing & cleaning products, air care products, biocides (e.g. disinfectants, pest control products), polishes and waxes, perfumes and fragrances, and cosmetics and personal care products. Other release to the environment of this substance is likely to occur from indoor use (e.g. machine wash liquids/detergents, automotive care products, paints and coating or adhesives, fragrances and air fresheners) and outdoor use as a processing aid.
Data to calculate the exposure via all these sources were not available to the Panel, and therefore the exposure resulting from these other sources could not be taken into account in this opinion.
Discussion
To assess the dietary exposure to extracts of rosemary (E 392) from its use as a food additive, the exposure was calculated based on (1) MPLs set out in the EU legislation (defined as the regulatory maximum level exposure assessment scenario) and (2) the reported use levels (defined as the refined exposure assessment scenario).
Extracts of rosemary (E 392) is authorised in 33 food categories, of which none was identified as a food category to which consumers may be brand loyal. Therefore, the Panel selected the refined non-brand-loyal scenario as the most relevant exposure scenario for the safety evaluation of this food additive.
In total, 12 out of 33 food categories were taken into account in the refined exposure assessment scenarios.
The exposure estimates in the non-brand-loyal exposure assessment scenario was maximally 0.20 mg/kg bw per day (95th percentile for children) ( Table 4).
The Panel noted that the main food category contributing to exposure in the refined scenarios was fine bakery wares which is a highly consumed food category and for which the second highest use level (after the one for food supplements) was reported. More specific data on the foods belonging to this food category that contain the additive will result in a more refined exposure estimate.
The assessments were hampered by several uncertainties, and overall it was estimated that the exposure was overestimated due to the reported use levels used and assumptions made in the exposure assessment. For an elaborate discussion of the uncertainties, see Section 3.4.1. The Panel also noted that the refined exposure estimates are based on information provided on the reported level of use of extracts of rosemary (E 392). If actual practice changes, these refined estimates may no longer be representative and should be updated.
The Panel noted that more food categories were labelled with extract of rosemary (E 392) than for which use levels were reported by industry. However, the main ones were included in the current exposure assessment: soups, fine bakery wares, sauces and snacks. Food subcategories from the Mintel GNPD included in the exposure assessment represented approximately 83% of the food products labelled with extracts of rosemary (E 392) in the database.
Intake of carnosic acid and carnosol from natural diet was at the maximum up to 1.66 mg/kg bw per day (p95) in toddlers. This would represent approximately nine times the intake from the food additive for this population.
The Panel noted that the exposure to extracts of rosemary (E 392) from its use according to Annex III (Parts 2, 4 and 5A) was not considered in the exposure assessment, nor was the use of rosemary as an ingredient in composite foods. This may have led to an underestimation of overall exposure. Exposure to the food additive via other sources was also not considered.
The Panel noted that the collection of concentration data of extracts of rosemary (E 392) could support a more refined assessment on dietary exposure.
The Panel calculated a range of MOS of 100-2,000 for children. This was calculated by dividing the lowest value of the range of NOAELs of 20-60 mg carnosol plus carnosic acid/kg bw per day identified by the AFC Panel (EFSA AFC Panel, 2008) by the highest p95 exposure level (0.2 mg/kg bw per day) in this population and the highest value of the range of NOAELs by the lowest p95 exposure level (0.03 mg/kg bw per day). Using the same approach for adults, the range of MOS was 200-3,000.
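The MOS arithmetic described above can be reproduced directly from the quoted values (a worked check only, not part of the Panel's assessment):

```python
# MOS = NOAEL / p95 exposure, using the NOAEL range and exposure values quoted above.
noael_low, noael_high = 20.0, 60.0  # mg carnosol plus carnosic acid/kg bw per day
p95_high, p95_low = 0.2, 0.03       # mg/kg bw per day (children, refined scenario)

print(noael_low / p95_high)   # 100  -> lower end of the MOS range
print(noael_high / p95_low)   # 2000 -> upper end of the MOS range
```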
The Panel noted that the current refined exposure estimates were based on the EFSA Comprehensive consumption database as in the 2015 exposure estimates (EFSA ANS Panel 2015). The margins of safety for children and adults (100-2,000 and 200-3,000 respectively) using the refined exposure estimate are higher than the ones calculated in 2015 (25-240 for children and 60-600 for adults) using an MPL scenario.
Conclusions
Based on the data provided by food industry, the Panel was able to refine the exposure estimates of extracts of rosemary (E 392). The highest mean refined exposure estimate (non-brand loyal scenario) was 0.09 mg/kg bw per day in children (3-9 years) and the highest 95th percentile of exposure was 0.20 mg/kg bw per day in children. Taking uncertainties into account, the Panel concluded that these exposure estimates very likely overestimate the real exposure to extracts of rosemary (E 392) from its use as a food additive according to Annex II. | 2019-03-18T14:02:32.576Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "cdd811f24ff2aa1ef0de4c86ddd36ce62db6690c",
"oa_license": "CCBYND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.2903/j.efsa.2018.5373",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea601d4fb6b26b094c66561920fbeb479ffaef61",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250510331 | pes2o/s2orc | v3-fos-license | Validation of Multiple Soil Moisture Products over an Intensive Agricultural Region: Overall Accuracy and Diverse Responses to Precipitation and Irrigation Events
: Remote sensing and land surface models promote the understanding of soil moisture dynamics by means of multiple products. These products differ in data sources, algorithms, model structures and forcing datasets, complicating the selection of optimal products, especially in regions with complex land covers. This study compared different products, algorithms and flagging strategies based on in situ observations in Anhui province, China, an intensive agricultural region with diverse landscapes. In general, models outperform remote sensing in terms of valid data coverage, metrics against observations or based on triple collocation analysis, and responsiveness to precipitation. Remote sensing performs poorly in hilly and densely vegetated areas and areas with developed water systems, where the low data volume and poor performance of satellite products (e.g., Soil Moisture Active Passive, SMAP) might constrain the accuracy of data assimilation (e.g., SMAP L4) and downstream products (e.g., Cyclone Global Navigation Satellite System, CYGNSS). Remote sensing has the potential to detect irrigation signals depending on algorithms and products. The single-channel algorithm (SCA) shows a better ability to detect irrigation signals than the Land Parameter Retrieval Model (LPRM). SMAP SCA-H and SCA-V products are the most sensitive to irrigation, whereas the LPRM-based Advanced Microwave Scanning Radiometer 2 (AMSR2) and European Space Agency (ESA) Climate Change Initiative (CCI) passive products cannot reflect irrigation signals. The results offer insight into optimal product selection and algorithm improvement.
Introduction
Soil moisture (SM) is widely recognized as a key parameter in the hydrological cycle and in energy balance [1,2]. Despite the rapid development of in situ observation techniques [3,4], remote sensing, land surface model (LSM) and their combination (i.e., data assimilation) provide globally continuous SM products in space and time. Remote sensing algorithms and LSMs are developed, validated and improved in densely gauged areas, and are extended to data sparse areas. Radiometer-based remote sensing of SM is based on solving microwave transfer equations, supported by ancillary datasets including soil surface temperature, roughness and if necessary, vegetation optical depth (VOD) [5,6]. Radar-based SM products and their merging with radiometer-based products can better represent SM dynamics in densely vegetated areas [7,8]. LSMs are driven by meteorological
Study Area and In Situ Observations
Anhui Province, China (29°41′-34°38′N, 114°54′-119°37′E) is located in a humid to semi-humid transitional zone, known as a big agricultural province. The northern part of Anhui Province belongs to a semi-humid monsoon climate, and the southern part belongs to a subtropical humid monsoon climate. The annual mean temperature ranges from 14-16 °C, and the annual precipitation ranges from 800-1600 mm across the whole province [34]. Cropland is the main land-use land cover (LULC) type in the northern and central flat areas (Figure 1), where irrigation systems are essential to maintain agricultural production [35]. The northern part has no large lakes or rivers, whereas the central part is characterized by highly developed water systems, including the Yangtze River, the Huai River and the Chaohu Lake. The southern and southwestern areas are covered by hilly terrains with dense forests (Figure 1). A total of 20 meteorological stations (Table 1) recorded hourly SM data at a 10-cm depth beneath the soil surface in 2017-2020. The data were collected from the Anhui Meteorological Service Center and were checked for breaks and non-responsive values, following the method in [36]. The temporal consistencies of SM observations were checked by reference to multiple SM products (described below) to find any changes in sensor sensitivities (dynamic ranges) or obvious inconsistencies in SM trends. Finally, the mean SM values and SM dynamics (1σ) are shown in Table 1. The 0.05° × 0.05° monthly Terra Moderate-resolution Imaging Spectroradiometer (MODIS) normalized difference vegetation index (NDVI) data in 2017-2020 were extracted and averaged to show the mean vegetation conditions at each station.
Soil Moisture Products and Data Preprocessing
Multiple radiometer-based products were compared in this study. The Advanced Microwave Scanning Radiometer 2 (AMSR2) products are based on the Land Parameter Retrieval Model (LPRM), JAXA algorithm, Normalized Polarization Difference (NPD) algorithm and Single Channel Algorithm (SCA). LPRM simultaneously retrieves surface temperature, SM and VOD, producing three datasets at 10.7 GHz (X band), 6.9 GHz (C1 band) and 7.3 GHz (C2 band) [38]. JAXA retrieves SM and VOD based on a lookup table approach [39]. NPD uses the polarization difference and a combined vegetation/roughness factor, and SCA uses Advanced Very High Resolution Radiometer (AVHRR) NDVI climatology for vegetation correction [40]. Unlike LPRM, other algorithms only generate X-band SM products to mitigate radio frequency interference (RFI) effects. These products differ substantially from each other [41], which motivates new retrieval and merging algorithms [42,43]. For all AMSR2 products, SM values within 0-0.6 cm 3 ·cm −3 were kept for validation.
Soil Moisture and Ocean Salinity (SMOS) products include SMOS-L3 V300 [44] and SMOS-IC V106 [45], derived from multi-angular TB observations by iterating the L-band Microwave Emission of the Biosphere forward model. Soil Moisture Active Passive (SMAP) products are generated based on three algorithms: SCA-H (H-pol), SCA-V (V-pol) and the dual channel algorithm (DCA) [19]. The recent SMAP Version 8 SM products were validated in this study, with DCA as the baseline algorithm. Two versions of flagging strategies were applied for SMOS and SMAP. The rigorous one was the same as that in [46], which is commonly used in validation studies. Specifically, for SMOS-L3, data with a surface temperature < 273 K, quality index (Soil_Moisture_Dqx) > 0.06 or RFI probability (RFI_Prob) > 0.1 were excluded. SMOS-IC SM data with a surface temperature > 273 K and a quality flag = 0 ("data OK") were retained. For SMAP products, data with a retrieval quality index of 0 or 8 were retained. The relaxed one is the same as that for AMSR2, retaining all SM values within 0-0.6 cm³·cm⁻³. Data from both the a.m. and p.m. orbits were considered for radiometer-based products (Table 2).
Active-based products were also validated in this study. The National Aeronautics and Space Administration (NASA) Cyclone Global Navigation Satellite System (CYGNSS) product is generated by reference to the SMAP SCA-V product, having the advantages of wide spatial and temporal coverage [47]. A 25-km MetOp-B Advanced Scatterometer (ASCAT) product was generated based on a change detection method [48]. SM values were retained only if the wetland flag < 15%, topography flag < 20%, frozen soil probability < 10%, snow cover probability < 10% and SM retrieval error < 10% [26]. The European Space Agency (ESA) Climate Change Initiative (CCI) V06.1 Active, Passive and Combined products were used in this study. The active and passive products are generated by fusing multiple satellite retrievals from scatterometers and radiometers, respectively, and the active and passive combined product is further rescaled to the Global Land Data Assimilation System (GLDAS) Noah SM climatology [49,50]. Data with snow cover, a surface temperature < 273 K, dense vegetation or failed retrievals were discarded. For ASCAT and ESA CCI Active, the degree of saturation was transformed into volumetric water content using the ESA CCI auxiliary porosity data. The nearest CCI porosity data were used to calculate the ASCAT SM.

Five modeling-based products were validated in this study. Forced by a number of analysis- and observation-based products [51], the GLDAS Noah land surface model can provide SM products in the 0-10 cm soil layer, which currently serves as the reference for rescaling ESA CCI SM. The Modern-Era Retrospective Analysis for Research and Applications Version 2 (MERRA2) is the latest atmospheric reanalysis of the modern satellite era produced by NASA's Global Modeling and Assimilation Office (GMAO) [52], providing coarse-scale SM products in the 0-5 cm soil layer. ERA5 benefits from a decade of developments in model physics and data assimilation, providing enhanced modeling results compared to its predecessor, ERA-Interim [53]. It has been demonstrated that the direct assimilation of microwave-based SM products (e.g., SMAP SM and, recently, CYGNSS SM) into LSMs improves SM modeling skills [13,54,55]. ERA5-Land shares most of its parameterizations with ERA5 and enhances the description of the hydrological cycle, in particular soil moisture and lakes [56]. The SMAP L4 product is derived by assimilating SMAP TB observations into the NASA Catchment Land Surface Model [57,58]. The basic properties of the modeling products are shown in Table 2. Note that only the products with the timestamp closest to UTC 0:00 were collected. For each product, SM data in the topmost layer were evaluated, and the data were discarded if the soil temperature was less than 273 K.
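To make the two flagging strategies concrete, the sketch below applies a rigorous and a relaxed mask to a hypothetical table of SMOS-L3 retrievals. The column names (soil_moisture, surface_temperature, Soil_Moisture_Dqx, RFI_Prob) mirror the fields named above but are assumptions about a local data layout, not the official product schema; the thresholds follow the text, and the plausibility bound of 0-0.6 cm³·cm⁻³ is applied in both masks.

```python
import pandas as pd

# Hypothetical SMOS-L3 retrievals; column names and values are illustrative only.
df = pd.DataFrame({
    "soil_moisture": [0.12, 0.35, 0.71, 0.28, 0.05],              # cm3/cm3
    "surface_temperature": [280.0, 268.0, 281.0, 290.0, 285.0],   # K
    "Soil_Moisture_Dqx": [0.02, 0.03, 0.05, 0.09, 0.04],          # quality index
    "RFI_Prob": [0.05, 0.02, 0.01, 0.00, 0.30],                   # RFI probability
})

# Rigorous strategy (as described for SMOS-L3): drop frozen soils, poor quality and likely RFI.
rigorous = (
    (df["surface_temperature"] >= 273.0)
    & (df["Soil_Moisture_Dqx"] <= 0.06)
    & (df["RFI_Prob"] <= 0.1)
    & df["soil_moisture"].between(0.0, 0.6)
)

# Relaxed strategy (same as for AMSR2): keep any physically plausible value in 0-0.6 cm3/cm3.
relaxed = df["soil_moisture"].between(0.0, 0.6)

print(f"rigorous keeps {rigorous.sum()} of {len(df)}; relaxed keeps {relaxed.sum()} of {len(df)}")
```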
In Situ and Triple Collocation-Based Validations
A direct comparison of SM products with in situ observations is straightforward. For each station, quality-assured in situ SM observations and gridded SM products were spatially and temporally matched. The correlation coefficient (R), bias, ubRMSE and root mean square error (RMSE) values were calculated for each product. Taylor diagrams were used to compare R, RMSE and standard deviation (SD, square root of variance) values among these products. Based on the statistical analyses, the performance of each product was generalized. SD is a measure of the SM dynamic range. Low SD values mean low information content and exceptionally high SD values mean noisy retrievals. Correlation measures the overall consistency of SM products and observations. RMSE and ubRMSE measure the overall and bias-corrected SM differences, respectively. These metrics have been widely used in validation studies and are defined as follows:

R = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{N}(y_i - \bar{y})^2}}

bias = \frac{1}{N}\sum_{i=1}^{N}(y_i - x_i)

RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_i - x_i)^2}

ubRMSE = \sqrt{RMSE^2 - bias^2}

where x is the in situ SM observation, y is the gridded SM product, the overbar denotes the mean value, and N denotes the number of data pairs.

The TCA was performed by considering the scale difference of multiple SM products. TC-based correlation coefficient (TC_R) and RMSE (TC_RMSE) values can be calculated based on three error-independent datasets, usually composed of a passive-based, an active-based and a modeling-based dataset [20,28]. TC_R and TC_RMSE show the correlation and overall difference between each triplet member and the 'true' SM time series. For TCA, SM data in the a.m. and p.m. orbits were averaged to increase the sample size. The time series of SM anomaly was calculated based on a 31-day moving window, similar to [19], and the minimum length of the time series was 100. A critical hypothesis of TCA is the zero error cross-correlation (ECC) between SM triplets, which is usually violated even for active- and passive-based products. The consequence is that the evaluation results differ among triplets. Here, we calculated TC_R and TC_RMSE values for any possible triplet to find the lower and upper boundaries of the two metric values. We are aware that optimistic statistics can be obtained by including ECC-dependent products in TCA (e.g., SMAP and CYGNSS or ERA5 and ASCAT in a triplet), and the statistics might vary greatly among triplets [32]. Therefore, the median TC_R and TC_RMSE values were compared. Although the median values might still be biased, they can be used for a fair comparison among sites and products. All data processing was accomplished on the MATLAB R2015a platform. The mathematical form of TC_R and TC_RMSE (written here for dataset X in the triplet X, Y, Z) is defined as follows:

TC\_R_X = \sqrt{\frac{\sigma_{XY}\,\sigma_{XZ}}{\sigma_{XX}\,\sigma_{YZ}}}

TC\_RMSE_X = \sqrt{\sigma_{XX} - \frac{\sigma_{XY}\,\sigma_{XZ}}{\sigma_{YZ}}}

where X, Y and Z denote independent time series of SM anomaly (N > 100), and σ denotes variance (for one dataset, e.g., σ_XX) or covariance (for two datasets, e.g., σ_XY).
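A minimal numpy sketch of the metrics defined above, assuming two matched arrays of in situ and gridded SM and three anomaly series for the triple collocation case; the function names and synthetic series are illustrative and this is not the authors' MATLAB implementation.

```python
import numpy as np

def validation_metrics(x, y):
    """Standard pairwise metrics: R, bias, RMSE and ubRMSE (x = in situ, y = gridded product)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    bias = np.mean(y - x)
    rmse = np.sqrt(np.mean((y - x) ** 2))
    ubrmse = np.sqrt(rmse ** 2 - bias ** 2)
    return r, bias, rmse, ubrmse

def tc_metrics(x, y, z):
    """Extended triple collocation metrics for dataset x in the triplet (x, y, z) of anomaly series."""
    c = np.cov(np.vstack([x, y, z]))                 # 3x3 covariance matrix
    sxx, sxy, sxz, syz = c[0, 0], c[0, 1], c[0, 2], c[1, 2]
    tc_r = np.sqrt((sxy * sxz) / (sxx * syz))
    tc_rmse = np.sqrt(sxx - sxy * sxz / syz)
    return tc_r, tc_rmse

# Toy usage with synthetic series sharing a common 'true' signal plus independent noise.
rng = np.random.default_rng(0)
truth = rng.normal(0, 0.05, 400)
insitu, passive, model = (truth + rng.normal(0, s, 400) for s in (0.01, 0.03, 0.02))
print(validation_metrics(insitu, model))
print(tc_metrics(insitu, passive, model))
```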
Evaluating the Capabilities of Precipitation and Irrigation Detection
Precipitation and irrigation are the dominant natural and human factors of SM dynamics. The ability of multiple SM products to respond to precipitation signals was evaluated first. To this end, the 0.1° × 0.1° daily Integrated Multi-satellite Retrievals for Global Precipitation Measurement (IMERG) precipitation product [59] was collected. For each station, the daily precipitation amount (cumulative P in UTC 0:00-UTC 0:00, approximately local time 8 a.m.-8 a.m.) was calculated and correlated with the daily SM change (∆SM) over 24 h. To match precipitation and SM products, the in situ observations at 8 a.m., the SMOS/SMAP/ASCAT a.m. products (6 a.m., 9:30 a.m.), and the daily averaged AMSR2 products were used. The other products were composited to 8 a.m. or generated at 8:00-9:30 a.m. (Table 2), with minimal time differences from the precipitation product. Correlation coefficients were calculated between daily P and ∆SM for in situ SM observations and SM products in order to evaluate the diverse responses of SM dynamics to precipitation. Because no in situ precipitation data were available, only IMERG precipitation data were used in this study.
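A short sketch of this responsiveness check for a single station, assuming a hypothetical daily table in which each 8 a.m. SM reading is paired with the rain accumulated over the preceding 24 h; the column names and values are illustrative, not those of the IMERG or SM product files.

```python
import pandas as pd

# Hypothetical daily series at one station; precip_mm = rain accumulated in the 24 h
# ending at that day's 8 a.m. soil moisture reading.
idx = pd.date_range("2018-06-01", periods=8, freq="D")
daily = pd.DataFrame({
    "precip_mm": [0, 12, 0, 0, 25, 3, 0, 0],
    "sm":        [0.18, 0.26, 0.24, 0.22, 0.33, 0.31, 0.28, 0.26],  # cm3/cm3 at ~8 a.m.
}, index=idx)

# 24-hour SM change ending at each reading, paired with the rain over the same window.
daily["delta_sm"] = daily["sm"].diff()
pairs = daily.dropna(subset=["delta_sm"])
r = pairs["precip_mm"].corr(pairs["delta_sm"])
print(f"correlation between daily P and dSM: {r:.2f}")
```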
The ability of multiple SM products to capture irrigation signals was also evaluated. To this end, the monthly irrigation water use (IWU) product [33] was collected. Recently, Zhang et al. [33] considered multiple irrigation-related processes in the framework of hydrological balance and integrated multiple satellite observations to obtain ensemble IWU estimates from 2011-2018. To match the IWU product, multiple SM products were aggregated to monthly averages. Because meteorological stations are not distributed in cropland areas, the observations only reflect precipitation signals. The difference between gridded products and in situ observations should therefore reflect irrigation signals, if any. Based on this hypothesis, the monthly SM difference (gridded minus in situ) was correlated with the monthly IWU. The stronger the correlation, the better the capability of SM products to capture irrigation signals. Although representativeness errors might have an impact, results from the 20 stations provide insight into the selection of irrigation-sensitive SM products.

Figure 2 shows the statistical values for the validation of SM products at the 20 stations. Each boxplot shows the maximum, 75% quartile, median, 25% quartile and minimum of the metrics, including data availability, correlation coefficient, bias, ubRMSE and RMSE for the 20 stations. Data availability means the proportion of quality-assured SM data in the study period of 2017-2020. Generally, models provide more SM data than remote sensing, demonstrating the advantages of wide spatial and temporal coverage. Only a minor fraction of frozen-soil data (<5%) is discarded. ESA CCI Combined and Active have provided more than 90% of data in recent years, followed by CYGNSS. All AMSR2 products provide 50-80% of data, most for JAXA and least for LPRM X. ASCAT provides about 40% of data, whereas SMOS and SMAP provide much less data (<10%) due to a strict operational flagging strategy. A relaxed strategy (0-0.60 cm³·cm⁻³) can largely increase the data volume for L-band products (Figure 2). ESA CCI Passive has a wide range of data availability, as it integrates AMSR2, SMOS and SMAP data.
Overall Performance of SM Products
Modeling-based products generally outperform remote sensing-based products. ERA5 and ERA5 Land can better capture SM dynamics (R ≈ 0.8) and have lower and more stable ubRMSE values than other products. Despite large differences in spatial resolution, MERRA2 and SMAP L4 provide almost unbiased SM data and have minimal differences from observations. For remote sensing, the L-band outcompetes the C-/X-band for SM retrieval. The former has consistent data quality in the a.m. and p.m. orbits. In the L-band, SMAP products have lower uncertainties, although SMOS-IC outperforms in terms of correlation. Regarding ubRMSE and RMSE, SMAP DCA performs the best, followed by SMAP SCA-V, SCA-H and SMOS-IC (SMOS-L3 not included due to low data volume). SMAP DCA is almost unbiased, similar to ASCAT. A relaxed flagging strategy increases the L-band data volume, yet the data quality is not necessarily largely decreased (especially for R). All AMSR2 products are not well correlated with the observations (R < 0.4). NPD performs the best, followed by LPRM C2, and JAXA and SCA have the largest ubRMSE values. Positive biases are observed for LPRM products and negative biases for the other AMSR2 products. The differences in bias can be as large as 0.03 cm³·cm⁻³.

ESA CCI and ASCAT are better than other remotely sensed products, considering both data availability and accuracy. ESA CCI Combined performs the best, followed by ESA CCI Active and Passive. ASCAT is the optimal single-satellite-based product, outperforming SMAP and SMOS in data availability and AMSR2 in overall accuracy. Integration of ASCAT and other radar data makes ESA CCI Combined the best remotely sensed product. ESA CCI Combined is positively biased, similar to GLDAS Noah, as the former is rescaled to the latter. Larger positive and negative biases were observed for ESA CCI Active and Passive, respectively. CYGNSS significantly extends the spatial coverage of SMAP data, at the expense of reduced data quality, which is even lower than that of poorly flagged SMAP data.

Figure 3 shows Taylor diagrams of all SM products for two cropland stations (Figure 3a,b) and one forest land station (Figure 3c). All modeling-based products report large SM dynamics in cropland and low SM dynamics in forest land. ERA5 and ERA5 Land have wider SM dynamic ranges and consistently stronger correlations with observations than other products. Modeling-based products are closer to in situ observations than remote sensing-based products, including ESA CCI Combined, which is rescaled to GLDAS Noah. AMSR2 products differ greatly among algorithms. The JAXA and SCA products are close to each other.
Specific Behaviors of SM Products
Figure 4 shows that most products can capture short-term SM dynamics well, except for AMSR2 and CYGNSS. The AMSR2 LPRM products perform similarly and have wide SM dynamic ranges, with unexpectedly large values in the winter season. Only LPRM X-band retrievals are shown in Figure 4 for comparison with other AMSR2 X-band retrievals. AMSR2 JAXA and SCA have abnormally high values in the summer season and consistently low values (~0.01 cm³·cm⁻³) in other seasons. AMSR2 NPD has a very narrow SM dynamic range. CYGNSS also has a narrow SM dynamic range with large short-term noise. SMAP products are in better agreement with in situ observations than SMOS-IC and SMOS-L3 (not shown due to the low data volume). All ESA CCI products reproduce the in situ observed SM dynamics well. For ESA CCI Combined, data fusion and rescaling to GLDAS Noah SM reduce the dry biases in ESA CCI Active and the wet biases in ESA CCI Passive. ASCAT performs similarly to ESA CCI Active, but the retrievals are almost unbiased.

Forced by meteorological datasets, all modeling-based products can reproduce the temporal pattern of SM variability well. It is interesting to observe that the 0-10 cm ERA5 and ERA5 Land products fit better with in situ observations for dry soils, and the 0-5 cm MERRA2 and SMAP L4 products fit better for wet soils (Figure 4). The depth of the top soil layer and the quality of the forcing datasets might account for the differences. A detailed comparison with in situ observations is shown in Figure 5. Considering data availability and overall accuracy, only six major products are presented here. Regression slope values can manifest the dynamic range of SM values. ERA5 has the largest SM dynamic range, followed by SMAP L4, MERRA2, ESA CCI Combined, GLDAS Noah and CYGNSS.
TC-Based Comparison of SM Products
TC-based correlations confirm the advantages of models over remote sensing, especially in the central water-contaminated and southern hilly areas. CYGNSS has a moderate correlation in the northern plain, where SMAP products also perform better. SMAP cannot provide enough quality-assured data for calibrating CYGNSS in the rest of study area, leading to decreased CYGNSS data quality (R < 0.2, Figure 6a). ESA CCI Combined has also decreased data quality in these areas (Figure 6b), where satellite-based SM retrievals generally have large uncertainties. Especially in forest areas, low SM dynamics and high retrieval uncertainties contribute to low correlations. Model performances are less dependent on land cover. SMAP L4 and ERA5 perform better in the northern plain, and the performance decreases marginally in the rest of the study area. Compared to GLDAS Noah and MERRA2, the assimilation-based ERA5 and SMAP L4 performed slightly better in the central and southern areas. Together with ESA CCI Combined, all modeling-based products have ubRMSE values that are better than 0.04 cm 3 ·cm −3 , except for ERA5 (Figure 7). Although ERA5 shows a median RMSE value better than 0.04 cm 3 ·cm −3 , the RMSE values are generally larger than other modeling-based products and exceed 0.04 cm 3 ·cm −3 for some triplets. It seems that TC-based RMSE depends on the dynamic range of SM products. A wide SM dynamic range (e.g., for ERA5) might also amplify random errors, and a narrow SM dynamic range produces low σ XX values in Equation (7) and thus low RMSE values.
Diverse SM Responses to Precipitation Events
In situ observations show stronger SM responses to precipitation in cropland (R = 0.4) than in forest land (R = 0.3) (Figure 8). The responses of SM products to precipitation are shown in Figure 9. Among the AMSR2 products, LPRM X can best reproduce the correlation (R > 0.15) and JAXA the worst (almost uncorrelated). No SMOS results are available because of the low data volume. SMAP products also show weak correlations, and a relaxed flagging strategy does not improve performance. CYGNSS produces slightly better correlations than SMAP, attributable to higher data availability. A single satellite-based ASCAT product cannot reflect SM dynamics due to precipitation events. Integrating both MetOp-A and MetOp-B ASCAT data, ESA CCI Active shows much improved correlations (R > 0.3). ESA CCI Passive performs better than any individual radiometer-based product in response to precipitation, and the responsiveness of ESA CCI Combined is further enhanced by blending the Active product. All modeling-based products show close responsiveness to in situ observations. ERA5 performs the best, followed by SMAP L4, ERA5 Land, GLDAS Noah and MERRA2. However, with a finer spatial resolution, ERA5 Land does not perform as well as ERA5. These modeling-based products are forced by diverse precipitation datasets. However, the good data quality shared by precipitation datasets in this data-rich area is likely the major reason for the strong SM responses to IMERG-based precipitation.

Figure 9. Correlations between daily precipitation amount and soil moisture change based on multiple products. Each boxplot shows the distribution of correlation coefficient values at the 20 stations, including the maximum, 75% quartile statistics, median, 25% quartile statistics and the minimum. The symbol "(+)" means a relaxed flagging strategy for L-band products.
Diverse SM Responses to Irrigation Events
The ability to capture irrigation signals differs among products and algorithms ( Figure 10). As expected, modeling-based products can barely capture irrigation signals (R < 0.3), even though satellite data are assimilated (e.g., for ERA5 and SMAP L4). The radar-based ASCAT product is almost insensitive to irrigation signals, and ESA CCI Active further decreases sensitivity. Despite the low data volume, SMAP products are the most sensitive to irrigation signals. SMAP SCA-H and SCA-V have stronger sensitivities (R > 0.5) than SMAP DCA, although the latter shows better overall accuracy. It is noteworthy that the relaxed flagging strategy does not deprive SMAP of its ability to capture irrigation signals. The good performance even transfers to CYGNSS. SMOS products are less effective than SMAP products because of their lower data volume and overall accuracy. Due to contrasting SM values in cropping (high SM) and non-cropping (low SM) seasons (Figure 4), it is not surprising to see AMSR2 JAXA and SCA are strongly correlated (R > 0.5) with irrigation water use. It seems that single-channel algorithms are more competent than LPRM algorithms for detecting irrigation signals. With an LPRM algorithm, all three AMSR2 LPRM products and the ESA CCI Passive product have negative correlations (R < −0.3) with irrigation water use.
Figure 10. Correlations between soil moisture bias (gridded minus in situ soil moisture) and irrigation water use on monthly scales. Each boxplot shows the distribution of correlation coefficient values at the 20 stations, including the maximum, 75% quartile statistics, median, 25% quartile statistics and the minimum. The symbol "(+)" means a relaxed flagging strategy for L-band products.
Practices for Optimal Product and Algorithm Selection
No SM products perform consistently better than others. Modeling-based products have the advantages of continuous spatial and temporal coverage, strong correlations with in situ observations, and timely responses to rainfall events. Despite less accurate forcing data over poorly gauged areas, models still perform better and more consistently across different landscapes than remote sensing. The latter does not perform well or even fails to produce meaningful retrievals in the central and southern parts of Anhui province. The main difficulties include the separation of water emissions and the effects of complex terrains and/or dense vegetation. This is evidenced by the extremely low data volume of operational SMOS and SMAP products. As a result, data assimilation (e.g., SMAP L4) can marginally improve SM modeling in these areas. Recently, several studies have demonstrated the limited contribution of data assimilation to SM and carbon fluxes modeling under different circumstances [9,60]. This might also explain the large uncertainties of the CYGNSS product, which uses the SMAP product as a calibration reference. The increased coverage of the CYGNSS product is at the expense of decreased accuracy. Although AMSR2 products are not acceptable in terms of absolute accuracy, JAXA and SCA detect plausible irrigation signals. It is more likely a coincidence arising from high SM biases in cropping seasons and low biases in non-cropping seasons. If we focus on the detection of irrigation events, SMAP products are an optimal choice, especially SCA products.
High-resolution SM products are necessary for regional-scale drought monitoring [61,62], especially in intensive agricultural regions. ERA5 Land performs better than or at least comparably to ERA5 in terms of multiple metrics (Figure 2), providing a finer-resolution (~9 km) alternative for drought monitoring. This study corroborates the current use of ERA5 Land SM for drought monitoring in Anhui province. Remote sensing offers an objective description of land surfaces. However, the products do not correlate as well with in situ observations or respond as well to rainfall events on a daily scale. Short-term random noise might be the major reason. As a result, temporal aggregation can improve the comparability of remotely sensed and modeling-based products. For example, Liu et al. [63] observed a slightly better performance of ESA CCI over GLDAS Noah for global drought monitoring on a monthly scale. Moreover, an ensemble of results from multiple datasets is recommended.
Implications for Improving the Retrieval Algorithm
AMSR-E and AMSR2 provide over 20 years of multifrequency global observations. Several retrieval algorithms have been developed, among which LPRM C2 and NPD stand out in this study. More stringent flagging strategies might improve the evaluation metric values, especially considering the RFI effects. Poor flagging partly contributes to a high percentage of data availability and, in the meantime, causes low data quality. This applies equally to other AMSR2 products. AMSR2 SCA has high values in wet seasons and low values in dry seasons. Soil and vegetation parameters can be refined for better retrieval as SCA proves to be successful and serves previously as a baseline algorithm for SMAP. More importantly, it is necessary to rethink the appropriate parameterization, as multiple retrieval algorithms have diverse SM biases.
SMAP SCAs have proven superior abilities in capturing irrigation signals. The use of real-time instead of climatological vegetation data might further improve SM retrievals and the detectability of irrigation signals because cropland phenology experiences substantial inter-annual variabilities [64,65]. Water correction is critical to SM retrieval in regions with a dense network of rivers and lakes. This is probably the main reason for the decreased SM accuracy in the central Anhui province. The distribution of complex terrains and dense forests explains the low SM accuracy in the south. Both effects apply equally to other remotely sensed products but are not decisive for modeling-based products. Recent studies have shown SM sensitivities of L-band radiometry under temperate forest canopies and deeper than a few centimeters [66,67]. This might improve SM retrieval under dense vegetation cover. Biases in effective soil surface temperature data also play a role in SM retrieval [46,68,69], especially under dense vegetation cover that masks out a large portion of soil emissions.
The ESA and NASA effects on GNSS-R for SM retrieval have been well elaborated upon by Pierdicca et al. [70]. This technique is currently far from mature for SM retrieval, although more advanced algorithms have been recently developed, e.g., change detection in [71], machine learning in [72,73] and semiempirical method in [74]. The currently operational CYGNSS SM product is generated using a linear relationship calibrated between reflectivity and SMAP SM. The residual nonlinearities and uneven distribution of calibration samples explain the reduced SM dynamic range, i.e., underestimating high SM values and overestimating low SM values [70]. The same issue is also encountered in remote sensing of soil salinity based on linear regression [75]. Moreover, CYGNSS retrievals are also affected by low-quality SMAP data and noisy observations over mountainous regions, making SM time series noisier than SMAP. To meet both ends, nonlinear and physically based methods are needed for further improvement. For nonlinear methods, such as machine learning [72,73], high-quality satellite retrievals are still required. LPRM products (AMSR2 and ESA CCI Passive) show strong negative correlations between irrigation water use and SM bias. The underestimation of high SM values and/or overestimation of low SM values might be responsible. Although spatial scales differ between product grids and site observations, SMAP products still show strong positive correlations. At the least, LPRM products are biased from SMAP products. This result underscores the importance of comparing multiple SM climatology [76] and further investigation into the LPRM algorithm.
Recommendations for Validation of Soil Moisture Products
Validation practices for satellite soil moisture products have been well documented by Gruber et al. [11]. The representativeness of in situ sites is emphasized in this study. Meteorological stations record long-term soil moisture observations, which are invaluable for validation purposes. However, these stations are distributed for ease of management, generally far away from agricultural land. The observations only naturally reflect soil drying and wetting processes and are unaffected by irrigation events. This might be one of the reasons for better model performance than remote sensing. Based on MODIS products, we observed a recent (2000-2021) NDVI decreasing trend at the 20 meteorological stations. The intensive urbanization processes in China might lower the representativeness of in situ observations. The measurement depth of in situ data is also a critical factor affecting the validation results. The 10-cm measurement depth is closer to that of modeling-based products, e.g., 0-5 cm, 0-7 cm and 0-10 cm, which produces better validation metrics values. Remotely sensed products have a shallower penetration depth of 0-2 cm at the Xand C-bands and 0-5 cm at the L-band. The inconsistencies in soil depth underscore the difficulties in SM product evaluation, especially for biases.
The other recommendation is on the validation method. TCA has several assumptions and the results might differ among triplets. The basic assumption is a linear relationship between SM datasets and the unknown true SM time series plus zero-mean random noise. The core assumption is zero error cross-correlation between SM datasets, which is not held even for passive-and active-based retrievals. Although it is feasible to examine this assumption by introducing a fourth dataset [24], using multiple triplets is more practical. For example, Zheng et al. [27] used multiple triplets and a bootstrapping technique to enhance the TCA results. Similarly, in this study, the method can depict the upper and lower boundaries of TC-based metric values. It becomes more useful with a growing number of products from observations, models and remote sensing. The median correlation and RMSE values are more robust, reducing the risk of over-optimistic evaluation results.
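To illustrate the multiple-triplet recommendation, the sketch below enumerates every active/model combination for a target passive product, computes the TC correlation in each triplet, and reports the median and the lower/upper boundaries. The tc_r helper repeats the standard extended triple collocation expression used earlier, and the dictionary of anomaly series with product-like names is purely illustrative, not the study's actual data.

```python
from itertools import product
import numpy as np

def tc_r(x, y, z):
    """Extended triple collocation correlation of x against the unknown truth."""
    c = np.cov(np.vstack([x, y, z]))
    return np.sqrt((c[0, 1] * c[0, 2]) / (c[0, 0] * c[1, 2]))

rng = np.random.default_rng(1)
truth = rng.normal(0, 0.05, 300)
noisy = lambda s: truth + rng.normal(0, s, 300)

# Illustrative anomaly series grouped by product type (names are placeholders).
passive = {"SMAP": noisy(0.02), "SMOS-IC": noisy(0.03)}
active = {"ASCAT": noisy(0.03), "CCI-Active": noisy(0.025)}
model = {"ERA5": noisy(0.015), "GLDAS": noisy(0.02)}

target = passive["SMAP"]
scores = [tc_r(target, a, m) for a, m in product(active.values(), model.values())]
print(f"SMAP TC_R over {len(scores)} triplets: "
      f"median={np.median(scores):.2f}, min={min(scores):.2f}, max={max(scores):.2f}")
```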
Conclusions
This study compared multiple remotely sensed, modeling- and assimilation-based SM products against in situ observations in a humid to semi-humid transitional region with diverse landscapes. Models generally outperform remote sensing in hilly and densely vegetated areas and areas with developed water systems. Remote sensing has difficulties in these areas, as evidenced by the extremely low data volume of operational SMOS and SMAP products. The limited and noisy SMAP reference data are mainly responsible for the low accuracy and narrow dynamic range of the CYGNSS product. For the same reason, data assimilation can marginally improve SM modeling. AMSR2 products have diverse but generally low performances depending on retrieval algorithms, with LPRM C2 and NPD performing better in terms of overall accuracy. ASCAT is the optimal single-satellite product, having both acceptable accuracy and spatial coverage. Models can better reproduce the responses of SM to precipitation events than remote sensing, while by nature they cannot reflect irrigation events. SMAP SCA-H and SCA-V are among the best products for detecting irrigation signals. The plausible irrigation signals revealed by AMSR2 SCA and JAXA are likely caused by retrieval errors. All LPRM products failed to identify irrigation events, probably due to an overestimation of low SM values and/or an underestimation of high SM values. The evaluation results provide guidance to select optimal products, improve retrieval algorithms and recommend common practices for SM validation.

Data Availability Statement: The AMSR2 LPRM SM product can be found here: https://disc.gsfc.nasa.gov/datasets?keywords=AMSR2; the AMSR2 JAXA SM product can be found here: ftp://ftp.gportal.jaxa.jp; the AMSR2 LANCE (NPD and SCA) SM product can be found here: https://n5eil01u.ecs.nsidc.org/DP1/AMSA/AU_Land.001/; the SMOS L3 and SMOS IC SM products can be found here: ftp://ftp.ifremer.fr; the SMAP L3 SM product can be found here: https://nsidc.org/data/SPL3SMP/versions/8; the CYGNSS SM product can be found here: https://data.cosmic.ucar.edu/gnss-r/soilMoisture/cygnss/level3/; the ASCAT SM product can be found here: https://navigator.eumetsat.int/product/EO:EUM:DAT:METOP:SOMO25; the ESA CCI SM product can be found here: https://esa-soilmoisture-cci.org/data; the GLDAS Noah SM product can be found here: https://ldas.gsfc.nasa.gov/gldas/; the MERRA2 SM product can be found here: https://disc.gsfc.nasa.gov/datasets?keywords=MERRA2; the ERA5 SM product can be found here: https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5; the ERA5 Land SM product can be found here: https://www.ecmwf.int/en/forecasts/dataset/ecmwf-reanalysis-v5-land; the SMAP L4 SM product can be found here: https://nsidc.org/data/SPL4SMGP/versions/6; the MODIS NDVI product can be found here: https://ladsweb.modaps.eosdis.nasa.gov/. The GPM IMERG precipitation product can be found here: https://disc.gsfc.nasa.gov/datasets/GPM_3IMERGDF_06/summary?keywords=%22IMERG%20final%22; the irrigation water use product is openly available in the National Tibetan Plateau/Third Pole Environment Data Center (TPDC) at doi.org/10.11888/hydro.tpdc.271220. A registration is generally compulsory for data collection. In situ soil moisture data are not publicly available due to data privacy policy.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-07-14T18:27:23.930Z | 2022-07-11T00:00:00.000 | {
"year": 2022,
"sha1": "8f32d2533b1c617ae815fb5166f69be6ab33bdf9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/14/14/3339/pdf?version=1657543001",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "153dcfa5f7916229484cf234e85762b094129d49",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
249644963 | pes2o/s2orc | v3-fos-license | PROcalcitonin and NEWS2 evaluation for Timely identification of sepsis and Optimal use of antibiotics in the emergency department (PRONTO): protocol for a multicentre, open-label, randomised controlled trial
Introduction Sepsis is a common, potentially life-threatening complication of infection. The optimal treatment for sepsis includes prompt antibiotics and intravenous fluids, facilitated by its early and accurate recognition. Currently, clinicians identify and assess severity of suspected sepsis using validated clinical scoring systems. In England, the National Early Warning Score 2 (NEWS2) has been mandated across all National Health Service (NHS) trusts and ambulance organisations. Like many clinical scoring systems, NEWS2 should not be used without clinical judgement to determine either the level of acuity or a diagnosis. Despite this, there is a tendency to overemphasise the score in isolation in patients with suspected infection, leading to the overprescription of antibiotics and potentially treatment-related complications and rising antimicrobial resistance. The biomarker procalcitonin (PCT) has been shown to be useful in specific circumstances to support appropriate antibiotics prescribing by identifying bacterial infection. PCT is not routinely used in the care of undifferentiated patients presenting to emergency departments (EDs), and the evidence base of its optimal usage is poor. The PROcalcitonin and NEWS2 evaluation for Timely identification of sepsis and Optimal (PRONTO) study is a randomised controlled trial (RCT) in adults with suspected sepsis presenting to the ED to compare standard clinical management based on NEWS2 scoring plus PCT-guided risk assessment with standard clinical management based on NEWS2 scoring alone and compare if this approach reduces prescriptions of antibiotics without increasing mortality. Methods and analysis PRONTO is a parallel two-arm open-label individually RCT set in up to 20 NHS EDs in the UK with a target sample size of 7676 participants. Participants will be randomised in a ratio of 1:1 to standard clinical management based on NEWS2 scoring or standard clinical management based on NEWS2 scoring plus PCT-guided risk assessment. We will compare whether the addition of PCT measurement to NEWS2 scoring can lead to a reduction in intravenous antibiotic initiation in ED patients managed as suspected sepsis, with at least no increase in 28-day mortality compared with NEWS2 scoring alone (in conjunction with local standard care pathways). PRONTO has two coprimary endpoints: initiation of intravenous antibiotics at 3 hours (superiority comparison) and 28-day mortality (non-inferiority comparison). The study has an internal pilot phase and group-sequential stopping rules for effectiveness and futility/safety, as well as a qualitative substudy and a health economic evaluation. Ethics and dissemination The trial protocol was approved by the Health Research Authority (HRA) and NHS Research Ethics Committee (Wales REC 2, reference 20/WA/0058). In England and Wales, the law allows the use of deferred consent in approved research situations (including ED studies) where the time dependent nature of intervention would not allow true informed consent to be obtained. PRONTO has approval for a deferred consent process to be used. Findings will be disseminated through peer-reviewed journals and presented at scientific conferences. Trial registration number ISRCTN54006056.
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ Sepsis has a problem with both overdiagnosis and underdiagnosis, and a major strength of PROcalcitonin and NEWS2 evaluation for Timely identification of sepsis and Optimal use of antibiotics in the emergency department (PRONTO) is the use of coprimary outcomes to assess effectiveness as an antimicrobial stewardship intervention but also to ensure safety, which is vital for widespread clinical adoption of this intervention.
⇒ PRONTO is designed to integrate into routine UK clinical pathways and includes assessment of acceptability and practicality in emergency department settings.
⇒ Limitations of the study design include the intervention being a change in risk assessment rather than a formal prescribe/do not prescribe rule for antibiotic use, which could lead to a higher rate of clinician preference in the study.
⇒ The use of deferred consent also has the potential to increase participant withdrawal from the trial, as not all patients would have agreed to prospective informed consent.
INTRODUCTION
Sepsis is defined as life-threatening organ dysfunction caused by a dysregulated host response to infection 1 and is a medical emergency requiring prompt antimicrobial therapy and physiological support. The identification, assessment and management of sepsis is challenging because of its many non-specific symptoms and signs, which can be caused by both infectious and non-infectious diseases. In line with international recommendations, the UK National Institute for Health and Care Excellence (NICE) sepsis guidelines suggest the administration of intravenous antibiotics within an hour in patients at risk of intensive care unit (ICU) admission and death. 2 However, up to 50% of patients initially managed as sepsis in the emergency department (ED) do not have a final diagnosis of sepsis 3 4 and often do not have an infection. 5 6 The current approach leads to overuse of antibiotics with the associated risk of antimicrobial resistance, antibioticrelated adverse drug reactions (eg, Clostridium difficile infection) 7 and extended hospital stays. The challenge of delivering high-quality sepsis care in an ED setting has been well recognised. 8 9 The third international consensus definition (Sepsis 3) 1 recommended use of the quick Sequential Organ Failure Assessment (qSOFA) score, to identify patients at high risk of death and prolonged ICU stay. National Early Warning Score (NEWS) and NEWS2 are rapid physiology-based scoring systems which are used to detect and track the deteriorating patient. NEWS has been demonstrated to have better diagnostic accuracy to qSOFA in detection of severe outcomes in sepsis. 10 11 However, with its higher sensitivity comes reduced specificity which can result in significant increased numbers of patients being managed as high risk for suspected sepsis with a corresponding pressure on ED departments. NEWS2 replaced NEWS scoring system as the standard monitoring tool in the National Health Service (NHS) in 2019 12 and has been found to be comparable or superior to NEWS. [13][14][15][16] In October 2021, Surviving Sepsis Campaign recommended that immediate antibiotics (within 1 hour) should be targeted to those with septic shock and others with suspected sepsis could wait for up to 3 hours for initial assessment to target antimicrobial choice or identify non-infectious mimics. 17 The emergence of COVID-19 has exacerbated this previously highlighted problem. COVID-19 is a viral infection which presents within the sepsis syndrome constellation. Secondary bacterial infections are uncommon at presentation to ED (3.5%), 18 despite this up to 83% of patients with COVID-19 received antibiotics. 19 20 NEWS2 scores are broadly predictive of COVID-19 outcome on presentation but does not appear to be predictive of bacterial coinfection. 21 Initial investigations in the ED can be helpful in distinguishing between COVID-19 and bacterial pneumonia including typical radiographic change, and COVID-19 point-of-care diagnostics. 8 These results would be available within 3 hours for assessment and could potentially reduce unnecessary antimicrobial usage in COVID-19 management.
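For readers unfamiliar with the score, the sketch below computes a NEWS2 aggregate from a single set of observations using the published Royal College of Physicians thresholds as I recall them (SpO2 Scale 1 only). It is purely illustrative and is not the trial's scoring implementation; the thresholds should be checked against the official NEWS2 chart before any use.

```python
def news2_score(resp_rate, spo2, on_oxygen, systolic_bp, pulse, alert, temperature):
    """Aggregate NEWS2 score (SpO2 Scale 1 only); thresholds are illustrative, verify officially."""
    def band(value, bands):
        # bands: list of (upper_bound_inclusive, points) in ascending order of bounds.
        for upper, points in bands:
            if value <= upper:
                return points
        return bands[-1][1]

    score = 0
    score += band(resp_rate, [(8, 3), (11, 1), (20, 0), (24, 2), (float("inf"), 3)])
    score += band(spo2, [(91, 3), (93, 2), (95, 1), (float("inf"), 0)])
    score += 2 if on_oxygen else 0
    score += band(systolic_bp, [(90, 3), (100, 2), (110, 1), (219, 0), (float("inf"), 3)])
    score += band(pulse, [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2), (float("inf"), 3)])
    score += 0 if alert else 3  # new confusion, voice, pain or unresponsive all score 3
    score += band(temperature, [(35.0, 3), (36.0, 1), (38.0, 0), (39.0, 1), (float("inf"), 2)])
    return score

# Example: RR 22, SpO2 95% on air, SBP 105, HR 112, alert, temp 38.4 -> 2+1+0+1+2+0+1 = 7
print(news2_score(22, 95, False, 105, 112, True, 38.4))
```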
Procalcitonin (PCT) is a reliable biomarker that changes early in the course of bacterial infection and is currently the biomarker with the most available evidence to identify bacterial infections and inform antibiotic prescription decisions. A recent Cochrane meta-analysis 9 demonstrated that the use of PCT to guide antibiotic treatment in patients with acute respiratory infections reduced antibiotic exposure and side effects, and improved survival. Nevertheless, while the US Food and Drug Administration (FDA) has approved PCT assays for use in sepsis, current UK NICE guidance does not recommend PCT use on the basis of insufficient evidence. 22 23 PCT is predictive of outcome in COVID-19, and this may be because of its ability to identify superadded bacterial infection. 10 11 24 The available evidence suggests a low PCT will have good negative predictive value for a bacterial coinfection in cases of COVID-19.
Aims and objectives
Primary objective
To assess whether the addition of PCT measurement to NEWS2 scoring leads to a reduction in intravenous antibiotic initiation at 3 hours, with no increase in 28-day mortality, compared with NEWS2 scoring alone in the management of patients presenting to hospital EDs in England and Wales with suspected sepsis.
Secondary objective
The assessment of (1) feasibility, (2) cost-effectiveness and (3) acceptability to healthcare practitioners, patients and their family
METHODS AND ANALYSIS Study design
PROcalcitonin and NEWS2 evaluation for Timely identification of sepsis and Optimal use of antibiotics (PRONTO) is a multicentre, parallel two-arm, open-label, individually randomised controlled trial with two coprimary endpoints, an internal pilot phase and group-sequential stopping rules for effectiveness and futility/safety. Participants will be randomised in a ratio of 1:1 to standard clinical management based on NEWS2 scoring or standard clinical management based on NEWS2 scoring plus PCT-guided risk assessment.
Internal pilot
An internal pilot phase will be conducted over the first 9 months of the recruitment period with ten lead sites. Predefined progression criteria will be used to assess feasibility to progress to the full trial, such as site and patient absolute recruitment and consent rate, proportion of patients undergoing PCT assessments and the ability to collect coprimary outcome data.
Inclusion criteria
Up to 20 EDs from across England and Wales will recruit adults (≥16 years) who are being managed as suspected sepsis over a 24-month period. There is no minimum NEWS2 score for inclusion into the study.
Exclusion criteria
Patients already receiving intravenous antibiotics; patients currently receiving myeloablative chemotherapy; patients with solid-organ transplantation, or allogeneic bone marrow or stem cell transplantation within 3 months prior to consent; or patients known to require urgent surgical intervention at the time of randomisation.
Patients with an advance directive to withhold life-sustaining treatment or patients not wishing to receive cardiopulmonary resuscitation may qualify provided they receive all other resuscitative measures, for example respiratory support and fluid resuscitation.
Study procedures and progress
The trial schema is shown in figure 1.
The COVID-19 pandemic resulted in a delay to the original start date of June 2020. The first participant was recruited on 20 November 2020. The current planned end date is 30 November 2022.
Identification and screening
Patients with suspected sepsis will be identified at ED triage. After initial NEWS2 scoring and assessment according to the current standard of care, the eligibility criteria will be assessed and, if no exclusion criteria apply, patients will be enrolled into the trial and randomised. A screening log of all eligible and randomised patients will be kept at each site so that any biases from differential recruitment can be detected.
Randomisation
Participants will be individually randomised in a 1:1 ratio by delegated research staff within the ED either to standard clinical management based on NEWS2 scoring (control) or standard clinical management based on NEWS2 scoring plus PCT-guided risk assessment (intervention). We will use minimisation with NEWS2 score (≥5 or <5) and site as balancing factors and add a random element to reduce the risk of subversion. 26 This will be implemented in a secure 24-hour web-based randomisation programme controlled centrally by the Centre for Trials Research (CTR) in Cardiff. Full details are provided in the PRONTO randomisation strategy.
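As an illustration of how minimisation with a random element can work, the sketch below allocates one patient at a time while tracking marginal balance over the two protocol factors (NEWS2 ≥5 and site). The imbalance metric and the 0.8 probability of following the minimisation choice are illustrative assumptions, not the actual PRONTO algorithm, which is defined in the centrally held randomisation strategy.

```python
import random
from collections import defaultdict

ARMS = ("control", "intervention")
counts = {arm: defaultdict(int) for arm in ARMS}  # patients per arm at each factor level

def allocate(news2_high: bool, site: str, p_follow: float = 0.8) -> str:
    """Pocock-Simon-style minimisation sketch; all parameters are illustrative only."""
    levels = [("news2_ge5", news2_high), ("site", site)]
    other = {"control": "intervention", "intervention": "control"}
    # How uneven the factor levels would become if the new patient went to each arm.
    imbalance = {
        arm: sum(abs(counts[arm][lv] + 1 - counts[other[arm]][lv]) for lv in levels)
        for arm in ARMS
    }
    preferred = min(ARMS, key=lambda a: imbalance[a])
    # Random element: follow the minimisation choice only with probability p_follow.
    arm = preferred if random.random() < p_follow else random.choice(ARMS)
    for lv in levels:
        counts[arm][lv] += 1
    return arm

print(allocate(news2_high=True, site="site_03"))
```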
Trial intervention
The BRAHMS PCT-direct reader (ThermoFisher Diagnostics, Altrincham, Cheshire, UK) is a fully validated, CE-marked point-of-care test to determine levels of PCT in the blood. The test requires 20 µL of blood, which will be obtained either from venous blood during standard care procedures at triage or via a finger-prick. This will be used in combination with NEWS2 assessment of adult patients with suspected sepsis in the ED, using a guidance-only algorithm for clinicians (figure 1). The risk algorithm categorises individuals as low, medium or high risk, with corresponding interpretation and management (table 1). Clinicians have oversight at all times as to whether to adhere to the algorithm. As currently mandated in UK NICE clinical guidelines and quality standard QS161, 27 urgent senior review within an hour will take place should any healthcare provider identify at least one risk factor indicating high risk of progression to severe illness or death regardless of underlying aetiology. This equates to a NEWS2 ≥5 or an individual having a single feature of the evidence-based 'NICE high-risk criterion'.
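The categorisation logic itself lives in table 1 (not reproduced here). Purely to illustrate the shape of such a guidance-only rule, a minimal sketch follows; the PCT cut-offs (0.25 and 0.5 µg/L) and the way they are combined with NEWS2 are hypothetical placeholders and not the trial's algorithm.

```python
def pct_news2_risk(pct_ug_per_l: float, news2: int) -> str:
    """Illustrative guidance-only risk categorisation.

    The numeric cut-offs below are HYPOTHETICAL examples; the trial's
    categories are defined in table 1 of the protocol.
    """
    if news2 >= 5 and pct_ug_per_l >= 0.5:
        return "high risk"     # e.g. urgent senior review, early IV antibiotics
    if news2 >= 5 or pct_ug_per_l >= 0.25:
        return "medium risk"   # e.g. senior review, reassess within 3 hours
    return "low risk"          # e.g. antibiotics can usually await full assessment

print(pct_news2_risk(pct_ug_per_l=0.1, news2=3))  # -> "low risk"
```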
Informed consent
Research carried out in emergency situations is challenging in terms of obtaining consent. Emergency research is when treatment needs to be given urgently, and it is necessary to take urgent action for the purposes of the study. In some emergency situations people may lack capacity to give consent themselves and obtaining consent from a legal representative or consulting others is not reasonably practicable. In England and Wales, the law allows adults who lack capacity to take part in emergency research without prior consent from a legal representative or consulting others, if certain conditions are met (Medicines for Human Use (Clinical Trials) Amendment (No 2) Regulations SI 2006 2984, Mental Capacity Act s32). 28 Given the requirement for rapid clinical assessment and treatment in the management of suspected sepsis, for this trial we will use a deferred consent model. Patients and their relatives will be informed that a study is ongoing, but a lengthy consent discussion will not be had so as not to delay treatment. Should the patient or consultee wish not to take part at this point, then the decision will be respected and the patient will not be enrolled into the trial. Following randomisation, an approach to obtain informed consent will be made as soon as is practicably feasible, ideally within 72 hours (figure 2). Where a participant lacks mental capacity, a maximum of three approaches will be made. After three approaches, or if the participant is not likely to regain mental capacity, a personal consultee will be approached. In extreme circumstances, where no personal consultee can be identified, a nominated consultee will be approached. Separate informed consent will be taken for participation in the qualitative data collection. Patients who do not consent to continue in the study will be withdrawn completely from the study. A tiered consent model is used in this study and allows participants to consent to different aspects of the study (online supplemental appendix table 1). An example participant consent form is available in the online supplemental appendix.
Data collection during primary admission
All data collection will be by electronic data capture using a bespoke database developed by the CTR and hosted by Cardiff University secure servers. It is encrypted and accessed by individual username and password. Paper copies of all case report forms will be available. Essential documents will be kept securely in a locked cupboard, and at the end of the trial, will be archived at an approved external storage facility for 10 years. A member of the research team in the ED will undertake the data collection relating to the NEWS2 screening, trial intervention and whether clinical teams followed the intervention or standard of care risk assessment. Participants who consent to continue in the study will have daily information collected from the date of randomisation until they are discharged from hospital or until day 28, whichever is sooner. Trial data are collected from patients' health records and no trial visits occur between consent and day 28. Key follow-up data are listed in online supplemental appendix table 2.
FOLLOW-UP
Twenty-eight-day follow-up
Day 28 follow-ups will be conducted via telephone or in person if the participant remains an inpatient. These will comprise a European Quality of Life five dimension, five level (EQ-5D/5L) validated questionnaire for participant or proxy completion, and a Health Economics questionnaire where patient outcomes (readmission, retreatment, hospital-acquired infection) and use of healthcare resources (hospital admissions, outpatient parenteral antimicrobial therapy, other prescribed medicines, privately purchased over-the-counter medicines, General Practitioner (GP) and hospital outpatient attendance) will be captured. In addition, direct non-medical costs borne by patients/carers as a result of attending hospital (travel costs, childcare costs, expenses incurred while in hospital, self-reported lost earnings and other direct non-medical expenses) will be collected.
Ninety-day follow-up
EQ-5D/5L questionnaires will be repeated, and a shortened Health Economics questionnaire will be completed to capture any additional costs or hospital admissions since the day 28 questionnaires.
Withdrawal
Participants have the right to withdraw from the study at any time and can request that all data collected up to that point is not used.
Safety and pharmacovigilance
The trial population comprises unwell hospital inpatients. Events such as prolongation of existing hospitalisation, life-threatening events and death are expected in this population and are recorded as part of routine data collection and therefore are not subject to expedited reporting. Serious adverse events will be reported if the event results in persistent or significant disability or incapacity, or consists of a congenital anomaly or birth defect. An assessment of causality between the event and the trial intervention will be carried out by the principal investigator or delegated clinician, and then independently by a clinical reviewer. If the clinical reviewer classifies the event as probably or definitely caused by the intervention, it will be classified as a serious adverse reaction. Non-serious adverse events (AEs) potentially attributable to the PCT test will be collected as part of routine follow-up at 28 days. Any other non-serious AEs will not be collected.
Data management
Details of data management procedures (such as checking for missing, illegible or unusual values (range checks)) will be specified in the PRONTO Data Management Plan. Details of monitoring procedures will be specified in the PRONTO Monitoring Plan.
STATISTICAL ANALYSIS Outcome measures
The coprimary outcomes of this study are the initiation of intravenous antibiotics at 3 hours (intervention arm to be shown superior to control) and 28-day mortality (intervention arm to be shown non-inferior to control). Coprimary and secondary outcomes are listed in box 1. Final decisions about the primary effectiveness of the intervention, using these coprimary outcomes, will be made based on the decision matrix (table 2). All outcomes will be stratified by COVID-19 diagnosis (SARS-CoV-2 PCR positive or high likelihood of clinical COVID-19 as determined by a senior clinician).
Sample size
The sample size calculation is based on two coprimary outcomes: 29
1. Twenty-eight-day mortality, for which we want to show non-inferiority of the PCT-guided assessment as compared with current standard practice, using an absolute 2.5% non-inferiority margin. Assuming a 28-day mortality of 15% in patients managed as suspected sepsis treated in the ED, 3 30 this means that any increase in 28-day mortality from 15% to not more than 17.5% would be considered non-inferior. For 90% power and a one-sided 5% significance level the sample size required is 7002, assuming there is no difference in 28-day mortality between arms. Our patient focus group were also consulted on the 2.5% non-inferiority margin and felt that this was acceptable if there were mechanisms to monitor trial outcomes, and if this was what was needed to provide a sample size which would ensure the trial could be completed as well as answer the research question.
2. Initiation of antibiotic treatment, for which we want to show superiority. Currently around 90% of patients managed as suspected sepsis receive antibiotics (Liverpool University Hospitals NHS Foundation Trust, unpublished data). A reduction to 80% would be considered a success. To detect such an effect with 90% power and a two-sided 5% significance level the sample size required is 532, which is substantially lower than what is needed for the non-inferiority endpoint.
With 7002 patients we would be able to detect effects as small as a reduction from 90% to 87.6%, with 90% power. Accounting for 5% drop-out, we would need a total sample size of 7372. The group-sequential design with O'Brien-Fleming stopping boundaries for both effectiveness and futility/safety will increase the total maximum sample size (if the study is not stopped after the interim analysis) by just over 4% to 7676 (inflated for 5% drop-out). These sample sizes were calculated using SAS V.9.4 PROC POWER and PROC SEQDESIGN.
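As a rough cross-check, the two calculations can be approximated with standard normal-approximation formulas for comparing two proportions. The protocol's figures come from SAS PROC POWER and PROC SEQDESIGN, so small differences from the sketch below (which also ignores the group-sequential inflation) are expected.

```python
from scipy.stats import norm

def n_noninferiority(p_control, margin, alpha_one_sided=0.05, power=0.90):
    """Total N for a non-inferiority comparison of two proportions
    (normal approximation, equal allocation, assuming no true difference)."""
    z = norm.ppf(1 - alpha_one_sided) + norm.ppf(power)
    per_group = z**2 * 2 * p_control * (1 - p_control) / margin**2
    return 2 * per_group

def n_superiority(p1, p2, alpha_two_sided=0.05, power=0.90):
    """Total N for a two-sided superiority comparison of two proportions."""
    z = norm.ppf(1 - alpha_two_sided / 2) + norm.ppf(power)
    per_group = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2)**2
    return 2 * per_group

print(round(n_noninferiority(0.15, 0.025)))         # about 6990; the protocol quotes 7002
print(round(n_superiority(0.90, 0.80)))             # about 525; the protocol quotes 532
print(round(n_noninferiority(0.15, 0.025) / 0.95))  # inflated for 5% drop-out
```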
Interim analysis
A planned interim analysis of the coprimary outcomes will be conducted when 50% of patients have been recruited and followed up for 28 days. Stopping the study shall be recommended by the independent data monitoring committee (IDMC) based on group-sequential O'Brien-Fleming boundaries. They shall recommend stopping for effectiveness if:
► The PCT-guided assessment is superior in terms of 28-day mortality (ie, a significant reduction to less than 15%).
► The PCT-guided assessment is non-inferior in terms of 28-day mortality and superior in terms of initiation of antibiotics.
They shall recommend stopping for futility if the results of the interim analysis suggest futility for both endpoints. This strategy ensures overall type I error rate control. 31 32 The exact stopping rules will be specified in an interim analysis plan.
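For orientation, the general shape of O'Brien-Fleming-type efficacy boundaries for a single interim look at 50% information can be computed as below. This is a simplified, one-sided, efficacy-only sketch and not the trial's stopping rule, which also includes futility boundaries and is specified in the interim analysis plan.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

def obf_bounds(alpha=0.05, info=(0.5, 1.0)):
    """One-sided O'Brien-Fleming-type efficacy boundaries for two analyses."""
    t1, t2 = info
    rho = np.sqrt(t1 / t2)  # correlation between the interim and final Z statistics
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

    def excess_type1(c):
        b1, b2 = c * np.sqrt(t2 / t1), c          # OBF shape: stricter boundary early
        return (1.0 - mvn.cdf([b1, b2])) - alpha  # P(cross at look 1 or 2) minus alpha

    c = brentq(excess_type1, 1.0, 4.0)
    return c * np.sqrt(t2 / t1), c

print(obf_bounds())  # roughly (2.4, 1.7) on the Z scale for one-sided alpha = 0.05
```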
Final analysis
The primary analysis will be intention to treat and will fit separate two-level logistic regression models (patients nested within sites) to both coprimary outcomes (antibiotic initiation and mortality), controlling for baseline NEWS2 score (minimisation factor). The intervention will be considered effective if there is both a significant reduction in antibiotic initiation (two-sided 5% level) and if the difference in mortality between the two groups is non-inferior (one-sided 5% level). In case the 28-day mortality rate in the control arm deviates from the assumed 15%, the absolute 2.5% non-inferiority margin will be replaced with an arcsine difference 'non-inferiority frontier'. 33 The primary analysis will be adjusted to account for the group-sequential design. Imputation of missing data will be done as part of sensitivity analyses.
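To make the modelling concrete, below is a minimal sketch of a two-level logistic regression (random intercepts for site, fixed effects for arm and the NEWS2 minimisation factor) fitted to simulated data with a variational-Bayes mixed GLM from statsmodels. Variable names and all numbers are invented; the trial's analysis will follow the statistical analysis plan and may use different software and estimation methods.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Simulate a toy two-level dataset: patients nested within sites, 1:1 randomisation.
rng = np.random.default_rng(1)
n, n_sites = 2000, 20
df = pd.DataFrame({
    "site": rng.integers(0, n_sites, n),
    "arm": rng.integers(0, 2, n),        # 0 = NEWS2 alone, 1 = NEWS2 + PCT
    "news2_ge5": rng.integers(0, 2, n),  # minimisation factor
})
site_effect = rng.normal(0.0, 0.3, n_sites)[df["site"].to_numpy()]
logit = 2.2 - 0.4 * df["arm"] + 0.3 * df["news2_ge5"] + site_effect
df["abx_3h"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Two-level logistic regression: fixed effects for arm and baseline NEWS2,
# random intercepts for site, fitted by variational Bayes.
model = BinomialBayesMixedGLM.from_formula(
    "abx_3h ~ arm + news2_ge5", {"site": "0 + C(site)"}, df)
result = model.fit_vb()
print(result.summary())
```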
In a secondary analysis, complier-adjusted causal effect models will be fitted to allow for non-adherence to the intervention. Two models will be fitted allowing for two different definitions of adherence:
1. Patients randomised to PCT-guided care in whom a PCT test is done and the clinician considers the results as part of their decision making.
2. Patients randomised to PCT-guided care in whom a PCT test is done and the clinician follows the algorithm exactly.
Analyses of secondary outcomes will also be performed as intention to treat and using appropriate two-level regression models depending on the type of outcome (eg, linear regression for continuous outcomes, Cox regression for time-to-event outcomes) to allow for patients nested within sites. This includes an HTA and economic evaluation as per CHEERS 2022 guidance. Analyses will be split by organ system of the infection (eg, lower urinary tract, lower respiratory, intra-abdominal, bacteraemia, skin and soft tissue). Stratified analyses will be undertaken at different levels of NEWS2 scoring (≤4, 5-6 and ≥7), and will also be undertaken by COVID-19 status. All further details will be specified in a statistical analysis plan which will be finalised prior to database lock for the planned interim analysis and subsequently published.
Missing primary outcome data are likely to be minimal, so complete-case analysis will be used. However, if this exceeds more than 20% of participants we will employ multiple imputation and report the impact on the treatment effect alongside the complete-case analysis.
QUALITATIVE STUDY
The qualitative work will have three components: interviews with clinicians, interviews with patients/carers, and observations of trial implementation (when appropriate during the ongoing COVID-19 pandemic). Findings will be used to aid understanding of the quantitative data and provide areas for improvement in processes to enhance the efficiency of the trial.
Interviews with clinicians will take place at two time points. Interview 1 will take place during the pilot phase and will be a semistructured interview with 10-12 clinicians at <5 study sites (2-3 per site). This will explore the feasibility and acceptability of research processes and integration of the PCT algorithm into their ED setting. Interview 2 will be with clinicians towards the end of the trial when they have more experience of using the PCT algorithm and will identify barriers and facilitators to the use of the PCT test and algorithm in more detail, including reasons for deviating from the study algorithm.
We will conduct semistructured interviews with patients after the 90-day follow-up, in order to gain a detailed understanding of patients' experiences of care to aid understanding of trial results. We will encourage patients to include a close family member in the interview also. This will allow us to capture an additional perspective on the patients' care.
PATIENT AND PUBLIC INVOLVEMENT
The proposal has benefited from multiple interactions with patient and public involvement (PPI) groups to refine the research question and design. Author JC is a lay coapplicant/patient representative, who has coproduced and helped finalise the study design. As a coapplicant JC is a member of the trial management group (TMG) ensuring that all patient facing materials are presented in a suitable way. Her experience is invaluable throughout the project, including the promotion of the trial to potential participants and appropriate dissemination of findings to the lay public.
In addition, we have convened wider PPI advisory panels from both higher education institutions and NHS patient groups. We discussed the trial with the panel at the Royal Liverpool Hospital in August 2018, focusing on need, conception, design and trial management. The group fully supported the need for this trial recognising the potential for PCT measurement to improve outcomes for patients with suspected sepsis and supported the use of deferred consent. Specific feedback about these aspects has now been used to update the relevant parts of the proposal.
TRIAL MANAGEMENT
The trial is sponsored by the University of Liverpool and coordinated by Cardiff University CTR.
Trial management group
The TMG will meet monthly throughout the course of the trial and will include the cochief investigators, coapplicants, collaborators, trial manager, data manager and administrator. TMG members will be required to sign up to the remit and conditions as set out in the TMG charter.
Trial steering committee and IDMC
An independent trial steering committee (TSC) consisting of an independent chairperson, two independent members and a patient representative will provide oversight of the PRONTO trial. There will also be a separate IDMC to provide oversight of all matters relating to patient safety and data quality, and recommend continuing or stopping the trial depending on the results of the interim analysis. Members will be required to sign up to the remit and conditions as set out in the TSC and IDMC charters and will meet at least annually.
ETHICS AND DISSEMINATION Ethics approvals and consent
The trial was approved by the NHS Research Ethics Committee (Wales REC 2, reference 20/WA/0058) on the 21 July 2020 and subsequent Health Research Authority (HRA) and Health and Care Research Wales approval was granted on 22 July 2020. In England and Wales, the law allows the use of deferred consent in approved research situations (including ED studies) where the time dependent nature of intervention would not allow true informed consent to be obtained. PRONTO has approval for a deferred consent process to be used, full details are in Informed Consent section above. The following substantial amendments were made to the trial and were communicated to all trial sites: Amendment 5 (23 October 2020); Amendment 7 (10 December 2020); Amendment 9 (25 February 2021); Amendment 12 (29 June 2021), Amendment 15 (15 October 2021), Amendment 17 (6 January 2022).
Dissemination plan
We will engage with patient groups and the wider public through relevant charities such as UK Sepsis Trust and Antibiotics Action, and seek to present trial updates at their annual conferences. We will use press releases and social media outlets to publicise the trial and disseminate findings. A 90-second animation outlining the PRONTO main aims was commissioned (https://www.youtube.com/watch?v=H3x-rNVlwJI) 34 and can be accessed via a scannable QR code on posters and patient information leaflets. At the end of the trial, a final report will be prepared for the National Institute for Health Research Health Technology Assessment Journal series. The results will be disseminated locally, nationally and internationally among scientific, clinical and lay groups including participants and their families. All publications and presentations related to the trial will be authorised by the TMG in accordance with the PRONTO publication policy. Where appropriate, the results of this trial can be directly implemented in the revisions of the NICE guidelines.
Author affiliations
implementation of the protocol. JE is the Trial Manager and ET-J is the senior trial manager who coordinate the operational delivery of the trial protocol and recruitment. LB-H is the lead qualitative researcher. PP is the trial statistician. SG is the data manager. All authors listed provided critical review and final approval of the manuscript.
Funding This trial is funded by the National Institute for Health Research Health Technology Assessment (NIHR HTA) programme, funder reference 17/136/13. The Centre for Trials Research at Cardiff University receives infrastructure funding from Health and Care Research Wales. The study is supported by the NIHR Clinical Research Network.
Disclaimer The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care. Neither the Sponsor nor the Funder had any role in the study design; collection, management, analysis and interpretation of data; writing of this manuscript; or in the decision to submit this manuscript for publication. ThermoFisher are not funding the study, and have no influence on the design, conduct or reporting of the study.
Competing interests EC is co-CI for the BATCH Trial (HTA 15/188/42) and the PEACH study (HTA Project: NIHR132254) on PCT use, and a member of the NICE Diagnostic advisory committee (2014-2020) and the NICE Sepsis guideline development committee (2014-2016). All other authors declare no competing interests.
Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Patient and public involvement section for further details.
Patient consent for publication Not applicable.
Provenance and peer review Not commissioned; peer reviewed for ethical and funding approval prior to submission.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/. | 2022-06-15T06:17:44.920Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "9c53d6726ddfe867e544b6feb2c36c1defd3b83a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "BMJ",
"pdf_hash": "fc9bfc2e54908f1261e9025787963139d53ecd39",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249119829 | pes2o/s2orc | v3-fos-license | Low Transmission of Coronavirus via Aerosols during Outdoor Running Races and Athletic Events
Introduction: Outdoor contacts were reported to rarely result in transmission of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Little is known, however, about the risk during popular outdoor running events. This study assessed the transfer of aerosols from infected runners to other race participants. Methods: In the experimental part of the study, a group of dummies was pulled at different speeds over an athletics track and field circuit. Fine aerosols were produced with a fog machine, large aerosols with a pesticide sprayer releasing food colorant, with the size matching the two size modes of human expiratory aerosols. The experimentally determined transfer rates fed a Monte Carlo simulation of different race distances, starting sequences and block sizes. Runners were modeled using start and end times of SwissCityMarathon — Lucerne participants and a previously published distribution of virus emission strengths. The race distance was divided into 10-meter segments in which the transfer from the sources to collocated runners was calculated. Results: The experiments showed that fog and spray transfer decreased with increasing distance from the source. Increased speed was associated with decreased fog but increased spray transfer. The simulations suggest that more runners received small amounts of virus by fog-transfer. However, critical virus-transfers defined as more than 100 virus copies happened mostly by spray. The estimated rate of people getting a potentially infectious dose was in most races well below the simulated prevalence rate of virus-emitting runners, mostly about five-fold smaller. Changing from block starts to individual starts further reduced the estimated transfer. Only an artificial group running 30 km in close distance at high speed brought the rate above parity. Discussion: These findings suggest that outdoor running events are associated with a low risk for virus infection as long as runners are not trailing each other over very long distances.
INTRODUCTION
The severe acute respiratory syndrome coronavirus 2 emerged in late 2019 in Wuhan, China, and spread rapidly across the globe (Hsih et al., 2020; Korean Society of Infectious Diseases et al., 2020; Zhou et al., 2020). The virus is easily transmissible between humans and was detected early in the air of hospitals. Today, transmission by respiratory microdroplets is recognized as the predominant route of infection (Greenhalgh et al., 2021; Morawska and Cao, 2020; Randall et al., 2021; Zhang et al., 2020). These respiratory droplets are formed during normal breathing, which produces mostly aerosols around 1 µm in diameter. More is released when people are talking and singing, with a bimodal aerosol emission consisting of fine aerosols around 2 µm and larger aerosols between 20 and 200 µm (Asadi et al., 2020, 2019; Hamner et al., 2020; Johnson and Morawska, 2009). Similarly sized aerosols are also released during physical exercise (Wilson et al., 2021).
There are considerable differences in transmission processes between indoor and outdoor settings. Indoors, virus-loaded fine aerosols can accumulate and rapidly create infectious concentrations if the infected person has a high viral load (Riediker and Tsai, 2020). While outdoors wind transport of large droplets is of concern in the range of a few meters (Feng et al., 2020), they are likely to disperse rapidly in the atmosphere. Another difference concerns the airborne half-life of the virus, which is in the range of over an hour when hovering in the dark (van Doremalen et al., 2020) while it is reduced to a few minutes when exposed to simulated sunlight (Schuit et al., 2020). Such differences translate into the transmission risk. Already early in the pandemic, outdoor settings were found to contribute little to the spread of the disease (Bulfone et al., 2021;Leclerc et al., 2020), estimated at well below 10% of all transmissions (Bulfone et al., 2021). A more recent assessment of over 1.7 million PCR-confirmed cases in England used contact tracing data to link index cases and their contacts and attributed only 2.9% of all infections to outdoor contacts (Lee et al., 2021). A review of airborne RNA measurements in indoor and outdoor settings also reported mostly undetectable quantities of viruses outdoors and much lower levels in crowded or close contact situations compared to similar indoor situations (Dinoi et al., 2022).
Also in outdoor sports, virus transmission appears to be low. An analysis of the Danish football (soccer) league found no signs of transmission chains between the players. However, it was a cohort with a very low incidence rate (Pedersen et al., 2021). Also, a study in professional golfers competing in 23 events during the PGA European Tour did not find any transmission, though it also had the limitation of a very low incidence rate (Robinson et al., 2021b).
Running events represent a special type of outdoor contacts. In most situations, the participants of a race do not interact with each other, yet depending on their speed and starting schedule, they may spend prolonged time next to each other. Computational fluid dynamics models suggest that most large droplets emitted by a runner will drop rapidly in the slipstream but that some droplets may still reach a person trailing the source within 10 meters distance (Blocken et al., 2020). An ongoing prospective study of Dutch runners assessed symptoms indicative of COVID-19 but did not find any association with running behavior and habits (Cloosterman et al., 2021), suggesting that the risk of infection is likely low during outdoor trainings. More challenging is the situation for organizers of popular races. They need to consider (1) the risk of transmission from pre-and post-race activities and (2) the risk from running in large crowds. The first risk is common to many outdoor events and strategies established for such events can be followed (Honein et al., 2020). This study aimed to evaluate the second risk, the one from running as a larger crowd. The proportion of aerosols transmitted to subsequent runners as a function of speed and distance was investigated; and the consequences of this transmission for the risk of infection of participants in races with different starting regimes were simulated.
Study Design
The study was designed to collect experimental data under real-world conditions and to use the results for a subsequent simulation of different races and starting regimes. The experimental part assessed what proportion of the emitted aerosols gets transferred from a running person to those trailing at positions and distances similar to a race. The focus on proportions relates to the fact that the number of virus copies contained in a microdroplet is expected to remain stable during the short transfer time even if the microdroplet shrinks. To cover both modes of human aerosol emissions (fine aerosols around 2 µm and larger aerosols between 20 and 200 µm), a setup was chosen as follows: Two emitter dummies were built so that they released either fine aerosols created by a theatrical fog machine, or larger color droplets produced with a professional pesticide sprayer. The fog machine and the pesticide sprayer were chosen so that the released aerosol matches the two modes of the distribution of exhaled human aerosols (Johnson et al., 2011). Both forms of aerosol production have a much larger emission strength than human aerosol formation. The high emission rates allowed ignoring the (low) background aerosol. For the fog, the number concentration of PM2.5 was chosen because we had tested this type of aerosol experimentally and found that the PM2.5 number concentration was stable for several minutes even in wind tunnels and very large halls. For the larger spray aerosols, a colorant was added to the liquid so that the transfer of the released aerosol could be determined. The amount of virus transferred was then calculated based on the documented emission size distribution (Johnson et al., 2011) and the transfer rates determined during the experiments.
A group of seven runner dummies equipped with sensors was crafted and pulled behind one of the two emitters at a time at different speeds over the 400 m athletics track and field circuit of the Deutweg sports grounds in Winterthur, Switzerland. Each experiment lasted one lap of the circuit. The computational part of the study consisted of a Monte Carlo simulation of the dose received by runners from an infected participant during races of different length, starting sequence and starting block size. It used for this the experimentally determined proportion of fog and spray aerosols emitted by a source that reaches trailing runners at different distances and speeds.
Specification of the Measurement Dummies
Seven measurement dummies were crafted using 4 mm thick plywood. Each dummy had a torso of 595 mm height and 416 mm width. The dummy head was made of a bent polystyrene mirror and presented a cross-sectional surface of 250 mm × 146 mm. The head was made so that blotting paper could easily be mounted to later collect the sprayed aerosol. Behind each polystyrene mirror, an optical particle sensor was installed. The dummies were mounted on a structure attached to an e-bike's rack and on top of a cycle trailer. The height of the mounted dummies was 1720 mm, 1760 mm and 1680 mm at the source, the rack, and on the trailer, respectively. The dummies were arranged in three rows at a distance of about 1 (Position 1 and 2), 2 (Position 3 to 5) and 3 meters (Position 6 and 7) from the source. Fig. 1 shows the overall arrangement with the fog-head mounted as source. A detailed description with exact measures is provided in the Supplementary Material.
Measurement Devices
Fine aerosol counts in the PM2.5 size range were measured at 1 Hz interval with a miniature optical particle counter (SPS30, Sensirion AG, Switzerland) that reports the numbers in four size bins (< 0.5 µm, < 1 µm, < 2.5 µm and < 10 µm). The Sensirion SPS30 sensor is a low-cost sensor that provides long-term stable measurements (Tryner et al., 2020). The sizing, mass and number concentrations compare well to general ambient aerosol and also artificially created aerosols as long as the size of the aerosol is within the detection range of the sensors (Kuula et al., 2020). For the portion of the aerosol that is within the detectable size-range it provides good between-sensor and repeat measurement accuracy (Hong et al., 2021). Also at high humidity, the sensors perform well, which was the finding of a long-term assessment in Taiwan, a country with frequent high humidity conditions (Hong et al., 2021). For the analysis of experiments, only the number concentration in the PM2.5 size range was used. Before and after data acquisition, the sensors were cross-corrected using individual correction-factors for each sensor to obtain an accuracy of ±3% after correction.
Spray aerosol was collected onto extra white A3+ blotting paper (local stationery shop). The sheets of blotting paper were dried on site and stored in dry, dark conditions. All sheets were scanned at 300 dpi in TIFF-format (Epson WorkForce WF-7840, Seiko Epson Corp, Japan). The average color intensity across the entire sheet was analyzed with ImageJ (version 1.53a, National Institutes of Health, USA). The signal was first inverted, then the blue component of the RGB color spectrum was measured.
Calibration curves were obtained in separate spray experiments as follows: Humidity-adjusted blotting paper was weighed using a microbalance (EMB 2000-2, Kern & Sohn GmbH, Germany) before and immediately after varying amounts of food colorant mixture were sprayed onto the blotting paper. Spraying was done at room temperature and 85% relative humidity to avoid evaporative losses within the few seconds from spraying until weighing. Afterwards the paper was dried and stored in dry, dark conditions until optical analysis. The weight difference during calibration was highly and linearly correlated with the optical intensity measurements described above (R² = 0.98).
Before and after each running experiment, the spray container was weighed to obtain the amount of sprayed color. The amount of sprayed color deposited on the surface of a blotting paper was calculated from the measured color intensity and the calibration curve obtained in the laboratory calibration described above. The amount that would deposit on mouth, nose and eyes was assumed to be 5% of what would deposit on the entire dummy head (mouth, nose and eyes: 1,825 mm²; cross-sectional surface of the dummy head: 36,500 mm²).
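As a worked example of this bookkeeping, the sketch below converts a scanned colour intensity into a deposited mass via a linear calibration and scales it to the mouth-nose-eyes area. The calibration slope, the intensity reading and the sprayed mass are invented placeholder numbers; only the 1,825/36,500 mm² area ratio comes from the text.

```python
# Hypothetical calibration fit (g of colourant per intensity unit) and readings.
slope, intercept = 0.012, 0.0        # placeholder linear calibration
mean_blue_intensity = 35.0           # placeholder inverted blue-channel intensity of one sheet
sprayed_total_g = 180.0              # placeholder mass released in one round (weighed before/after)

deposited_on_head_g = slope * mean_blue_intensity + intercept
face_fraction = 1825 / 36500         # mouth, nose and eyes as a share of the head cross-section
deposited_on_face_g = face_fraction * deposited_on_head_g
transfer_rate = deposited_on_face_g / sprayed_total_g

print(f"face fraction = {face_fraction:.0%}, transfer rate = {transfer_rate:.2e}")
```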
The weather conditions on the days of the experiments were obtained from a weather station (WeatherScreen Pro, DNT Innovation GmbH, Germany) positioned on the track and field ground.
Generation of Aerosols
Theatrical fog was generated with a vaporizing fog generator (Power-Tiny, Look Solutions GmbH, Germany) that evaporates a mixture of tri-ethylene glycol, mono-propylene glycol, di-propylene glycol and demineralized water. When measuring with the mini-sensor used in the study, the emitted particles showed a mean diameter of ~1.5 µm. The emission strength of the fog machine was 5 × 10¹³ particles min⁻¹ in the fine particulate matter (PM2.5) size range, determined with the mini-sensors in a wind-tunnel experiment (100,000 particles cm⁻³ after diluting the fog into 500 m³ min⁻¹). To obtain the transfer rate for fine aerosols, the same size channel was used. The fog was conducted with a 32 mm wide tube to a hollow gypsum head on top of a torso and released through the mouth and nose openings at a steady flow.
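The quoted emission strength follows directly from the two wind-tunnel numbers, as this quick check shows.

```python
number_conc_per_cm3 = 1e5        # particles per cm^3 measured after dilution
dilution_flow_m3_per_min = 500   # m^3 of dilution air per minute
cm3_per_m3 = 1e6

emission_per_min = number_conc_per_cm3 * dilution_flow_m3_per_min * cm3_per_m3
print(f"{emission_per_min:.1e} particles per minute")  # 5.0e+13, matching the quoted value
```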
The spray solution was created using a mixture of 79% w w⁻¹ water, 19% w w⁻¹ propylene glycol and 2% w w⁻¹ red food colorant (E124 in propylene glycol, TRAWOSA AG, Switzerland). The spray was released through a hole at the mouth position of the face mirror of a dummy identical in build to the measurement dummies. The spray was guided through the mirror's surface 2 cm above the lower edge, spraying in the direction of travel. The spray was created with a professional grade pesticide sprayer with the pressure controlled at 3 bar and a 1.3 mm mist nozzle (Spraymatic 5S; constant pressure valve PR3; 1.3 mm Duro mist nozzle, all Birchmeier Sprühtechnik AG, Switzerland). This choice of spray pressure and nozzle was based on spray characterizations done by Birchmeier Sprühtechnik AG in partnership with the University of Lucerne using a phase Doppler anemometer (Dantec Dynamics, Denmark) on a measurement plane 60 mm in front of the nozzle with 225 (15 × 15) measurement points. Phase Doppler Anemometry assesses aerosols using the concept of Doppler shift of laser light interacting with the flow field of moving aerosols (Durst et al., 1997). The scattering angle between the emitting and receiving optics was 68°. The laser power in the measuring volume was about 9 mW. Fig. 2 shows the size distribution of the spray (Data kindly provided by Birchmeier Sprühtechnik AG).
Simulation and Statistical Analysis
All statistical analyses were done using STATA SE 15.1 (StataCorp, USA). The transfer between runners was simulated with a Monte Carlo approach. An overview graph describing the steps is shown in the Supplementary Material. First, a runner population was randomly drawn from an anonymized list of 9,647 participants of the SwissCityMarathon 2019 in Lucerne, Switzerland (kindly provided by the organizers) that contained the race time of all the participants over different distances. Then these runners were assigned starting times. Block starts were modeled so that twenty runners passed the start line every second in random order, independent of their later race performance. The same concept applied within each block of sequential block starts. Sequential individual starts were modeled with two seconds distance between runners.
For each runner participating in a simulated run, a random binomial draw decided first on the infection status of this runner based on the prevalence rate. Afterwards, the virus emission strength into the fog-size range of each of these positive runners was drawn from a previously modeled emission distribution of a population of people infected with a variant producing very high viral loads (distribution " O × 100" in Riediker et al., 2022) and scaled to 100% speaking quietly at high physical activity, which corresponds to the emission part of the indoor scenario simulator (Riediker et al., 2022;Riediker and Monn, 2021;Riediker and Tsai, 2020). For the spray emission, the virus emission strength was calculated on the basis of the number of viruses contained in the volume of an average emitted "large spray" droplet at the viral load of this emitter (Johnson et al., 2011).
Within each 10-meter segment of the race, a pairwise comparison was done between runners. The virus transfer was calculated if a runner was collocated in that segment behind an emitter. For calculating the amount of virus transferred, experimental data was used for the combination of the nearest distance between the runners and the nearest speed of the rear runner as follows: For the fog, a random draw defined the fog transfer rate, combined with the above-described emission strength of that runner. For the spray, the number of droplets emitted in the time spent in the segment was calculated, followed by the determination of the number of droplets transferred to the trailing runner. This was determined with a binomial draw using the transfer rate as droplet impact probability. Experimental data is available only for the first three meters of a segment. Previous research suggests that emitted microdroplets from runners can reach runners trailing up to ten meters, while the slip-stream of a runner modulates the sedimentation and distribution of emitted microdroplets only in the first few meters (Blocken et al., 2020). For runners farther than the last row (> 3 m), transfer rates of the last row were assumed to linearly decrease down to zero until 10 meters distance. At the completion of a simulated race, the cumulative virus dose received via fog, spray and in total was calculated for each runner.
This random simulation of a race was repeated 1000 times for each studied race type. Afterwards, the proportion of runners receiving different doses during these races was calculated. In addition, for the 10 km race with a 100 person block start, different prevalence rates for starting infected runners were tested.
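A skeleton of this simulation loop is sketched below. All numerical inputs (runner speeds, prevalence, emission strengths, droplet counts and the distance-dependent transfer rates) are placeholders; the actual study drew race times from the SwissCityMarathon data, emissions from the published distribution and transfer rates from the experiments described above.

```python
import numpy as np

rng = np.random.default_rng(0)
RACE_M, SEG_M = 10_000, 10        # race length and segment length in metres
N_RUNNERS, PREVALENCE = 100, 0.0025

def simulate_once():
    # Placeholders for the real inputs (race times, emission distribution,
    # measured transfer-rate tables).
    speed = rng.normal(3.0, 0.4, N_RUNNERS)                  # m/s
    start = np.repeat(np.arange(N_RUNNERS // 20), 20) * 1.0  # block start: 20 runners per second
    infected = rng.random(N_RUNNERS) < PREVALENCE
    fog_virus_per_segment = np.where(infected, rng.lognormal(5, 2, N_RUNNERS), 0.0)
    spray_virus_per_droplet = np.where(infected, rng.lognormal(2, 1, N_RUNNERS), 0.0)
    dose = np.zeros(N_RUNNERS)

    for seg_start in range(0, RACE_M, SEG_M):
        t_enter = start + seg_start / speed                   # time each runner reaches the segment
        for src in np.flatnonzero(infected):
            gap_s = t_enter - t_enter[src]                    # positive -> trailing the source
            trailing = (gap_s > 0) & (gap_s * speed < 10)     # within ~10 m behind the source
            d = gap_s[trailing] * speed[trailing]
            # Placeholder transfer rates that decrease linearly with distance;
            # the study interpolated its measured fog/spray rates by distance and speed.
            fog_rate = 1e-4 * (1 - d / 10)
            spray_rate = 5e-4 * (1 - d / 10)
            droplets_hit = rng.binomial(5, spray_rate)        # 5 spray bursts per segment (placeholder)
            dose[trailing] += (fog_rate * fog_virus_per_segment[src]
                               + droplets_hit * spray_virus_per_droplet[src])
    return dose

doses = np.vstack([simulate_once() for _ in range(1000)])     # 1000 simulated races
print("share of runner-races above 100 virus copies:", np.mean(doses > 100))
```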
Experimental Assessment of Transfer Rates
In total 29 experiments were conducted on 4 different days in July 2021, each consisting of a full round of 400 meters driven on the athletics track and field circuit. In total 18 rounds were with the fog source and 11 with the spray device; 11 rounds were at slow (10 km h⁻¹), 9 at medium (15 km h⁻¹) and 9 at fast (20 km h⁻¹) speed. On the days of the experiments, the weather situation varied from "sunny and hot" to "cloudy and windy with intermittent showers". To protect the equipment, no experiments were done while it was raining. During the experiments, temperature ranged from 18.0°C to 27.4°C (mean: 21.5°C, SD: 2.9°C), relative humidity from 37% to 74% (59.6%, 12.8%) and wind from 0 m s⁻¹ to 3.4 m s⁻¹ (1.2 m s⁻¹, 0.6 m s⁻¹).
Visually, the released fog flowed around the head to then rapidly become turbulent and well mixed across the runner field after the first row of runners. The two front runners were only partly in the fog stream. Fig. 3 shows a box plot of the number of fine aerosols measured in one-second intervals at the different positions and speed during all the tests conducted with theatrical fog. The optical sensors did not show any peaks nor any significant differences from background during the spray experiments. The concentrations were measured at occasionally high humidity levels. However, also at the highest level of 74% relative humidity, the uncertainties of the obtained values should remain in a reasonable range (Hagan and Kroll, 2020). A limitation is that the count values are above the coincidence levels of the sensors, which puts some doubts on the accuracy of the number concentrations. Fig. 3 clearly shows that the number concentration decreases with the dilution as the runners' speed increases. Furthermore, we found in wind tunnel experiments that the dilution factors are accurately described for the same test aerosol used in these runner experiments. This suggests that the sensors have a reasonably working algorithm to address coincidence, at least for the aerosol used in these experiments.
In the spray experiments, the exit jet of the nozzle was visible but the droplet trajectory could not be observed in the outdoor setting. However, the dummies' body and head became rapidly colored. Fig. 4 shows the proportion of spray deposited on the head on a surface equivalent to mouth, nose and eyes at different positions and speed during a full 400-meter round on the track and field circuit. The spray droplets showed large variability not only with distance but also in orthogonal direction. Regression analysis showed that the amount of deposited spray significantly increased with speed and decreased with distance. The pattern of the fog concentrations was complex. While the highest values were observed at Positions 1 and 2 (sidewards behind the source), these runners did not have the highest mean exposure, which can be attributed to the fog being channeled between them, as suggested by the visual observations. For the runners further back, the fog concentration significantly decreased with speed and distance.
The transfer rates are influenced by the distance and speed of the runners, but also by the flow field generated by the moving bodies, especially very close behind the source, which is consistent with computational fluid dynamics simulations (Blocken et al., 2020).
Findings of Monte Carlo Simulation
For the simulated race with 100 runners starting as a block, three different prevalence rates, 0.25%, 1% and 10%, were used. Table 1 shows the summary statistics and the percentage of runners receiving a virus dose above 1, 10, 100, 1,000 and 3,000 virus copies, respectively. The mean received doses increased proportionally to the prevalence rate. The fine aerosols, simulated in the experiments with the theatrical fog, contributed very little to the total dose. The maximal dose from fog suggests that for low prevalence rates, virus doses from fog will remain below 100 virus copies. An analysis of the timelines showed that fog exposure was frequent but at very low levels. Overall, fine aerosol seems to contribute relevant doses only when a very large proportion of runners is positive. In contrast, the maximal doses from spray were very high and not well related to the prevalence rate. An analysis of the timelines showed that most high doses can be traced back to only a few spray droplet transfer events. With increasing prevalence rates, a few runners were receiving spray droplets more than once during the race.
A more refined understanding can be gained when looking at the proportion of runners receiving doses above given values. The proportion of runners receiving very high doses above 3,000 virus copies is about ten-times smaller than the prevalence rate, while the proportion of those receiving at least one virus copy is about half the prevalence rate. For the critical dose of 100 virus copies, the proportion was four to seven times below the prevalence rate. Table 2 summarizes the findings for the different types of races and starting regimes. For most simulated races, the proportion of runners receiving a potentially infectious virus dose was small. The race with 500 participants suggests that changing the starting procedure from a single block start to five smaller blocks of 100 participants leads to a clear reduction in potential virus transfer. Introducing single starts every two seconds further reduces this, as seen also for the race with 1,000 participants. Running a short athletics race of 1,500 meters in a group running on average at the same speed but with speed variations on each 10 meter segment also gave an infection risk below the prevalence rate. However, an exception among the simulations is the artificial situation of a 30 km race where runners are trailing each other in a fixed position over prolonged time, similar to a group of pacemakers. In this simulated group-race, having one positively tested participant resulted in several other runners likely getting an elevated dose.
Table 2. Proportion of runners receiving a virus dose above each of three threshold values, by race type and starting regime:
10 km, 100 runners, all at same time: 0.026%, 0.042%, 0.055%
10 km, 100 runners, 1 every 2 seconds: 0.010%, 0.015%, 0.029%
10 km, 500 runners, all at same time: 0.102%, 0.160%, 0.219%
10 km, 500 runners, 5 × 100 at same time: 0.036%, 0.058%, 0.084%
10 km, 500 runners, 1 every 2 seconds: 0.032%, 0.045%, 0.062%
10 km, 1000 runners, all at same time: 0.146%, 0.244%, 0.337%
10 km, 1000 runners, 1 every 2 seconds: 0.034%, 0.058%, 0.081%
1.5 km, 20 runners, variable 20 ± 2 km h⁻¹: 0.045%, 0.055%, 0.055%
30 km, 11 runners, fixed formation: 0.582%, 0.755%, 0.836%
Looking at the proportion of runners receiving doses above given values informs about the risk of infection. The infection risk starts to increase rapidly if the received dose is above the minimal infective dose. For the wild-type (the variant first described in Wuhan, China), we estimated this earlier to be in the range of 500 virus copies determined by the Polymerase Chain Reaction (PCR) method, for the Delta variant around 300 virus copies and for Omicron around 100 virus copies (Riediker et al., 2022; Riediker and Monn, 2021). The Monte Carlo simulation of the races suggests the infection risk in popular running events to be low even in a pessimistic scenario that takes the critical dose as the criterion for infection. This is consistent with the findings of low contributions of outdoor encounters to the spread of the disease (Lee et al., 2021) and the absence of transmission chains in other types of outdoor sports (Pedersen et al., 2021; Robinson et al., 2021b) and running practices (Robinson et al., 2021a).
CONCLUSIONS
The experiments suggest that only a small proportion of fine (fog) and larger (spray) respiratory aerosol gets transferred from a source to trailing runners. For popular races, a strategy to keep the runners' risk low is to switch from mass starts to individual starts. The simulations suggest that having every two seconds a single runner start reduces the risk by a factor of three to four compared to a mass start. The risk reduction achieved by this measure is much more pronounced for large races with many participants. However, it should be noted that the estimated risk to participants in a 1,000-person race with single start every two seconds was still higher than the risk of a block start during a small race of 100 runners. Thus, more sophisticated start regimes may be needed such as blocks of individual starts. An additional approach to reduce the risk is to reduce spray transfer by avoiding proximity with rules that ensure sufficient lateral distance. While testing should be a routine element of every race, the simulation of a group running closely together shows that for such race types it will be crucial to ensure with a good testing strategy that no runner is infectious. Taken together this study suggests that most types of outdoor running events contribute very little to the spread of the disease, assuming that the protection strategies before and after the race are correctly defined.
ACKNOWLEDGMENTS
This study was financially supported by the associations Swiss Runners and Swiss Athletics. The spraying device including the nozzle was kindly donated by Birchmeier Sprühtechnik AG, Switzerland. The city of Winterthur, Switzerland kindly gave access to the track and field circuit. A big thank you goes to all the athletes in Winterthur and the field-keeper for welcoming the pack of dummies on their grounds. | 2022-05-27T17:36:02.421Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "b8338aa2e02f65d921d39bf1cab9610378ba144d",
"oa_license": "CCBY",
"oa_url": "https://aaqr.org/articles/aaqr-22-02-oa-0069.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "353f2cbef4557f2c6b267fb362b7e44addb2db8e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
237448407 | pes2o/s2orc | v3-fos-license | Predictors of response to pharmacological treatments in treatment-resistant schizophrenia – A systematic review and meta-analysis
BACKGROUND
As the burden of treatment-resistant schizophrenia (TRS) on patients and society is high it is important to identify predictors of response to medications in TRS. The aim was to analyse whether baseline patient and study characteristics predict treatment response in TRS in drug trials.
METHODS
A comprehensive search strategy completed in PubMed, Cochrane and Web of Science helped identify relevant studies. The studies had to meet the following criteria: English language clinical trial of pharmacological treatment of TRS, clear definition of TRS and response, percentage of response reported, at least one baseline characteristic presented, and total sample size of at least 15. Meta-regression techniques served to explore whether baseline characteristics predict response to medication in TRS.
RESULTS
77 articles were included in the systematic review. The overall sample included 7546 patients, of whom 41% achieved response. A higher positive symptom score at baseline predicted a higher response percentage. None of the other baseline patient or study characteristics achieved statistical significance in predicting response. When analysed in groups divided by antipsychotic drugs, studies of clozapine and other atypical antipsychotics produced the highest response rate.
CONCLUSIONS
This meta-analytic review identified surprisingly few baseline characteristics that predicted treatment response. However, more severe positive symptoms and the use of atypical antipsychotics, particularly clozapine, were associated with the greatest likelihood of response. The difficulty involved in the prediction of medication response in TRS necessitates careful monitoring and personalised medication management. There is a need for more investigations of the predictors of treatment response in TRS.
Introduction
Treatment-resistant schizophrenia (TRS) is a severe yet highly prevalent form of schizophrenia (Kennedy et al., 2014). About 1% of the global population has schizophrenia and the percentage is even higher in some parts of the world, for example, Northern Finland, with its estimate of 1.8% (Perälä et al., 2008). One-fifth to one-third of all patients with schizophrenia present with a form of the illness resistant to treatment (Conley and Kelly, 2001).
The burden of TRS on patients and society is high. Many comorbidities are associated with the disease and the treatment. Unemployment and suicide risk are also notably increased. The healthcare costs of TRS are 3 to 11 times higher than schizophrenia in general (mainly due to the high number of hospitalizations), representing 60% to 80% of the total economic burden of schizophrenia (Kennedy et al., 2014).
Estimates of the proportion of treatment responders in TRS vary widely. Suzuki et al. (2011) reviewed 33 clinical trials of antipsychotics on TRS and the response rate varied between 0%-76%. In a systematic review of 65 trials, the average response rate was 41%, ranging from 0% to 74% (Kennedy et al., 2014).
Studies have examined the effects of antipsychotic medications and other treatments on the likelihood of response in TRS (Siskind et al., 2016). Meta-analyses on non-pharmacological predictors of response in TRS are rare (Okhuijsen-Pfeifer et al., 2020). A small number of original studies have examined predictors of response in TRS. Based on these studies, later age of illness onset (Semiz et al., 2007), shorter hospitalizations (Zito et al., 1993) and less severe symptoms at baseline (Hong et al., 1997; Zito et al., 1993; Wirshing et al., 1999) predict better treatment response. Remarkably, more severe positive or negative symptoms may also predict better treatment response (Wirshing et al., 1999). Shorter delay in clozapine initiation and fewer pre-clozapine hospitalisations have been associated with better clozapine response (Shah et al., 2019). Gender (Lieberman et al., 1994) and age at study initiation have not predicted treatment response (Zito et al., 1993; Hong et al., 1997; Lindenmayer et al., 2002; Semiz et al., 2007). In a meta-analysis of 34 articles, Okhuijsen-Pfeifer et al. (2020) analysed demographic and clinical predictors of clozapine response in schizophrenia. They found that lower age, lower PANSS negative score and paranoid schizophrenia subtype predicted better response to clozapine. To our knowledge, there are no systematic reviews or meta-analyses summarising predictors of response to any psychopharmacological treatment of TRS.
The goal of this systematic review and meta-analysis was to determine the average response rate and identify predictors of treatment response in patients with TRS in drug trials. We focused on putative predictors assessable at the start or switch of antipsychotic treatment, usually obtained during the baseline or pre-treatment phases in clinical trials. Based on previous literature, we hypothesised that later age of illness onset, shorter duration of hospitalisation and less severe symptoms at baseline would predict better treatment response. Only a small number of individual studies have analysed whether patient characteristics predict treatment response. It is therefore not possible to perform a patient-level meta-analysis. Thus, in this study, we analysed the associations at the study level, i.e. we analysed the associations between patient and study characteristics and the response percentage in the corresponding study.
Methods
We followed the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) guidelines for systematic reviews and meta-analyses (Page et al., 2021) (see Online supplement appendix 1).
Search strategies
A comprehensive literature search was performed in November 2016 and updated in March 2019, using the electronic databases ISI Web of Science, PubMed (MEDLINE) and Cochrane CENTRAL (Cochrane Central Register of Controlled Trials). An information specialist (NH) conducted the search. The search strategy included the keyword 'schizophreni*' in the title of the article linked with an AND operator to a set of keywords describing treatment resistance ('treatment-resistan*', 'ultra-resistan*', 'treatment-refractory', 'clozapine') in the abstract and/ or topic of the article. The search was restricted to articles in English and to clinical trials as a topic or publication type. There was no time restriction. See the online supplement Table 1 for a description of the search strategy for each database. Furthermore, articles were searched using a chaining method, i.e. finding interesting articles in the reference lists of included articles.
At least two authors (AS, EJ, JP) evaluated all search results based on the titles and abstracts of the articles. Subsequently, AS and JP evaluated full text articles. For studies that met the inclusion criteria, AS and JP extracted the data. When questions arose related to full text evaluation and data extraction, study authors (AS, JP, EJ, JM, and JS) resolved these by consensus.
Study selection
We wanted to examine trials that studied response to medication in a TRS population. The articles included in the analyses were required to meet each of the following eligibility criteria:
1. The article detailed an original study of people diagnosed with DSM-III, DSM-III-R, DSM-IV, ICD-9 or ICD-10 schizophrenia or schizoaffective disorder adjudged as treatment resistant.
2. The article presented clear criteria for treatment resistance (for further details, see 2.3).
3. The study included a sample size of at least 15 individuals at its initiation.
4. The study had at least a 6-week follow-up period.
5. The article detailed a clinical trial analysing the effect of medications (mostly antipsychotics; in a few studies, mood stabilisers or antidepressants; and in a very few studies, other pharmacological treatment). Both naturalistic and controlled trials were included.
6. The article presented the response rate of the sample.
7. The study presented at least one baseline characteristic (i.e. a predictor of response in this study).
8. The article presented the study characteristics and inclusion criteria of the sample.
9. The article was in English.
The exclusion criteria included: 1. Studies analysing non-pharmacological treatments, for example, psychotherapies and ECT since these would be difficult to combine with pharmacological trials based on the different kinds of patient selection and methods. 2. Samples including children or adolescents (patients had to be at least 18 years of age at the study initiation). 3. Cross-over studies due to the inability to compare them with other studies.
Definition of TRS in this review
We included all the clinical trials that reported their sample as a TRS sample, and that defined TRS as a history of use of at least one trial of antipsychotics without response.
There are multiple operational definitions of TRS. The original Kane et al. (1988) criteria were very strict and the required medication dose was high. When developing a consensus for the definition of TRS, Howes et al. (2017) suggested a more specific definition with six points to consider, including use of a symptom questionnaire and performance evaluation. Table 1 summarizes various definitions of treatment-resistant schizophrenia.
We acknowledge that a consistent definition of TRS is important. However, in the studies identified, there was great variability in the operational definition of TRS and in reporting the definition. In order to capture all possible TRS samples, we chose to include all the clinical trials that reported their sample as a TRS sample, and that defined TRS as a history of use of at least one trial of antipsychotics without response. The review included a range of TRS definitions. For example, a broader definition from Scheepers et al. (2001): "All subjects were previously treated with at least one typical antipsychotic for a minimum of four weeks". In contrast, there was a narrower TRS definition from Dossenbach et al. (2000): "BPRS ≥ 45; Score ≥ 4 in 4 BPRS psychotic symptoms; non-response to ≥ 3 APs from different classes at ≥ 1000 mg for ≥ 4 months; a history of hospitalization for ≥365 days; non-response (20% decrease in BPRS) to CLZ for ≥4 months or intolerance to CLZ". The sample also included studies using the Kane et al. (1988) criteria (see Table 1).
Most of the identified clinical trials defined TRS based only on the number of failed antipsychotic medication trials. Most studies did not report the dosage or treatment duration of each failed antipsychotic trial. Moreover, several were missing standard assessments of symptom severity (e.g. PANSS, BPRS) and the level of disability. Therefore, medication dosage and duration, symptom severity, and disability did not figure into our classification of TRS criteria. Rather, we classified the included studies into three subclasses based on the number of previous antipsychotic trials: 1. History of non-response to at least one adequate trial of antipsychotic treatment (broad criteria). 2. History of non-response to at least two adequate trials of antipsychotic treatment (average strict criteria). 3. History of non-response to at least three adequate trials of antipsychotic treatment (narrow criteria). We conducted the analyses in the total sample and conducted a sensitivity analysis including only studies in groups 2 and 3.
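As an illustration of this grouping step, the following minimal sketch maps the number of previous failed antipsychotic trials required by a study to the three subclasses defined above. It is written in Python purely for illustration; the function and variable names are ours, not taken from the study's own procedures.

```python
def trs_criteria_group(n_failed_antipsychotic_trials: int) -> str:
    """Assign a study's TRS definition to the broad / average strict / narrow
    groups described in the text, based on the minimum number of previous
    failed antipsychotic trials that the study required."""
    if n_failed_antipsychotic_trials >= 3:
        return "group 3: narrow criteria"
    if n_failed_antipsychotic_trials == 2:
        return "group 2: average strict criteria"
    if n_failed_antipsychotic_trials == 1:
        return "group 1: broad criteria"
    raise ValueError("a TRS definition requires at least one failed antipsychotic trial")


# Example: a study requiring non-response to at least two antipsychotics.
print(trs_criteria_group(2))  # "group 2: average strict criteria"
```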
Definition of response
There was also heterogeneity across studies regarding the definition of response. Howes et al. (2017) suggested the following criteria for adequate treatment response: 1.) Symptoms are rated no more than mild severity; 2.) Duration of response sustained for a minimum of 12 weeks; and 3.) Functional impairment rated as mild or better on a standardised scale such as the Social and Occupational Functioning Scale (SOFAS). In addition, whenever possible, they recommended that investigators ascertain response prospectively over at least six weeks and defined as at least a 20% improvement in symptom scores and meeting the absolute thresholds (symptoms rated at no more than mild severity). Suzuki et al. (2011) found that the most commonly used criteria for treatment response is at least a 20% reduction in PANSS or BPRS.
Of the 77 studies included in our review, 64 used a 20% reduction in symptoms as a definition of response, 18 studies used a 30% reduction, a single study used a reduction of 40% or 50%, and eight studies used the Kane 1988 criteria. Kane et al. (1988) defined response as a ≥20% decrease in the BPRS total score together with either a post-treatment CGI-Severity score of ≤3 (i.e., better than mild) or a BPRS total of ≤35. Given that only one study used a 40% or 50% reduction, we combined that study with those using a reduction of 30%. A small number of studies reported more than one response criterion. Based on these figures, we present the results for the response rate of studies using the following response definitions: 1.) a 20% reduction in symptoms, 2.) a 30% reduction in symptoms and 3.) the Kane criteria. Since most of the studies used a 20% reduction in symptoms as the response criterion, we studied the associations between baseline and study characteristics and the percentage of response among these studies in a meta-analysis. In addition, as a sensitivity analysis, we performed the analyses in the total sample regardless of the response criteria.
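The response definitions can be made concrete with a short sketch. This is illustrative Python, not code from the review; it assumes that the percentage reduction is computed on the scale total score, and it exposes as an option the subtraction of the scale minimum (30 for the PANSS, 18 for the 1-7 scored BPRS) that is discussed further in the limitations.

```python
def percent_reduction(baseline, endpoint, scale_minimum=0):
    """Percentage reduction in a symptom rating scale total score.

    Passing the scale's lowest possible score as scale_minimum (30 for the
    PANSS, 18 for the 1-7 scored BPRS) yields the corrected reduction
    discussed in the limitations (Obermeier et al., 2010); the default of 0
    reproduces the uncorrected calculation that some trials appear to use.
    """
    return 100.0 * (baseline - endpoint) / (baseline - scale_minimum)


def meets_20pct_response(baseline, endpoint, scale_minimum=0):
    """Most common response definition in the reviewed trials:
    at least a 20% reduction in the PANSS or BPRS total score."""
    return percent_reduction(baseline, endpoint, scale_minimum) >= 20.0


def meets_kane_response(bprs_baseline, bprs_endpoint, post_cgi_severity):
    """Kane et al. (1988): >=20% decrease in the BPRS total score plus either
    a post-treatment CGI-Severity score of <=3 or a BPRS total of <=35."""
    return (percent_reduction(bprs_baseline, bprs_endpoint) >= 20.0
            and (post_cgi_severity <= 3 or bprs_endpoint <= 35))


# Hypothetical patient: PANSS total falls from 95 to 70.
print(percent_reduction(95, 70))                    # 26.3% (uncorrected)
print(percent_reduction(95, 70, scale_minimum=30))  # 38.5% (minimum subtracted)
print(meets_20pct_response(95, 70))                 # True
```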
Recorded variables and analysed predictors of response
Our team (AS and JP) recorded the following variables from each article: year of publication, original and final sample size, duration of follow-up, number of drop-outs, type of pharmacological treatment, proportion of males, mean age of participants, duration of illness, age of onset, age at first hospitalisation, number of hospitalisations, weight, BMI, ethnicity, inpatient/outpatient status, duration of current hospitalisation, years of education, baseline overall (PANSS, BPRS, or CGI) and positive and negative symptom (PANSS) severity, and proportion of response. BPRS positive and negative symptoms were not studied as those were reported only in a few studies.
Sensitivity analysis
We completed a sensitivity analysis by including only studies that used a more common definition of TRS, i.e. studies that included patients who had tried at least two different antipsychotic medications (i.e. studies using the average strict and narrow TRS criteria). Given that there are differences in the effects of different treatments, we also analysed the percentage of treatment response in subpopulations classified by the medication that was analysed in the trial. Further, we examined associations between predictors and treatment response in 1) studies that included only atypical or typical antipsychotics as a trial treatment and separately in 2) studies that included only atypical antipsychotics. Here, we combined the treatment categories in different trials regardless of the comparison treatment.
Statistical analysis
We divided predictor variables into three classes based on tertiles or into two classes based on the median. Based on the expected heterogeneity of the treatment response percentage between studies, we used a random-effects meta-analysis to pool overall estimates of response. In the random-effects analysis, we weighted each study by the inverse of the sum of its within-study variance and the between-studies variance. We used random-effects meta-regression to explore the influence of potential predictor variables on the response proportion. We assessed the heterogeneity of the studies using the I² statistic, and adjudged the statistical significance of heterogeneity using a chi-square test. The values of I² range from 0% to 100%, reflecting the proportion of the total variation across studies beyond chance.
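As a rough illustration of the pooling step, the sketch below computes a DerSimonian-Laird random-effects pooled response proportion and the I² statistic from per-study counts. It is not the code used for this review (the software used is not stated in the text), it pools raw proportions rather than transformed ones, and the study counts in the example call are hypothetical.

```python
import numpy as np


def pool_response_rates(responders, totals):
    """DerSimonian-Laird random-effects pooling of response proportions.

    responders, totals: per-study numbers of responders and sample sizes.
    Returns the pooled proportion, its 95% CI, and the I^2 statistic (%).
    """
    responders = np.asarray(responders, dtype=float)
    totals = np.asarray(totals, dtype=float)

    p = responders / totals                      # per-study response proportion
    v = p * (1.0 - p) / totals                   # within-study variance
    w = 1.0 / v                                  # fixed-effect (inverse-variance) weights

    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)           # Cochran's Q
    df = len(p) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-studies variance

    w_star = 1.0 / (v + tau2)                    # random-effects weights
    p_re = np.sum(w_star * p) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    ci = (p_re - 1.96 * se, p_re + 1.96 * se)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return p_re, ci, i2


# Hypothetical example: three studies with 20/60, 45/100, and 10/40 responders.
print(pool_response_rates([20, 45, 10], [60, 100, 40]))
```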
Table 1. Examples of definitions of treatment-resistant schizophrenia.
Kane et al., 1988: 1. The patient should have manifested a failure to respond to three or more adequate trials of antipsychotic treatment within the last 5 years, including medication from two distinct classes with dosing at least the equivalent of 1000 mg per day of chlorpromazine. 2. There must be at least moderately severe continuous symptoms in certain psychosis symptoms (conceptual disorganization, suspiciousness, hallucinatory behaviour and unusual thought content). 3. There must be evidence of substantial current symptoms despite current optimized treatment to which the patient is adherent: defined as a score of greater than or equal to 45 on the Brief Psychiatric Rating Scale (BPRS) or 90 on the Positive and Negative Syndrome Scale (PANSS).
Suzuki et al., 2012: 1. At least two failed adequate trials with different antipsychotics (at chlorpromazine-equivalent doses of ≥600 mg/day for ≥6 consecutive weeks) that could be retrospective or preferably include prospective failure to respond to one or more antipsychotic trials. 2. Both a score of ≥4 on the Clinical Global Impression-Severity (CGI-S) and a score of ≤49 on the Functional Assessment for Comprehensive Treatment of Schizophrenia (FACT-Sz) or ≤50 on the Global Assessment of Functioning (GAF) scales.
Howes et al., 2017: 1. The patient should have at least moderate severity of symptoms for 12 weeks (using a standardised scale). 2. At least moderate functional impairment measured using a validated scale. 3. At least two past treatments with different antipsychotic drugs for at least 6 weeks with a dosage equivalent to 600 mg of chlorpromazine per day. 4. Adherence is followed systematically, with at least 80% of prescribed doses taken; antipsychotic plasma levels monitored on at least one occasion. 5. In ideal cases, at least one antipsychotic drug trial to make sure of the treatment resistance. 6. Criteria clearly separating responsive from treatment-resistant patients.
Search results
The initial literature search produced 1373 references, and after the removal of duplicates, 1148 unique publications were identified (Fig. 1). After inspecting the abstracts, 160 original articles were included for review against the above-mentioned eligibility criteria. In total, 77 articles were included in the systematic review. The overall sample included 7546 TRS patients.
Study characteristics
In the included studies (online supplement Table 2), the median age at onset was 21.8 years (range 20.5-22.9), the median baseline PANSS was 94.0 (81.6-104.4), the median BPRS was 50.6 (42.6-57.5) and the majority, 69.3% (62.0-74.0), of the samples were male. Table 2 includes a summary of the characteristics of included samples. Forty samples were from North America (35 from the USA), 20 from Europe, 17 from Asia, two from Africa and one from Australia. Three of the studies included patients from two different countries. Most of the studies had used DSM-IV as a diagnostic system (n = 48), 19 had used DSM-III-R, six studies DSM-III, two studies ICD-10 and two studies did not report the diagnostic system used. Regarding the strictness of the definition of TRS, 31 of the studies required a history of at least three antipsychotics, 29 of the studies required a history of at least two antipsychotics and 12 studies had a broad definition of a history of at least one antipsychotic. It was not possible to classify the strictness of the definition of TRS for five studies. Nine studies also included schizoaffective patients in the sample, and in six of them the proportion of schizoaffective patients was less than 20% of the whole sample. The highest proportion of schizoaffective patients in an individual study was 40%.
Response percentage
In all the studies, 41.3% (95% CI: 36.8, 45.8) of the patients achieved response. When only analysing studies using a 20% reduction in symptoms as the response criterion (n = 61), 40.8% (36.1, 45.5) achieved response, and 40.6% (31.9, 49.3) achieved response when the criterion was a 30% decrease in symptoms (n = 18). In studies using the Kane criteria for the response (n = 8), 35.0% (19.3, 50.7) of the patients experienced response. When only including studies using the most commonly used TRS criteria (groups 2 and 3, i.e. a TRS history of at least 2 AP medications) (n = 60), 42.6% (37.4-47.7) achieved response. When using a 20% reduction in symptoms as the response criterion and excluding studies using the broad TRS criteria (n = 44), 42.6% (36.8-47.6) achieved response (Figs. 2-5). Table 3 includes response percentages by baseline and study characteristic variables. Of the included variables, only baseline positive symptoms were statistically significantly associated with response (p = 0.008). Among the studies with the highest mean positive symptom score (highest tertile), the median response was 50.0%, whereas in the lowest tertile the response was 17.8%. None of the other baseline and study characteristics achieved statistical significance. In the studies with the youngest mean age at baseline, the median response rate was 50.7%, in the middle tertile the response rate was 44.4% and in the oldest tertile it was 39.4%. Among the studies in which the age at the time of first hospitalisation was low, only 18.2% achieved response, whereas in the older group the rate was 59.0%. When the cumulative number of hospitalisations was lower, the median response rate was 27.6%, and when it was higher, it was 46.2%. In studies with a low proportion of inpatients at baseline, the response rate was 44.0%, and in studies with a higher proportion of inpatients, it was 31.9%. In the samples with a shorter duration of current hospitalisation, 41.2% achieved response, whereas in the samples with a longer duration, only 23.9% achieved response. There was no trend in response rates regarding year of publication. Using the TRS classification previously described, in the studies with a broad definition of TRS, 37.4% achieved response, in studies with a moderately strict definition, 45.2% achieved response and in studies with a narrow definition, 46.7% achieved response.
Association between baseline and study characteristics and treatment response
In the sensitivity analysis, when only studies that used a more common definition of TRS were included, i.e. studies that included patients who had tried at least two different antipsychotic medications, the results did not change: only positive symptoms achieved statistical significance, and there were no other statistically significant associations between any of the baseline and study characteristics and the percentage of treatment response.
We also analysed the response percentage in the baseline and study characteristic variables in all 77 studies, i.e. including studies with variable response criteria. Only a higher positive symptom score at baseline was associated with a higher response percentage, and no other statistically significant associations occurred (see Table 2 in the Supplement). Table 4 summarizes the proportion of responders to antipsychotic medications in several subpopulations. Of patients using typical antipsychotics, 25.0% achieved response, whereas of those using atypicals, 41.5% achieved response. Patients using clozapine monotherapy had the highest response rate, 50.0%, and patients on chlorpromazine had the lowest, 10.3% (although the number of studies was low). There were no significant associations between baseline or study characteristics and the response rate in subgroups by type of medication (analysed separately for 1: studies including both typical and atypical antipsychotics and 2: studies including atypicals only).
Main results
In this systematic review of medication trials of TRS, 41% of patients achieved response defined as a 20% reduction in symptoms. Rather surprisingly, none of the baseline or study characteristics other than positive symptoms predicted response. Studies of clozapine and other atypical antipsychotics produced the largest proportion of responders. Given that there was no significant difference in the percentage of responders by publication year, we can assume that the efficacy of medications for TRS has remained unchanged for 30 years.
There was no statistically significant association between the number of hospitalisations and treatment response. However, there were some differences in the response percentages. Surprisingly, samples with a higher cumulative number of hospitalisations had better treatment response (46% compared to a response rate of 28% in samples with a lower number of hospitalisations). One possibility is that patients with fewer hospitalisations had more severe symptoms; thus, they may have spent longer periods in hospital and long-stay institutions and had fewer discharges. It is also possible that patients with fewer hospitalisations received less follow-up care.
Response rates varied relatively little by the strictness of the TRS criteria or by the different response criteria. However, among studies using the Kane et al. (1988) criteria, the response percentage was slightly lower than in other studies (35% vs. 41%).
When analysed in strata by antipsychotic drugs, the highest response rate was in studies with patients using clozapine. The response percentage was also high in studies analysing injections, although this result remains uncertain due to the very low number of studies (n = 3). The difference in response rate between typical and atypical antipsychotic drugs was notable, whereas response rates did not vary greatly among individual atypical antipsychotic agents.
Comparison with previous results and clinical implications
The current meta-analysis obtained a response rate (40.8%) equivalent to Kennedy et al.'s (2014) estimate of 41%. The response rates were very similar regardless of TRS criteria, which supports the reliability of the result. As a comparison, in general schizophrenia the response rates range from 23% to 51% (Haddad and Correll, 2018).
The association between a higher baseline positive symptom score and a higher probability of response did not support our hypothesis that lower symptoms at baseline predict better response. However, the result that higher positive symptoms specifically, but not negative symptoms, predict better response is understandable, since antipsychotics are effective in the treatment of positive symptoms but less so in the treatment of negative symptoms. An earlier meta-analysis of clozapine response had somewhat different results, showing that fewer negative symptoms predicted clozapine response, whereas positive symptoms were not statistically significant (Okhuijsen-Pfeifer et al., 2020). The differences between our study and the study by Okhuijsen-Pfeifer et al. (2020) may be explained by differences in the inclusion criteria and in the characteristics (e.g. symptom severity at baseline, analysed medications) of the included samples. More severe positive symptoms at baseline have also been associated with better treatment response in an original study of treatment-refractory schizophrenia patients (Wirshing et al., 1999). We found no statistically significant association between patient gender or age at the time of the study and response. This result is similar to previous original studies that analysed the associations at the patient level (Zito et al., 1993; Lieberman et al., 1994; Hong et al., 1997; Lindenmayer et al., 2001; Semiz et al., 2007). Age of illness onset and length of hospitalisation did not predict response either, and similar results were found in previous original studies (Semiz et al., 2007; Hong et al., 1997; Zito et al., 1993; Wirshing et al., 1999). Predicting response in TRS using patient characteristics is challenging. In comparison, in first-episode psychosis, being female, antipsychotic-naïve, having a more severe illness and a shorter duration of illness at baseline predicted a higher response rate (Zhu et al., 2017).
(Legend for the response criteria abbreviations used in the figures: 20% = a 20% reduction in PANSS or BPRS symptoms; 30% = a 30% reduction in PANSS or BPRS symptoms; Kane = a ≥20% decrease in the BPRS total score together with either a post-treatment CGI-Severity score of ≤3, i.e. better than mild, or a BPRS total of ≤35.)
It may be that TRS has a complex nature with multiple factors affecting the course of the illness. Thus, identifying associations between certain patient characteristics and response is challenging. There has been some tentative evidence of etiological differences between treatment-resistant and non-treatment-resistant schizophrenia (Gillespie et al., 2017). Treatment-resistant patients have shown a lack of dopaminergic abnormalities but rather show glutamatergic abnormalities, a significant reduction in brain gray matter, and higher familial loading compared with treatment-responsive patients (Gillespie et al., 2017). Okhuijsen-Pfeifer et al.'s (2020) meta-analysis showed that younger age (35.9 years in responders, 37.2 in non-responders), fewer negative symptoms, and the paranoid schizophrenia subtype were associated with better clozapine response. It may be that the more homogeneous sample of their study (only clozapine users) explains why significant predictors were found.
(Fig. 3. Percentage of response in studies using a 20% decrease in symptoms as the response criterion.)
To our knowledge, this is the first systematic study of the predictors of response to any pharmacological treatment in TRS. The number of individual investigations of predictors of treatment response in TRS is rather small. It was therefore not possible to perform a patient-level meta-analysis. Thus, in this study, we analysed the associations at the study level, as did Okhuijsen-Pfeifer et al. (2020). We examined the associations between sample and study characteristics and the response rates in the corresponding studies using a relatively crude method. Our meta-analysis generally did not support the few previous findings in which baseline characteristics predicted treatment response.
In our study, of patients using typical antipsychotics, 25.0% achieved response, whereas among patients using atypicals (not including clozapine), the response rate was 41.5%. In a meta-analysis of 15 antipsychotic medications, Leucht et al. (2013) found only minor differences in efficacy in schizophrenia patients. They identified 212 trials involving 43,049 participants. All drugs were significantly more effective than placebo. Their findings challenge the straightforward classification of antipsychotics into typicals and atypicals and the idea that atypical antipsychotics are more effective than typicals. Our finding of a different response percentage between typicals and atypicals is therefore interesting. Despite criticism of classifying antipsychotics into typicals and atypicals, it may be that TRS patients respond differently to these two classes, and one reason behind this could be differences in the etiology of the illness in TRS compared with schizophrenia in general. Samara et al. (2015) found no major differences in the efficacy of different antipsychotic agents in TRS, or when comparing clozapine with other atypicals. However, clozapine was more effective than typical antipsychotics. Several studies support the efficacy of clozapine in the treatment of TRS, and earlier initiation of clozapine may improve outcomes in TRS (Haddad and Correll, 2018). Early recognition and treatment of TRS are important because treatment resistance may be present from illness onset in as many as 84% of patients (Demjaha et al., 2017). Our study revealed that the prediction of medication response in TRS is difficult to tease out. In such a situation, careful monitoring, follow-up and personalised medicine should be applied. In practice, this means tailored antipsychotic medication. When providers start, switch, taper or terminate antipsychotics, a one- to three-month experimental period with well-planned medication management (Isohanni et al., 2020) is often useful. In practice, this stresses good collaboration with the patient and relatives and follow-up of clinical responses and efficacy, side effects, and patients' experiences and beliefs about antipsychotics (Isohanni et al., 2018, 2020). TRS poses a challenge to the treatment system, where standard treatment recommendations and algorithms often tend to fail. Unfortunately, no breakthroughs in the antipsychotic treatment of TRS are anticipated in the near future. In such a situation, non-pharmacological efforts designed by a sophisticated professional team must be activated. In addition, in TRS, and especially in non-responders, it is important to ascertain diagnostic accuracy and the impact of comorbid conditions on response and efficacy. For instance, it is reasonable to consider the effect of neurological or metabolic disorders, given that these may complicate the overall treatment course (Lally and Gaughran, 2019).
Strengths and limitations
There are several important caveats related to this review. The protocol of this study was not pre-published. We included only English language articles so we may have missed some non-English publications. We included studies with variable definitions of TRS and this may have caused some heterogeneity and noise in the results. On the other hand, the results did not change in sensitivity analyses restricted to studies that only had stricter TRS criteria. The broad inclusion of TRS studies was necessary, as we wanted to have a large number of studies in order to study potential predictors. There are multiple definitions of TRS as indicated in Online supplement Table 4. When developing a consensus for the definition of TRS, Howes et al. (2017) suggested a much more specific definition, including a symptom questionnaire and the evaluation of functioning capacity. However, studies have rarely adopted this TRS standard. We acknowledge that the field remains in a state of flux with respect to the conceptual validity of treatment resistance, as well as the definition of response.
It is important to consider pseudoresistance when analysing the response to treatment. Unfortunately, most of the studies included in this meta-analysis did not separately mention pseudo-resistant subjects, and this may have caused additional heterogeneity in the sample. In addition, we did not separate the ultra-resistant patients since this would have led to a small number of studies in the analyses.
In total, 77 studies were included. However, the eventual number of studies in the analyses of different predictors varied notably, and for some predictors the number of studies was very low. Studying these predictors at the study level rather than at the patient level is not very powerful statistically, and there is a need for original studies that focus on individual predictors. Our analyses of the response rate in the categories of medications used are crude and do not reflect a standard analysis of efficacy. Regarding the analysis of response, it is possible that some original studies may not have correctly subtracted the minimum score (30 on the PANSS and 18 on some versions of the BPRS) before calculating the response (Obermeier et al., 2010; Thompson et al., 1994). In other words, some studies may have used, for example, the original 1-7 scale of the PANSS without subtraction.
A strength of this study was that we were able to analyse predictors of treatment response by utilising a meta-analysis, which has not been done before. Several plausible predictors that could be utilized in clinical practice were included. Our search strategy included multiple search terms and databases, and was comprehensive enough to identify at least most of the published drug trials on TRS.
Conclusions
In this systematic review, we identified that higher positive symptoms at baseline predict higher response, but no other baseline characteristics predicted treatment response in TRS. The response rate remained relatively similar across studies with different definitions of TRS and response criteria. It also appears that the percentage of responders has remained static from earlier to recent studies. Our results support the complex nature of TRS and the need for more effective pharmacological and non-pharmacological treatments of TRS. In future studies, it would also be important to study predictors of treatment response at the patient level, and studies should specifically focus on analysing predictors of treatment response and other outcomes in TRS. To help future studies on this subject, the patient material should be more homogeneous and researchers should rule out pseudoresistance in clinical trials. The field would also benefit from coherent criteria for TRS and treatment response.
CRediT authorship contribution statement
AS, JS, JM and EJ designed this study. NH performed literature search. AS, JP and EJ extracted the data. HL analysed data. AS wrote the first draft of the manuscript. All authors contributed to and have approved the final manuscript.
Declaration of competing interest
There are no conflicts of interests.
Acknowledgement
We thank all authors of the included studies. This work was supported by grants from the Finnish Cultural Foundation, Grant number 2DC49079, Jalmari and Rauha Ahokas' Foundation, the Academy of Finland, Grant number 316563 and Oulu University Hospital funding.
Role of the funding source
The funders had no role in the study design, data collection, data analysis, interpreting the results or the decision to publish the article. | 2021-09-09T16:27:48.368Z | 2021-09-05T00:00:00.000 | {
"year": 2021,
"sha1": "114e05ecdc2add5b3b9c698a2073b14d0b404a02",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.schres.2021.08.005",
"oa_status": "HYBRID",
"pdf_src": "Elsevier",
"pdf_hash": "b8868c47ffd2d9e28fdfe65330452aa3353dd44a",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118656702 | pes2o/s2orc | v3-fos-license | Neutrinoless double positron decay and positron emitting electron capture in the interacting boson model
Neutrinoless double-$\beta$ decay is of fundamental importance for determining the neutrino mass. Although double electron ($\beta^-\beta^-$) decay is the most promising mode, in very recent years interest in double positron ($\beta^+\beta^+$) decay, positron emitting electron capture ($EC\beta^+$), and double electron capture ($ECEC$) has been renewed. We present here results of a calculation of nuclear matrix elements for neutrinoless double-$\beta^+$ decay and positron emitting electron capture within the framework of the microscopic interacting boson model (IBM-2) for $^{58}$Ni, $^{64}$Zn, $^{78}$Kr, $^{96}$Ru, $^{106}$Cd, $^{124}$Xe, $^{130}$Ba, and $^{136}$Ce decay. By combining these with a calculation of phase space factors we calculate expected half-lives.
I. INTRODUCTION
Double-β decay is a process in which a nucleus (A, Z) decays to a nucleus (A, Z ± 2) by emitting two electrons or positrons and, usually, other light particles, (A, Z) → (A, Z ± 2) + 2e∓ + anything (1). Double-β decay can be classified in various modes according to the various types of particles emitted in the decay. The processes where two neutrinos are emitted are predicted by the standard model, and 2νβ−β− decay has been observed in several nuclei. For processes not allowed by the standard model, i.e. the neutrinoless modes 0νββ, 0νβEC, and 0νECEC, the half-life can be factorized as $[\tau^{0\nu}_{1/2}]^{-1} = G_{0\nu}\,|M_{0\nu}|^{2}\,f(m_i, U_{ei})$, where $G_{0\nu}$ is a phase space factor, $M_{0\nu}$ is the nuclear matrix element, and $f(m_i, U_{ei})$ contains physics beyond the standard model through the masses $m_i$ and mixing matrix elements $U_{ei}$ of the neutrino species. For all processes, two crucial ingredients are the phase space factors (PSFs) and the nuclear matrix elements (NMEs). Recently, we have initiated a program for the evaluation of both quantities and presented results for β−β− decay [1][2][3][4][5][6][7]. This is the most promising mode for the possible detection of neutrinoless double-β decay and thus of a measurement of the absolute neutrino mass scale. However, in very recent years, interest in double positron decay, β+β+, positron emitting electron capture, ECβ+, and double electron capture, ECEC, has been renewed. This is due to the fact that positron emitting processes have interesting signatures that could be detected experimentally [8].
In a previous article [9] we initiated a systematic study of β+β+, ECβ+, and ECEC processes and presented a calculation of phase space factors (PSF) for 2νβ+β+, 2νECβ+, 2νECEC and 0νβ+β+, 0νECβ+. The process 0νECEC cannot occur to the order of approximation used in [9], since the emission of additional particles, γγ or others, is needed to conserve energy and momentum.
In this article, we focus on calculation of neutrinoless decay nuclear matrix elements (NME), which are common to all three modes, and half-life predictions for 0νβ + β + and 0νECβ + modes. Results of our calculations are reported for nuclei listed in Table I.
A. Nuclear matrix elements
The theory of 0νββ decay was first formulated by Furry [14] and further developed by Primakoff and Rosen [15], Molina and Pascual [16], Doi et al. [17], Haxton and Stephenson [18], and, more recently, by Tomoda [19] and Šimkovic et al. [20]. All these formulations often differ by factors of 2, by the number of terms retained in the non-relativistic expansion of the current and by their contribution. In order to have a standard set of calculations to be compared with the QRPA and the ISM, we adopt in this article the formulation of Šimkovic et al. [20]. A detailed discussion of involved operators can also be found in Ref. [4].
We consider the decay of a nucleus $^{A}_{Z}X_{N}$ into a nucleus $^{A}_{Z-2}Y_{N+2}$. An example is shown in Fig. 1. If the decay proceeds through an s-wave, with two leptons in the final state, we cannot form an angular momentum greater than one. We therefore calculate, in this article, only 0νββ matrix elements to final $0^+$ states, the ground state $0^+_1$, for which, in a previous article [9], we have calculated the phase space factors, and to the first excited state $0^+_2$. In order to evaluate the matrix elements we make use of the microscopic interacting boson model (IBM-2) [23]. The method of evaluation is discussed in detail in Ref. [1] for double electron decay (β−β−). For double positron decay (β+β+) and positron emitting electron capture (ECβ+) the same method applies except for the interchange π → ν in Eq. (5) of [1] and in the mapped boson operators of Eq. (18) of [1]. The matrix elements of the mapped operators are evaluated with realistic wave functions, taken either from the literature, when available, or obtained from a fit to the observed energies and other properties (B(E2) values, quadrupole moments, B(M1) values, magnetic moments, etc.). The values of the parameters used in the calculation are given in Appendix A.
Here, we present our calculated NME for the decays of Table I. The NMEs depend on many assumptions, in particular on the treatment of the short-range correlations (SRC). In Table II, we show the results of our calculation of the matrix elements to the ground state, $0^+_1$, and to the first excited state, $0^+_2$, using the Miller-Spencer (MS) parametrization of SRC, broken down into GT, F, and T contributions and their sum, $M^{(0\nu)} = M^{(0\nu)}_{GT} - (g_V/g_A)^2 M^{(0\nu)}_{F} + M^{(0\nu)}_{T}$. We note that we have two classes of nuclei, those in which protons and neutrons occupy the same major shell (A = 64, 78, 124, 128, 130, 136) and those in which they occupy different major shells (A = 58, 96, 106). The magnitude of the Fermi matrix element, which is related to the overlap of the proton and neutron wave functions, is therefore different in these two classes of nuclei, being large in the former and small in the latter case. This implies a considerable amount of isospin violation for nuclei in the first class. This problem has been discussed in detail in Ref. [4] and will form a subject of subsequent investigation. It is common to most calculations of NME and has been addressed recently within the framework of QRPA in Refs. [24,25]. Here we take it into account by assigning a large error to the calculation of the Fermi matrix elements. In the same Ref. [4] it is also shown that the NME depend on the short range correlations (SRC), and that use of Argonne/CD-Bonn SRC increases the NME by a factor of 1.1-1.2. The same situation occurs for β+β+ decay.
(TABLE II. IBM-2 nuclear matrix elements $M^{(0\nu)}$ (dimensionless) for neutrinoless β+β+/ECβ+/ECEC decay with Jastrow M-S SRC and $g_V/g_A = 1/1.269$.)
In order to take into account the sensitivity of the calculation to parameter changes, model assumptions and operator assumptions [4], we list in Table III IBM-2 NMEs with an estimate of the error. The values of the $0^+_1$ matrix elements vary between 2.3 and 6.1, the matrix element for the $^{64}$Zn → $^{64}$Ni transition being notably the largest. They are therefore of the same order of magnitude as the nuclear matrix elements for β−β− decay. In Table III we also compare our results with the available QRPA calculations from Ref. [26], with the addition of some more recent calculations from Refs. [27,28]. The QRPA [26] NMEs are calculated taking into account GT and F contributions, and using the value $g_A = 1.25$. As in the case of β−β− decay, QRPA tends to give larger values than IBM-2, and these two methods seem to be in rather good correspondence with each other.
The calculation of nuclear matrix elements in IBM-2 can now be combined with the phase space factors calculated in [9] to produce our final results for half-lives for light neutrino exchange in Table IV and Fig. 2. The half-lives are calculated using the formula $[\tau^{0\nu}_{1/2,i}]^{-1} = G^{0\nu}_{i}\,|M^{0\nu}|^{2}\,(\langle m_\nu \rangle / m_e)^{2}$, where i = β+β+, ECβ+. The values in Table IV and Fig. 2 are for $\langle m_\nu \rangle$ = 1 eV; they scale as $\langle m_\nu \rangle^{-2}$ for other values. Comparing the half-life predictions listed in Table IV to the ones reported in Ref. [4] for 0νβ−β−, we can see that the values reported here are much larger. This is due to the fact that in the cases studied here the available kinetic energy is much smaller than in β−β− decay. Furthermore, the Coulomb repulsion on positrons from the nucleus gives a smaller decay rate. As also concluded in Refs. [21,22], the $^{124}$Xe 0νECβ+ decay is expected to have the shortest half-life. In the case of the neutrinoless double electron capture process, 0νECEC, the available kinetic energy is larger and Coulomb repulsion does not play a role.
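The half-life formula above can be evaluated numerically with a few lines of code. The sketch below is purely illustrative: the numerical inputs in the example call are placeholders rather than the tabulated values of the paper, and the unit and coupling-constant conventions of the phase space factor and matrix element must match those used in the formula quoted above.

```python
def halflife_0nu_years(G_per_year, M_0nu, m_nu_eV, m_e_eV=0.511e6):
    """Estimated 0-nu half-life in years for light-neutrino exchange,
    using 1/tau = G * |M|^2 * (<m_nu>/m_e)^2.

    G_per_year : phase space factor in yr^-1 (convention-dependent)
    M_0nu      : dimensionless nuclear matrix element
    m_nu_eV    : effective neutrino mass <m_nu> in eV
    """
    inverse_halflife = G_per_year * M_0nu ** 2 * (m_nu_eV / m_e_eV) ** 2
    return 1.0 / inverse_halflife


# Placeholder inputs (not values from the paper's tables):
# G = 1e-16 yr^-1, M = 4.0, <m_nu> = 1 eV.
print(f"{halflife_0nu_years(1e-16, 4.0, 1.0):.2e} yr")
```

Because the half-life scales as $\langle m_\nu \rangle^{-2}$, halving the assumed effective neutrino mass in such an estimate increases the predicted half-life by a factor of four.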
However, this decay mode cannot occur to the order of approximation we are considering, since it must be accompanied by the emission of one or two particles in order to conserve energy, momentum and angular momentum.
III. CONCLUSIONS
In this article we have presented an evaluation of nuclear matrix elements for 0νβ+β+/0νECβ+/0νECEC within the framework of IBM-2 in the closure approximation. The closure approximation is expected to be good for these decays since the virtual neutrino momentum is of order 100 MeV/c and thus much larger than the scale of nuclear excitations. By using these matrix elements and the phase space factors of Ref. [9], we have calculated the expected 0νβ+β+/0νECβ+ half-lives in all nuclei of interest with $g_A = 1.269$ and $g_V = 1$, given in Table IV and Fig. 2.
ACKNOWLEDGMENTS
This work was performed in part under the US DOE Grant DE-FG-02-91ER-40608 and Fondecyt Grant No. 1120462. We wish to thank K. Zuber for stimulating discussions.
IV. APPENDIX A
A detailed description of the IBM-2 Hamiltonian is given in [23] and [29]. For most nuclei, the Hamiltonian parameters are taken from the literature [30][31][32][33][34][35][36]. The values of the Hamiltonian parameters, as well as the references from which they were taken, are given in Table V. The quality of the description can be seen from these references and ranges from very good to excellent. | 2015-09-17T08:13:11.000Z | 2013-05-07T00:00:00.000 | {
"year": 2015,
"sha1": "73dad445ae30af8bf6082a38f9d18d6bef1eef54",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevC.87.057301",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "f2dd76018bfc6fc1c977bb883516b24771cc00e9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
208062373 | pes2o/s2orc | v3-fos-license | Determinants of quality of life in pediatric- and adult-onset multiple sclerosis
Objective To evaluate quality of life (QoL), measured by the EQ-5D, in adults with pediatric-onset multiple sclerosis (POMS) or adult-onset multiple sclerosis (AOMS) and explore determinants of QoL in both groups. Methods Data were collected from the nationwide Swedish multiple sclerosis (MS) registry. Demographic characteristics, EQ-5D-3 level, Multiple Sclerosis Impact Scale (MSIS-29) score, Expanded Disability Status Scale (EDSS) score, Symbol Digit Modalities Test score, relapses, and disease-modifying therapy (DMT) exposure were collected on an approximately annual basis (2011–2019). Patients with definite MS with ≥2 EQ-5D measurements collected between ages 18 and 50 were included. The principal outcome was the EQ-5D visual analogue scale (EQ-VAS) score. Linear mixed models compared all available EQ-VAS scores between patients with POMS and patients with AOMS and determinants of EQ-VAS among patients with POMS and patients with AOMS (assessed separately). Results A total of 5,094 persons met inclusion criteria: 354 (6.9%) had POMS. A total of 21,357 unique EQ-5D scores were recorded. Most participants were female (70.0%) with a relapsing-onset disease course (98.1%). There was no difference in EQ-VAS scores between patients with POMS and patients with AOMS following adjustment for confounders (β-coefficient for patients with POMS vs patients with AOMS [reference]: 0.99; 95% confidence interval −0.89 to 2.87). Experiencing a relapse, severe neurologic disability (EDSS ≥6.0 vs <3.0), and higher MSIS-29 psychological score were consistently associated with lower QoL, while higher information processing efficiency and exposure to first-line DMTs were associated with higher QoL scores in both groups. Conclusions There were no differences in QoL between patients with POMS and patients with AOMS in adulthood. Findings provide support for a focus on reducing neurologic disability and improving psychological status as approaches to potentially improve the QoL of persons with MS.
Multiple sclerosis (MS) is a chronic, unpredictable disease that can affect mobility, cognition, and mood, and has consistently been associated with detriments to quality of life (QoL). 1 Disease onset typically occurs in early adulthood, 2 but for fewer than 10% of cases, it arises in childhood. 3 On average, patients with pediatric-onset MS (POMS) reach disability milestones at a significantly younger age than their adult-onset counterparts, and they appear to be vulnerable to heightened inflammation early in disease 4 and increased cognitive impairment. 5 Qualitative research has suggested that a POMS diagnosis is particularly difficult for patients and their families to process. 6 Few studies have quantitatively assessed the longer-term effect of a pediatric onset of MS on QoL through adulthood. Using Swedish nationwide data, we aimed to evaluate QoL, measured by the EQ-5D, in persons with POMS in comparison to persons with adult-onset MS (AOMS), and explore potential determinants of QoL in both groups. We hypothesized that persons with POMS would experience greater impairments to QoL than persons with AOMS.
Methods
Data sources and study population Data were collected from the Swedish MS Registry (SMSreg), a nationwide quality register that is estimated to capture 80% of all MS cases in Sweden. 7 Demographic and MS-specific clinical information are collected by neurologists and nurses from persons that attend any neurology clinic in Sweden and include sex, date of MS onset and diagnosis, clinical course, disease-modifying therapy (DMT) exposure, and region of residence in Sweden.
To be included in this cohort study, persons must have been registered in the SMSreg with a definite MS diagnosis. Following recommendations from the International Pediatric MS Study Group, pediatric-onset was defined as having a first recorded clinical symptom of MS prior to the age of 18. 8 Adult-onset cases included all persons with an MS onset on or after the age of 18 years. To ensure a comparable age and disease duration range between groups, only persons who completed the EQ-5D between ages 18 and 50 and had a disease duration of less than 35 years were included. Participants were required to have a minimum of 2 complete EQ-5Ds recorded. Baseline was considered the date of the first EQ-5D assessment.
Outcomes and exposures
The EQ-5D-3L 9 was introduced across Sweden as a component of the Immunomodulation and MS Epidemiology study, a postmarketing phase 4 surveillance study of newly introduced second-line DMTs in Sweden in September 2011. It was adopted as a component of routine care of persons with MS and completed on an approximately annual basis. The EQ-5D is a patient-reported measure of health status, relating to 5 dimensions of health (mobility, self-care, usual activities, pain/ discomfort, and anxiety/depression) and a visual analogue scale (referred to as EQ-VAS). Persons are asked to report on each health dimension as having no problems, some problems, or extreme problems. Responses to these 5 questions can be converted into a single index value. The index values are anchored in 1 (full health) and 0 (death), but the scale also allows for negative values, representing states worse than death. The EQ-VAS is similar to a thermometer, on which persons score their perceived health from 0 (worst imaginable health state) to 100 (best imaginable health state). 9 Additional metrics available in the SMSreg included Multiple Sclerosis Impact Scale-29 (MSIS-29), Expanded Disability Status Scale (EDSS), Symbol Digit Modalities Test (SDMT) scores, and relapses. All tests were assessed along with the EQ-5D on an approximately annual basis, coinciding with an individual's neurology clinic visit. The MSIS-29 is a patientreported questionnaire that measures the effect of MS based on a set of 29 questions in terms of physical (20 items) and psychological health (9 items), where each question has 5 response levels (not at all, a little, moderately, quite a bit, extremely). 10 Cumulative scores are generated separately, and transformed on a scale of 0-100, with 0 representing no perceived disability and 100 representing severe disability. 10 The EDSS is the most commonly used measure of neurologic disability in MS, and is completed by the neurologist as a component of routine clinical care. 11 We explored the EDSS as a continuous outcome, and categorized as mild (0.0-2.5), moderate (3.0-5.5), and severe (6.0+) neurologic disability. The SDMT is a validated and objective measure of information processing efficiency in MS. 12 Scores range from 0 to 120, with higher scores indicating greater information-processing efficiency.
Clinical relapses are recorded in the SMSreg, and defined as the acute appearance of neurologic disturbance, lasting ≥24 hours, unrelated to fever or infection. 13 Persons were considered actively relapsing if they had an EQ-5D assessment within 90 days of a relapse onset. Information on DMT use was collected prospectively and included product name and start and stop dates. DMTs were classified as first-line (interferon-β, glatiramer acetate, teriflunomide, or dimethyl fumarate) and second-line (fingolimod, daclizumab, rituximab, Glossary AOMS = adult-onset multiple sclerosis; CI = confidence interval; DMT = disease-modifying therapy; EDSS = Expanded Disability Status Scale; EQ-VAS = EQ-5D visual analogue scale; IQR = interquartile range; MS = multiple sclerosis; MSIS-29 = Multiple Sclerosis Impact Scale-29; OR = odds ratio; POMS = pediatric-onset multiple sclerosis; QoL = quality of life; SDMT = Symbol Digit Modalities Test; SMSreg = Swedish MS Registry. mitoxantrone, and natalizumab) based on prescribing regulations in Sweden. Complete data were available from the SMSreg until February 24, 2019.
Statistical analyses
Clinical and demographic characteristics and patient-reported outcomes were summarized using frequency (%), mean (SD), or median (interquartile range [IQR]), based on the distribution of the data, and compared between POMS and AOMS using the Pearson χ 2 or Fisher exact test for categorical variables, and the Student t test or Wilcoxon rank-sum test for continuous variables. Cell sizes less than 5 were suppressed to protect confidentiality.
The principal outcomes of interest were the 5 EQ-5D dimensions and the EQ-VAS score. The 5 EQ-5D dimensions were transformed such that "moderate and extreme problems" were combined as "any problems" due to the small cell sizes in the category of "extreme problems." Reporting "any problems" was compared to "no problems" using logistic mixed-effects models between POMS and AOMS (reference cohort). Mixed-effects allowed for the incorporation of all available scores contributed by an individual, while accounting for the clustering within persons. All analyses were adjusted for sex and time-varying age at assessment, disease course, and DMT exposure. Results were reported as odds ratios (ORs) with 95% confidence intervals (CIs).
The EQ-VAS was selected as an outcome because it is a comprehensive measure of QoL, defined by the individual, as opposed to a summary score based on preselected dimensions of health. 9 It was modeled as a continuous variable using mixed-effects multivariable linear regression. We compared the EQ-VAS score between patients with POMS and patients with AOMS, and separately assessed the patients with POMS and patients with AOMS to explore potential predictor variables in each cohort. First, we explored the individual effect of each of the 5 health dimensions captured in the EQ-5D on the EQ-VAS at each visit. We then examined sex, age at onset, and region of Sweden (modeled as South, Central, or North Sweden) as fixed variables. Age at assessment, disease duration at assessment, disease course at assessment (modeled as progressive or relapsing), MSIS-29 physical and psychological score, EDSS, SDMT, and DMT exposure were assessed as time-varying covariates. MSIS-29, EDSS, and SDMT scores were included if they were collected on the same day as the EQ-5D assessment or within the previous year. DMT exposure was modeled as first-line therapy or second-line therapy vs no treatment. Persons were considered first- or second-line DMT exposed if they were receiving that drug at the time of the EQ-5D assessment. Covariates were selected based on clinical importance (age and sex) or statistical significance in univariate analyses. The MSIS-29 physical score and EDSS were not included in the same model, given how closely they reflect one another. 14 The most parsimonious multivariable model was chosen by means of the Akaike Information Criterion. 15 Results were presented as β-coefficients with 95% CI. Linear mixed model assumptions were verified using QQ plots to test for a normal distribution, and by plotting residuals vs fitted values to check for linearity and constant variance.
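A minimal sketch of this kind of mixed-effects model is shown below using Python's statsmodels; the registry analysis does not state which software was used, and the data file and column names here are hypothetical stand-ins for the long-format registry extract described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data frame: one row per EQ-5D assessment, with a
# patient identifier and the time-varying covariates described in the text.
df = pd.read_csv("eqvas_long.csv")  # columns: patient_id, eq_vas, sex, age,
                                    # edss_cat, sdmt, msis_psych, dmt

# Linear mixed model for the EQ-VAS with a random intercept per patient,
# mirroring the adjusted model (sex, age at assessment, EDSS category,
# SDMT, MSIS-29 psychological score, DMT exposure).
model = smf.mixedlm(
    "eq_vas ~ sex + age + C(edss_cat) + sdmt + msis_psych + C(dmt)",
    data=df,
    groups=df["patient_id"],
)
result = model.fit()
print(result.summary())
```

The logistic mixed-effects analyses of the five EQ-5D dimensions described earlier follow the same long-format layout, with a binary "any problems" indicator replacing the EQ-VAS as the outcome.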
Finally, we modeled the change in EQ-VAS score between visits (including all available visits) using linear mixed effects models to better estimate within-person effects over time. 16 The dependent variable in these models was the change in EQ-VAS score between visits, and potential predictors included sex, disease course, baseline age, baseline EQ-VAS, and change in EDSS, SDMT, MSIS-29 physical or psychological scores, and time between visits. Statistical analyses were performed using R Version 3.4.3 (Vienna, Austria; R-project.org).
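The change-score analysis could be set up along the following lines; again, the data, variable names, and model formula are illustrative assumptions rather than the registry analysis itself.

```r
library(lme4)

# Synthetic placeholder data: EQ-VAS at successive visits for each person (illustration only).
set.seed(4)
n_id <- 200
d <- data.frame(
  id           = rep(seq_len(n_id), each = 4),
  visit        = rep(1:4, n_id),
  sex          = rep(sample(c("F", "M"), n_id, replace = TRUE), each = 4),
  baseline_age = rep(runif(n_id, 18, 60), each = 4),
  eqvas        = rnorm(4 * n_id, 72, 20),
  edss         = sample(seq(0, 7, 0.5), 4 * n_id, replace = TRUE)
)

# Change between consecutive visits as the dependent variable.
d <- d[order(d$id, d$visit), ]
d$d_eqvas        <- ave(d$eqvas, d$id, FUN = function(x) c(NA, diff(x)))
d$d_edss         <- ave(d$edss,  d$id, FUN = function(x) c(NA, diff(x)))
d$baseline_eqvas <- ave(d$eqvas, d$id, FUN = function(x) rep(x[1], length(x)))

chg <- lmer(d_eqvas ~ sex + baseline_age + baseline_eqvas + d_edss + (1 | id),
            data = d, REML = FALSE)
summary(chg)
```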
Standard protocol approvals, registrations, and patient consents
The study was approved by the Regional Ethical Review Board of Stockholm, and informed consent was provided from patients for the collection of their clinical information.
Data availability
Data related to the current article are available from Jan Hillert, Karolinska Institutet. To share data from the Swedish MS Registry, a data transfer agreement needs to be completed between Karolinska Institutet and the institution requesting data access. This is in accordance with the data protection legislation in Europe (General Data Protection Regulation). Persons interested in obtaining access to the data should contact Jan Hillert (jan.hillert@ki.se).
Results
Of 6,722 persons with at least one EQ-5D in the SMSreg, 5,094 met inclusion criteria, of whom 354 (6.9%) had their MS onset in childhood. A total of 21,357 unique EQ-5D assessments were recorded between 2011 and 2019. The median age at onset was 16.3 years (IQR 14.4-17.3) for POMS and 28.9 years (IQR 24.2-34.7) for AOMS. At baseline, patients with POMS were 10 years younger, on average, than the patients with AOMS (27.4 vs 36.9 years). Patients in the POMS group were more likely to have been exposed to a second-line DMT, live in the north of Sweden, and have a higher mean SDMT at baseline than the AOMS group (p < 0.05 for all). The sex ratio, disease course, baseline EQ-VAS, EDSS, MSIS-29 physical and psychological scores, and number of relapses during follow-up were all comparable between groups (table 1, unadjusted for age).
The median EQ-VAS score over the full follow-up was 76 (IQR 60-89), and the mean (SD) was 72.1 (20.3) for the full cohort. A total of 466 (9.14%) persons reported having a "best imaginable health state" on the EQ-VAS (score of 100), and 11 (0.22%) reported a "worst imaginable health state" (score of 0) at least once during follow-up. Less than one-third of the cohort (26.2%) reported problems with mobility at baseline. Few persons reported problems with self-care (5.5%). About one-third (30.7%) reported problems with usual activities, while 55.6% reported pain/discomfort, and 53.7% reported anxiety/depression (table 2).
Differences in EQ-5D dimensions and EQ-VAS between POMS and AOMS

Over the full follow-up, patients with POMS were less likely than patients with AOMS to report anxiety/depression (any problems vs no problems; OR 0.71; 95% CI 0.53-0.95).
Determinants of QoL in POMS
In univariate linear longitudinal analyses, all 5 dimensions of health were significantly associated with the EQ-VAS in both POMS and AOMS, with effects ranging from −21.2 to −12.0 for "any problems" vs "no problems." Onset age, age at assessment, and disease duration at assessment were not associated with the EQ-VAS score among patients with POMS (table 3). Women with POMS had significantly lower EQ-VAS scores than men, as did persons living in central and northern Sweden, relative to southern Sweden. Persons with moderate (EDSS ≥3.0 and <6.0) or severe disability on the EDSS (6.0+) reported lower EQ-VAS scores than persons with mild disability (EDSS <3.0). Higher SDMT scores were associated with higher EQ-VAS scores, while increases on either MSIS-29 subscale were associated with lower EQ-VAS scores. Both first- and second-line DMT exposure were associated with higher EQ-VAS scores compared to no therapy. Relative to periods of remission, actively relapsing was associated with lower EQ-VAS scores (table 4). The best-fitting adjusted model contained sex, age at assessment, EDSS, SDMT, MSIS-29 psychological score, and DMT exposure. EDSS significantly contributed to EQ-VAS score, as did the SDMT, the psychological effect of MS, and exposure to first-line, but not second-line, DMTs (table 5).

Determinants of QoL in AOMS

Higher onset age, age at assessment, and disease duration at assessment were all associated with lower scores on the EQ-VAS (table 3). Women with AOMS had significantly lower EQ-VAS scores than men. Persons who lived in central Sweden reported lower EQ-VAS scores than persons living in southern Sweden. Moderate or severe disability on the EDSS was associated with lower EQ-VAS scores, while increased information processing efficiency on the SDMT and lower MSIS-29 physical and psychological scores were each independently associated with a higher EQ-VAS score. First- and second-line DMT exposure were associated with an increase on the EQ-VAS compared to receiving no therapy. Actively relapsing was associated with significantly lower EQ-VAS scores (table 4). In the fully adjusted model, moderate and severe EDSS, increased psychological effect of MS, and experiencing a relapse contributed to a lower EQ-VAS score, while exposure to first-line DMTs and a higher SDMT score contributed to higher EQ-VAS scores (table 5).
Determinants of change in QoL
There was no difference in change over time between groups (β-coefficient for POMS vs AOMS: −0.40; 95% CI −1.49 to 0.69), following adjustment for sex, age, disease course, DMT exposure, and baseline EQ-VAS score. In the POMS group, significant determinants of decreasing EQ-VAS score were higher baseline EQ-VAS score and increasing MSIS-29 psychological score (table 6), while male sex was associated with increasing EQ-VAS. Among the AOMS cohort, higher baseline EQ-VAS score, increases in EDSS and MSIS-29 psychological score, and transitioning from remission to relapse led to reductions on the EQ-VAS. Exposure to a first-line DMT and improvement on the SDMT contributed to improved EQ-VAS (table 6).
Discussion
This was a nationwide, longitudinal cohort study of QoL among persons with POMS and AOMS. At baseline, problems with pain/discomfort or anxiety/depression were reported in over half of the cases, while detriments to mobility and usual activities were noted among over one-quarter. Fewer than 10% of the cohort reported any problems with self-care. We found no difference in overall QoL (measured by EQ-VAS) between persons with AOMS and persons with POMS, but those with POMS were less likely to report anxiety/depression over the full follow-up period. In multivariable analyses, similar determinants of QoL were identified in both groups. Experiencing an acute relapse, severe neurologic disability on the EDSS, and high psychological effect of MS (as measured by the MSIS-29 psychological scale) were consistently associated with lower QoL, while higher information processing efficiency (SDMT score) and exposure to first-line DMTs were associated with higher QoL scores. While there was minimal change in QoL over time for the entire cohort, worsening score on the MSIS-29 psychological score was associated with worsening QoL in both groups.
An individual's health-related QoL is largely influenced by their perspective of their position in life in the context of the culture in which they live. 17 It can also be thought of as "the extent to which an individual's hopes and ambitions are matched and fulfilled by experience." 18 In the context of chronic illness, this can mean that a person's perceived QoL is in a dynamic state, constantly adapting to the current stage of disability. Given the known consequences of POMS on physical and cognitive abilities relative to AOMS, 4,19 it is notable that, overall, patients with POMS did not perceive their health status as significantly worse than did patients with AOMS.
Reporting problems in any of the 5 domains of health collected on the EQ-5D was associated with a lower mean EQ-VAS score. This raises the question of whether specifically targeting these areas could improve QoL. Anxiety or depression were endorsed by over half of the cohort, suggesting that active management of both (pharmacologic or nonpharmacologic) should be a clinical priority, given the high prevalence and potential to improve QoL.
Experiencing an acute relapse and neurologic disability, measured by the EDSS, were both significant contributors to impaired QoL, in line with previous findings. 1,20 Slowing disability progression and reducing relapse rates have long been goals of MS treatment, and these results suggest that these are meaningful targets. First-line DMT exposure was associated with a higher QoL and improved QoL among patients with AOMS, suggesting that drug treatment may contribute to improvements in QoL. These results should be interpreted cautiously, however. Second-line DMTs were not consistently associated with improved QoL, which suggests that findings may be due to indication bias. 21 Persons with more severe disease are more likely to be treated with second-line therapies (or switched from first- to second-line therapies), and this disease worsening likely also contributes to worse health-related QoL.
While all of these factors were statistically significant, their clinical significance is less clear. There are, to our knowledge, no previous studies on minimal clinically important differences on the EQ-VAS for this particular patient group. As EQ-VAS represents the respondent's assessment of his or her own health (as opposed to the EQ-5D index value, which is calculated through value sets based on preferences from the general population), we consider EQ-5D VAS scores to be more closely linked to the patient's own perspective of a meaningful change, compared to the index values.
Both psychological effect of MS and detriments to information processing efficiency were consistently associated with lower QoL scores, albeit with small effect sizes. Nonetheless, addressing the psychological effects and cognitive detriments of MS may also be a means of improving QoL, through improved support for these symptoms.
Six identified studies explored QoL among persons with POMS and all employed the Pediatric Quality of Life Inventory. [22][23][24][25][26][27] Sample sizes ranged from 41 23 to 64 MS cases, 24 and all were completed in the United States [22][23][24] or Italy. [25][26][27] Similar to our findings, the factor most consistently associated with a worse QoL among patients with POMS during childhood was higher EDSS score. 22 was also consistently associated with reduced QoL scores, and one study found that affective disorders were associated with lower QoL, while resilience competence was associated with higher QoL. 26 We found no effect of DMT exposure on within-person QoL change in POMS, but a single-arm observational study of a self-injecting device for β-interferon reported improvements in QoL from baseline to end of treatment at 52 weeks. 27 The literature regarding determinants of QoL in patients with AOMS is much larger and has been summarized in a review 1 ; consistent predictors of reduced health-related QoL include disability, depression, cognitive impairment, pain, hopelessness, and lack of autonomy and support. 1 While we did not have information on each of these determinants, our results were consistent with those captured, including impairments to cognition, psychological effect, and physical disability.
Strengths of this study include the use of the large, population-based Swedish MS Registry. The EQ-5D has been described as satisfactory for use within the MS population 28 ; it is the most widely used metric of health-related QoL in the QoL and health economics literature, 29 and it has been used extensively within the field of MS research. 1,30 Nonetheless, employing the EQ-VAS as the primary outcome may have limited interpretation of findings as a minimal clinically important difference has not been established in the context of MS, which precluded us from commenting on the clinical significance of our results.
The use of objective information on neurologic disability and information processing efficiency, and the availability of serially collected measures of QoL enabling us to evaluate the individual-level effects on changing QoL over time, were further strengths of the study. Limitations include a lack of information on other factors that likely contribute to QoL, such as fatigue, body mass index, and work and family situation. Though this represents a large cohort of persons with POMS given the rarity of the condition, we may still have been underpowered to detect differences within this group. This may have contributed to some of the differences observed between POMS and AOMS following stratification by group. For instance, effect sizes were often similar between groups, but the wide CIs among POMS meant that statistical significance was not achieved. It is possible that a larger sample size, followed over a longer period of time, may have elicited different findings. Few persons with POMS reported having "extreme" problems on the EQ-5D domains, which precluded us from exploring this category as an outcome. All persons in this study were followed in an outpatient neurology clinic, and most were receiving a DMT. It is possible that these results are not generalizable to the wider MS population of untreated persons, or persons who do not attend clinic. 31 Finally, Sweden is a country that consistently scores high on QoL metrics at the population level 32 ; these results may not be generalizable to persons living in other nations.

(Table 6: Multivariable models of determinants of change in EQ-5D visual analogue scale (EQ-VAS) score in patients with pediatric-onset multiple sclerosis (POMS) and patients with adult-onset multiple sclerosis (AOMS), assessed separately.)
Overall QoL among persons with MS did not appear to be influenced by having a pediatric onset of disease. Severe neurologic disability, experiencing a relapse, increased psychological effect of MS, and reduced information processing efficiency were consistent and significant determinants of lower QoL in both POMS and AOMS. The ultimate aim of chronic disease treatment is to improve longevity and QoL. Regardless of onset age, this study highlights that impairments to QoL are common in MS, and that management of disease should incorporate efforts to improve well-being and QoL. These findings should be utilized to assist health care providers in identifying persons who may be at risk of declines in QoL. | 2019-11-17T14:02:50.624Z | 2019-11-15T00:00:00.000 | {
"year": 2020,
"sha1": "83cb68c42f0c1eb3d17e3efe229760414b1b80cb",
"oa_license": "CCBY",
"oa_url": "https://n.neurology.org/content/neurology/94/9/e932.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0da1e25190bf7ed7c7b337bb45fdd1a95d4b8c80",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16992580 | pes2o/s2orc | v3-fos-license | Homogenized dynamics of stochastic partial differential equations with dynamical boundary conditions
A microscopic heterogeneous system under random influence is considered. The randomness enters the system at physical boundary of small scale obstacles as well as at the interior of the physical medium. This system is modeled by a stochastic partial differential equation defined on a domain perforated with small holes (obstacles or heterogeneities), together with random dynamical boundary conditions on the boundaries of these small holes. A homogenized macroscopic model for this microscopic heterogeneous stochastic system is derived. This homogenized effective model is a new stochastic partial differential equation defined on a unified domain without small holes, with static boundary condition only. In fact, the random dynamical boundary conditions are homogenized out, but the impact of random forces on the small holes' boundaries is quantified as an extra stochastic term in the homogenized stochastic partial differential equation. Moreover, the validity of the homogenized model is justified by showing that the solutions of the microscopic model converge to those of the effective macroscopic model in probability distribution, as the size of small holes diminishes to zero.
Introduction
Stochastic effects in the multiscale modeling of complex phenomena have drawn more and more attention in many areas such as material science [10], climate dynamics [27], chemistry and biology [21,51]. Stochastic partial differential equations (SPDEs or stochastic PDEs) arise naturally as mathematical models for multiscale systems under random influences. The need to include stochastic effects in mathematical modeling of some realistic complex behaviors has become widely recognized in science and engineering. But implementing this approach poses some challenges both in mathematical theory and computation [44,16,26,27,51,43]. The addition of stochastic terms to mathematical models has led to interesting new mathematical problems at the interface of dynamical systems, partial differential equations, scientific computing, and probability theory.
Sometimes, noise affects a complex system not only inside the physical medium but also at the physical boundary. Such random boundary conditions arise in the modeling of, for example, the air-sea interactions on the ocean surface [42], heat transfer in a solid in contact with a fluid [31], chemical reactor theory [32], and colloid and interface chemistry [56]. Random boundary conditions may be static or dynamical. The static boundary conditions, such as Dirichlet or Neumann boundary conditions, do not involve with time derivatives of the system state variables. On the contrary, the dynamical boundary conditions contain such time derivatives. Randomness in such boundary conditions are often due to various fluctuations.
In this paper we consider a microscopic heterogeneous system, modeled by a SPDE with random dynamical boundary condition, in a medium which exhibits small-scale spatial heterogeneities or obstacles. One example of such microscopic systems of interest is composite materials containing microscopic holes (i.e., cavities), under the impact of random fluctuations in the domain and on the surface of the holes [28,35]. A motivation for such a model is based on the consideration that the interaction between the atoms of the different compositions in a composite material causes the thermal noise when the scale of the heterogeneity scale is small. A similar consideration appears also in a microscopic stochastic lattice model [6] for a composite material. Here the microscopic structure is perturbed by random effect and the complicated interactions on the boundary of the holes is dynamically and randomly evolving. The heterogeneity scale is assumed to be much smaller than the macroscopic scale, i.e., we assume that the heterogeneities are evenly distributed. From a mathematical point of view, one can assume that microscopic heterogeneities (holes) are periodically placed in the media. This spatial periodicity with small period can be represented by a small positive parameter ǫ (i.e., the period). In fact we work on the spatial domain D ǫ , obtained by removing S ǫ , a collection of small holes of size ǫ, periodically distributed in a fixed domain D. When taking ǫ → 0, the holes inside domain D are smaller and smaller and their numbers goes to ∞. This signifies that the heterogeneities are finer and finer.
In other words, we consider a spatially extended system with state variable u ǫ , where stochastic effects are taken into account both in the model equation and in the boundary conditions, defined on a domain perforated with small scale holes. Specifically, we study a class of stochastic partial differential equations driven by white noise on a perforated domain, with random dynamical boundary conditions imposed on ∂S ǫ × (0, T ). This model will be described in more detail in the next section. The goal is to derive a homogenized effective equation, which is a new stochastic partial differential equation (see Theorems 5.1, 6.1, 6.2 and 6.3), for the above microscopic heterogeneous system, by homogenization techniques in the sense of probability. Homogenization theory has been developed for deterministic systems, and a compactness argument for the solutions {u ǫ } ǫ in some function space is a key step in various homogenization approaches [12]. However, due to the appearance of the stochastic terms in the microscopic system considered in this paper, such a compactness result does not hold for this stochastic system. Fortunately, compactness in the sense of probability, that is, the tightness of the distributions for {u ǫ }, still holds. So one appropriate way is to homogenize the stochastic system in the sense of probability. It is shown that the solution u ǫ of the microscopic or heterogeneous system converges to that of the macroscopic or homogenized system as ǫ ↓ 0 in probability distribution. This means that the distribution of {u ǫ } ǫ weakly converges, in some appropriate space, to the distribution of a stochastic process which solves the macroscopic effective equation.
It is interesting to note that, for the above system with random dynamical boundary conditions, the random force on the boundary of microscopic scale holes leads, in the homogenization limit, to a random force distributed all over the physical domain D, even when the model equation itself contains no stochastic influence in the domain; see Remark 5.2 in §5. We could also say that the impact of small scale random dynamical boundary conditions is quantified or carried over to the homogenized model as an extra random forcing. Therefore, the homogenized effective model is a new stochastic partial differential equation, defined on a unified domain without holes.
In the present paper, the two-scale convergence techniques are employed in our approach. Two-scale convergence is an important tool in homogenization theory, the formal mathematical procedure for deriving macroscopic models from microscopic systems. The two-scale convergence method contains more information than the usual weak convergence method; see [2] or §4. Moreover, by using two-scale convergence we do not need the extension operator introduced in [13].
Partial differential equations (PDEs) with dynamical boundary conditions have been studied recently in, for example, [4,20,22,23,25,47] and reference therein. The parabolic SPDEs with noise in the static Neumann boundary conditions have also been considered in [16,17,36]. In [11], the authors have studied well-posedness of the SPDEs with random dynamical boundary conditions. One of the present authors, with collaborators, has considered [18,57] dynamical issues of SPDEs with random dynamical boundary conditions. The homogenization problem for the deterministic systems defined in perforated domains or in other heterogeneous media has been investigated in, for example, [8,39,40,46,48] for heat transfer in a composite material, [8,13,15] for the wave propagation in a composite material and [34,38] for the fluid flow in a porous media. For a systematic introduction in homogenization in the deterministic context, see [12,28,45,35]. In [47], the effective macroscopic dynamics of a deterministic partial differential equation with deterministic dynamical boundary condition on the microscopic heterogeneity boundary is studied.
Recently there are also works on homogenization of partial differential equations (PDEs) in the random context; see [29,37,41,28] for PDEs with random coefficients, and [7,58,59,28] for PDEs in randomly perforated domains. A basic assumption in these works is the ergodic hypotheses on the random coefficients, for the passing of the limit as ǫ → 0. Note that the microscopic models in these works are partial differential equations with random coefficients, so-called random partial differential equations (random PDEs) [9,30,41,34,29,53], instead of stochastic PDEs -PDEs with noises -in the present paper; see also [52]. Another novelty in the present paper is that the microscopic system is under the influence of random dynamical boundary conditions. We first consider the linear system and then present results about nonlinear systems with special nonlinear terms. This paper is organized as follows. The problem formulation is stated in §2. Section 3 is devoted to basic properties of the microscopic heterogeneous system, and some knowledge to be used in our approach is introduced in §4. The homogenized effective macroscopic model for the linear system is derived in §5. In the last section, homogenized effective macroscopic models are obtained for three types of nonlinear systems.
Problem formulation
Let the physical medium D be an open bounded domain in R n , n ≥ 2, with smooth boundary ∂D, and let ǫ > 0 be a small parameter. Let Y = [0, l 1 ) × [0, l 2 ) × · · · × [0, l n ) be a representative elementary cell in R n and S an open subset of Y with smooth boundary ∂S, such that S ⊂ Y . The elementary cell Y and the small cavity or hole S inside it are used to model small scale obstacles or heterogeneities in the physical medium D. Write l = (l 1 , l 2 , · · · , l n ). Define ǫS = {ǫy : y ∈ S}. Denote by S ǫ,k the translated image of ǫS by kl, k ∈ Z n , kl = (k 1 l 1 , k 2 l 2 , · · · , k n l n ). Let S ǫ be the set of all the holes contained in D and D ǫ = D\S ǫ . Then D ǫ is a periodically perforated domain with holes whose size is of the same order as the period ǫ. We remark that the holes are assumed to have no intersection with the boundary ∂D, which implies that ∂D ǫ = ∂D ∪ ∂S ǫ . See Fig. 1 for the case n = 2. This assumption is only needed to avoid technicalities, and the results of our paper remain valid without it [3].
In the sequel we use the notations Y * = Y \S and ϑ = |Y * |/|Y |, with |Y | and |Y * | the Lebesgue measures of Y and Y * respectively. Denote by χ the indicator function of Y * , which takes value 1 on Y * and value 0 on Y \ Y * . In particular, let χ A be the indicator function of a set A ⊂ R n . Also denote by ṽ the zero extension to the whole of D of any function v defined on D ǫ . Now, for a fixed final time T > 0, we consider an Itô type nonautonomous stochastic partial differential equation, system (2.1)-(2.4), defined on the perforated domain D ǫ with a random dynamical boundary condition on ∂S ǫ . Here b is a real constant, f : [0, T ] × D × R × R n → R satisfies some property which will be described later, ν ǫ is the exterior unit normal vector on the boundary ∂S ǫ , v 0 ∈ L 2 (∂S ǫ ) and u 0 ∈ L 2 (D). Moreover, W 1 (t, x) and W 2 (t, x) are mutually independent L 2 (D)-valued Wiener processes on a complete probability space (Ω, F , P) with a canonical filtration (F t ) t≥0 . Denote by Q 1 and Q 2 the covariance operators of W 1 and W 2 respectively. Here we assume that g i (t, x) ∈ L(L 2 (D)), i = 1, 2, and that there is a positive constant C T , independent of ǫ, such that the bound (2.5) holds, where {e j } ∞ j=1 are the eigenvectors of the operator −∆ on D with Dirichlet boundary condition, which form an orthonormal basis of L 2 (D). Here L(L 2 (D)) denotes the space of bounded linear operators on L 2 (D) and L Q i 2 = L Q i 2 (H) denotes the space of Hilbert-Schmidt operators related to the trace operator Q i [16]. We also denote by E the expectation operator with respect to P.
Let S be a Banach space and S ′ be the strong dual space of S. We recall the definitions and some properties of weak convergence and weak * convergence [54].
A sequence {s n } in S is said to converge weakly to s ∈ S if (s ′ , s n ) → (s ′ , s) for every s ′ ∈ S ′ , which is written as s n ⇀ s weakly in S. Note that (s ′ , s) denotes the value of the continuous linear functional s ′ at the point s.
Lemma 2.2. (Eberlein-Shmulyan)
Assume that S is reflexive and let {s n } be a bounded sequence in S. Then there exists a subsequence {s n k } and s ∈ S such that s n k ⇀ s weakly in S as k → ∞. If all the weak convergent subsequence of {s n } has the same limit s, then the whole sequence {s n } weakly converges to s.
If all the weakly * convergent subsequences of {s ′ n } have the same limit s ′ , then the whole sequence {s ′ n } weakly * converges to s ′ .
In the following, for a fixed T > 0, we always denote by C T a constant independent of ǫ. And denote by D T the set [0, T ] × D.
Basic properties of the microscopic model
In this section we present some estimates for solutions of the microscopic model (2.1), and then discuss the tightness of the distributions of the solution processes in some appropriate space. We focus our argument on the case of linear microscopic systems, where the term f is independent of u ǫ and ∇u ǫ and f (·, ·) ∈ L 2 (0, T ; L 2 (D)). Then we briefly extend this to the case of nonlinear microscopic systems with Lipschitz nonlinearities.
Define the space H 1 ǫ (D ǫ ) with the usual norm, and let γ ǫ : H 1 (D ǫ ) → L 2 (∂S ǫ ) be the trace operator with respect to ∂S ǫ , which is continuous [49]. We also write H = L 2 (D). Introduce the product function spaces X 0 ǫ and X 1 ǫ with the usual products and norms. Define an operator B ǫ on the space H 1 ǫ (D ǫ ). Associated with the operator A ǫ , we introduce the bilinear form a ǫ (·, ·) on X 1 ǫ ; the coercivity property (3.4) of a ǫ holds for some constants ᾱ, β > 0 which are also independent of ǫ. Write the C 0 -semigroup generated by the operator −A ǫ as S ǫ (t).
Then the system (2.1)-(2.4) can be rewritten as an abstract stochastic evolutionary equation (3.5) for z ǫ = (u ǫ , v ǫ ), with initial datum z 0 = (u 0 , v 0 ). The solution of (3.5) can be written in the mild sense (3.6), and it also admits a variational formulation (3.7). For the well-posedness of system (3.5) we have the following result.
Theorem 3.1. Assume that (2.5) holds. Then system (3.5) has a mild solution z ǫ on [0, T ], which is also a weak solution.

Proof. By the assumption (2.5), the classical result of [16] yields the local existence of z ǫ . By applying the stochastic Fubini theorem [16], it can be verified that the local mild solution is also a weak solution. Now we give a priori estimates which yield the existence of the weak solution on [0, T ] for any T > 0.
Applying the Itô formula to |z ǫ | 2 gives (3.11). By the coercivity (3.4) of a ǫ (·, ·), integrating (3.11) with respect to t and taking expectation on both sides of the resulting inequality, the Gronwall lemma gives the estimate (3.9). Notice that, by Lemma 7.2 in [16], together with the assumption on f and (3.6), we have the estimate (3.10). The proof is hence complete.
By the above result and the definition of z ǫ we have the following corollary.
We recall a probability concept. Let z be a random variable taking values in a Banach space S, namely, z : Ω → S. Denote by L(z) the distribution (or law) of z. In fact, L(z) is a Borel probability measure on S defined as [16] L(z)(A) = P{ω : z(ω) ∈ A}, for every event (i.e., a Borel set) A in the Borel σ−algebra B(S), which is the smallest σ−algebra containing all open balls in S.
As stated in §1, for the SPDE (2.1) we aim at deriving an effective equation in the sense of probability. A solution u ǫ may be regarded as a random variable taking values in L 2 (0, T ; L 2 (D ǫ )). So for a solution u ǫ of (2.1)-(2.4) defined on [0, T ], we focus on the behavior of the distribution of u ǫ in L 2 (0, T ; L 2 (D ǫ )) as ǫ → 0. For this purpose, the tightness [19] of distributions is necessary. Note that the function space changes with ǫ, which is a difficulty for obtaining the tightness of distributions. Thus we will treat {L(u ǫ )} ǫ>0 as a family of distributions on L 2 (0, T ; L 2 (D)) by extending u ǫ to the whole domain D. Recall that the distribution (or law) of ũ ǫ is defined as L(ũ ǫ )(A) = P{ω : ũ ǫ (ω) ∈ A} for every Borel set A in L 2 (0, T ; L 2 (D)).
First we define some function spaces which will be used in our approach. For a Banach space U and p > 1, define W 1,p (0, T ; U) as the space of functions h ∈ L p (0, T ; U) such that dh/dt ∈ L p (0, T ; U). For any α ∈ (0, 1), define W α,p (0, T ; U) as the space of functions h ∈ L p (0, T ; U) such that ∫ 0 T ∫ 0 T |h(t) − h(s)| p U /|t − s| 1+αp dt ds < ∞. For ρ ∈ (0, 1), we denote by C ρ (0, T ; U) the space of functions h : [0, T ] → U that are Hölder continuous with exponent ρ.
Remark 3.4. If f = f (t, x, u ǫ ) is nonlinear (i.e., it depends on u ǫ ) but is also globally Lipschitz in u ǫ , the results in Theorem 3.1 and Corollary 3.2 still hold. For example, see [11] for such SPDEs with stochastic dynamical boundary conditions. Moreover, by the Lipschitz property, we have |f (t, x, u ǫ )| L 2 (D) ≤ C T . Hence a similar analysis as in the proof of Theorem 3.3 yields the tightness of the distribution for u ǫ in this globally Lipschitz nonlinear case. This fact will be used in the beginning of §6 to get the homogenized effective model when f = f (t, x, u ǫ ) is globally Lipschitz nonlinear. In fact, in §6, we will also derive homogenized effective models for three types of nonlinearities f = f (t, x, u ǫ ) that are not globally Lipschitz in u ǫ .
Two-scale convergence and some preliminary results
In this section we present some basic results about the two-scale convergence [2,12].
In the following we denote by C ∞ per (Y ) the space of infinitely differentiable functions in R n that are periodic in Y . We also denote by L 2 per (Y ) or H 1 per (Y ) the completion of C ∞ per (Y ) in the usual norm of L 2 (Y ) or H 1 (Y ), respectively. We also introduce the space H 1 per (Y )/R, which is the space of equivalence classes of u ∈ H 1 per (Y ) under the equivalence relation u ∼ v if and only if u − v is constant on Y . A sequence u ǫ in L 2 (D T ) is said to two-scale converge to a limit u = u(t, x, y) ∈ L 2 (D T × Y ) when the oscillating-test-function identity recalled below holds for every admissible test function ϕ(t, x, y) that is Y -periodic in y. This two-scale convergence is written as u ǫ 2−s −→ u.
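For reference, the defining identity can be written in the standard form of Allaire [2]; the display below is the generic textbook formulation rather than a verbatim restatement of the paper's own numbered definition.

```latex
% Two-scale convergence of u_eps to u (standard formulation, cf. Allaire [2]):
\[
\lim_{\epsilon \to 0} \int_0^T\!\!\int_D u_\epsilon(t,x)\,
    \varphi\!\Bigl(t,x,\frac{x}{\epsilon}\Bigr)\,dx\,dt
  \;=\; \frac{1}{|Y|}\int_0^T\!\!\int_D\!\int_Y u(t,x,y)\,\varphi(t,x,y)\,dy\,dx\,dt,
\]
% for every admissible test function phi(t,x,y) that is Y-periodic in y.
```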
The following result ensures the existence of a two-scale limit; for the proof see [2,12]. Lemma 4.2. Let u ǫ be a bounded sequence in L 2 (D T ). Then there exist a function u ∈ L 2 (D T × Y ) and a subsequence u ǫ k with ǫ k → 0 as k → ∞ such that u ǫ k two-scale converges to u.
Remark 4.3.
Taking ϕ independent of y in the definition of two-scale convergence, u ǫ 2−s −→ u implies that u ǫ converges weakly in L 2 (D T ) to its spatial average ū(t, x) = (1/|Y |) ∫ Y u(t, x, y) dy. So, we see that, for a given bounded sequence in L 2 (D T ), the two-scale limit u(t, x, y) contains more information than the weak limit ū(t, x): u gives some knowledge of the periodic oscillations of u ǫ , while ū is just the average with respect to y. Another advantage of the use of two-scale convergence is that we do not need an extension operator such as that introduced in [13,15] in the homogenization procedure. For more properties of two-scale convergence we refer to [2].
The following result is useful when considering two-scale convergence of the product of two convergent sequences, see [2,12].
Lemma 4.4.
Let v ǫ be a sequence in L 2 (D T ) that two-scale converges to a limit v(t, x, y) ∈ L 2 (D T × Y ). Further assume that v ǫ satisfies condition (4.1), that is, the L 2 (D T ) norms of v ǫ converge to the corresponding L 2 (D T × Y ) norm of v. Then, for any sequence u ǫ ∈ L 2 (D T ) which two-scale converges to a limit u ∈ L 2 (D T × Y ), we have the weak convergence of the product u ǫ v ǫ to the y-average (1/|Y |) ∫ Y u(t, x, y) v(t, x, y) dy.

Remark 4.5. Condition (4.1) always holds for a sequence of functions ϕ(t, x, x/ǫ), with ϕ(t, x, y) ∈ L 2 (D T ; C per (Y )). Such functions v ǫ are called admissible test functions. With the additional condition (4.1), the two-scale convergence of v ǫ is also called strong two-scale convergence [2].
Let u ǫ be a sequence of functions defined on [0, T ] × D ǫ which is bounded in L 2 (0, T ; H 1 ǫ (D ǫ )). Then we have the following result concerning the two-scale limits of the bounded sequences ũ ǫ and ∇ x u ǫ ; for the proof see [2]. Lemma 4.6. There exist u(t, x) ∈ H 1 0 (D T ), u 1 (t, x, y) ∈ L 2 (D T ; H 1 per (Y )) and a subsequence u ǫ k with ǫ k → 0 as k → ∞, such that ũ ǫ k two-scale converges to χ(y) u(t, x) and the zero extension of ∇ x u ǫ k two-scale converges to χ(y)[∇ x u(t, x) + ∇ y u 1 (t, x, y)] as k → ∞, where χ(y) is the indicator function of Y * (which takes value 1 on Y * and value 0 on Y \ Y * ).
Since we consider the dynamical boundary condition, the technique of transforming the surface integrals into the volume integrals is useful in our approach. For this we follow the method of [55] (see also [14]) for the nonhomogeneous Neumann boundary problem for an elliptic equation.
For h ∈ L 2 (∂S), Y -periodic, define λ ǫ h ∈ H −1 (D) by ⟨λ ǫ h , ϕ⟩ = ǫ ∫ ∂S ǫ h(x/ǫ) ϕ(x) dσ x for ϕ ∈ H 1 0 (D). Then we have the following result about the convergence of the integral on the boundary.
For the proof we refer to [14].
Homogenized macroscopic model
In this section we derive the effective macroscopic model for the original model (2.1) by the two-scale convergence approach. We first obtain a two-scale limiting model. Then the homogenized macroscopic model is obtained by exploiting the relation between the weak limit and the two-scale limit.
For u ǫ in the set K δ , by Lemma 4.6 there are u(t, x) ∈ H 1 0 (D T ) and u 1 (t, x, y) ∈ L 2 (D T ; H 1 per (Y )) such that ũ ǫ and the zero-extended gradients two-scale converge, along a subsequence, to χ(y)u(t, x) and χ(y)[∇ x u + ∇ y u 1 ], respectively. Then, by Remark 4.3, ũ ǫ converges weakly to ϑu(t, x) in L 2 (D T ). In fact, by the compactness of K δ , the above convergence is strong in L 2 (D T ).
In the following, we will determine the limiting equation, which is a two-scale system that u and u 1 satisfy. Then the limiting equation (homogenized effective equation) satisfied by the limit can easily be obtained from the relation between the weak limit and the two-scale limit. Define a new probability space (Ω δ , F δ , P δ ), and denote by E δ the expectation operator with respect to P δ . Now we restrict the system to the probability space (Ω δ , F δ , P δ ). Replace the test function ϕ in (3.7) by ϕ ǫ (t, x) = φ(t, x) + ǫ Φ(t, x, x/ǫ), for smooth test functions φ(t, x) and Φ(t, x, y) with Φ being Y -periodic in y. We will consider the terms in (3.7) one by one.
By the choice of ϕ ǫ , noticing that χ Dǫ ⇀ ϑ weakly * in L ∞ (D), and using the condition (2.5), we can pass to the limit in the corresponding terms of (3.7). Integrating by parts and noticing that ũ ǫ converges strongly to ϑu(t, x) in L 2 (D T ), then using the choice of ϕ ǫ and Lemma 4.4, we can pass to the limit in the gradient term as well. Now we consider the integrals on the boundary. First, for a fixed T > 0, it is easy to pass to the limit in the deterministic integrals on the boundary; then, by the same method and the condition (2.5), we have the limit of the stochastic integral on the boundary. Combining the above analysis in (5.1)-(5.7) and using a density argument, we obtain a limit identity (5.8), valid for any φ ∈ H 1 0 (D T ) and Φ ∈ L 2 (D T ; H 1 per (Y )/R). Integrating by parts, we see that (5.8) is the variational formulation of the following two-scale homogenized system: ϑdu = − div x A(∇ x u) − bϑλ 1 u + ϑf dt + ϑg 1 dW 1 (t) + λg 2 dW 2 (t), (5.9) [∇ x u + ∇u 1 ] · ν = 0, on ∂Y * − ∂Y (5.10) where ν is the unit exterior normal vector on ∂Y * − ∂Y , and u 1 satisfies an integral identity (5.12) for any Φ ∈ H 1 0 (D T ; H 1 per (Y )). The problem (5.12) has a unique solution for any fixed u, and so A(∇ x u) is well-defined. Furthermore, A(∇ x u) satisfies coercivity and Lipschitz-type bounds with some constants α, β > 0, for any ξ, ξ 1 , ξ 2 ∈ H 1 0 (D). For more detailed properties of A(∇u) and (5.12) we refer to [24]. Then, by the classical theory of SPDEs [16], the system (5.9)-(5.10) is well-posed.
In fact A(∇u) can be reduced to multiplication by the classical homogenized matrix: letting {e i } n i=1 be the canonical basis of R n and w i the solution of the cell problem posed on the spatial elementary cell, a simple calculation yields A(∇ x u) = A * ∇ x u, with A * = (A * ij ) the classical homogenized matrix given by (5.19). Then the above two-scale system (5.9) is equivalent to a homogenized system in the macroscopic variable alone. Letting U(t, x) = ϑu(t, x), we thus obtain the limiting homogenized equation (5.21). And then u * , which we mentioned at the beginning of this section, satisfies (5.21) with W = (W 1 , W 2 ) replaced by a Wiener process W * with the same distribution as W . By the classical existence result [16], the homogenized model (5.21) is well-posed. We formulate the main result of this section as follows.
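For concreteness, the cell problems and the homogenized matrix admit the following standard form for perforated domains (see, e.g., [12,28]); the display below is offered as the generic textbook version of what the cell problem and (5.19) look like, not a verbatim restatement of the paper's own numbered formulas.

```latex
% Cell problem for w_i on the perforated unit cell Y^* (standard form), i = 1,...,n,
% followed by the classical homogenized matrix A^* = (A^*_{ij}):
\begin{gather*}
  -\Delta_y w_i = 0 \ \text{ in } Y^*, \qquad
  (\nabla_y w_i + e_i)\cdot\nu = 0 \ \text{ on } \partial S, \qquad
  w_i \ \text{ $Y$-periodic},\\[4pt]
  A^{*}_{ij} \;=\; \frac{1}{|Y|}\int_{Y^*}\bigl(\nabla_y w_i + e_i\bigr)\cdot\bigl(\nabla_y w_j + e_j\bigr)\,dy .
\end{gather*}
```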
Theorem 5.1. (Homogenized macroscopic model) Assume that (2.5) holds. Let u ǫ be the solution of (2.1)-(2.4). Then for any fixed T > 0, the distribution L(ũ ǫ ) converges weakly to µ in L 2 (0, T ; H) as ǫ ↓ 0, with µ being the distribution of U, which is the solution of the homogenized effective equation (5.22), with the boundary condition U = 0 on ∂D, the initial condition U(0) = u 0 /ϑ and the effective matrix A * = (A * ij ) determined by (5.19). Moreover, the constant coefficient ϑ = |Y * |/|Y | is defined in the beginning of §2 and λ = |∂S|/|Y | is defined in (4.2).
Proof. Noticing the arbitrariness of the choice of δ, this is a direct result of the analysis in the first part of this section, by the Skorohod theorem and the L 2 (Ω δ )-convergence of ũ ǫ on (Ω δ , F δ , P δ ).
Remark 5.2. It is interesting to note the following fact. Even when the original microscopic model equation (2.1) is a deterministic PDE (i.e., g 1 = 0), the homogenized macroscopic model (5.22) is still a stochastic PDE, due to the impact of random dynamical interactions on the boundary of small scale heterogeneities.
Remark 5.3. For the macroscopic system (5.22), we see that the fast scale random fluctuations on the boundary are recognized or quantified in the homogenized equation, through the µ 1 g 2 dW 2 (t) term. The effect of random boundary evolution is thus felt by the homogenized system on the whole domain.
Homogenized macroscopic dynamics for nonlinear microscopic systems
In this section, we derive homogenized macroscopic models for the microscopic system (2.1)-(2.4) when the nonlinearity f is either globally Lipschitz or non-globally Lipschitz.
As Remark 3.4 has pointed out, if f is a globally Lipschitz function of u ǫ then all the estimates in §3 hold. In fact, the corresponding results in §5 on the homogenized model also hold. Indeed, suppose f satisfies f (t, x, 0) = 0 and |f (t, x, u 1 ) − f (t, x, u 2 )| ≤ L|u 1 − u 2 | for any t ∈ R, x ∈ D and u 1 , u 2 ∈ R, with some positive constant L. Since ũ ǫ → ϑu strongly in L 2 (0, T ; L 2 (D)), the Lipschitz property of f (t, x, u) with respect to u gives f (t, x, ũ ǫ (t, x)) → f (t, x, ϑu(t, x)) strongly in L 2 (0, T ; L 2 (D)), so the passage to the limit of §5 still holds for f (t, x, u ǫ ). Then we can obtain the same effective macroscopic system as (5.22), with nonlinearity f = f (t, x, U). For the rest of this section, we consider three types of nonlinear systems with f a non-globally-Lipschitz function of u ǫ . The difficulty lies in passing to the limit ǫ → 0 in the nonlinear term. These three types of nonlinearity are: a polynomial nonlinearity; a nonlinear term that is sublinear; and a nonlinearity that contains a gradient term ∇u ǫ . We look at these nonlinearities case by case, and only highlight the differences with the analysis in §5.
Case 1: Polynomial nonlinearity
First we suppose f is of the polynomial form (6.2), and that p satisfies the condition (6.3). For this case we need the following weak convergence lemma from Lions [33].
Let Q be a bounded region in R × R n . For given functions g ǫ and g in L p (Q) (1 < p < ∞), if |g ǫ | L p (Q) ≤ C for some positive constant C and g ǫ → g almost everywhere in Q, then g ǫ ⇀ g weakly in L p (Q).
Noticing that F ǫ (t, x, z ǫ ) = (f (t, x, u ǫ ), 0) and (F ǫ (t, x, z ǫ ), z ǫ ) X 0 ǫ ≤ 0, estimates analogous to those of Theorem 3.1 can be obtained by the same method as in its proof. Moreover, by the assumption (6.3), |f (t, x, u ǫ )| L 2 (D T ) ≤ C T , which, by the analysis of Theorem 3.3, yields the tightness of the distribution of ũ ǫ . Now we pass to the limit ǫ → 0 in f (t, x, ũ ǫ ). In fact, noticing that ũ ǫ converges strongly to ϑu in L 2 (0, T ; L 2 (D)), by the above weak convergence lemma with g ǫ = f (t, x, ũ ǫ ) and p = 2, f (t, x, ũ ǫ ) converges weakly to f (t, x, ϑu) in L 2 (D T ). Therefore, by the analysis for the linear system in §5, we have the following result.
Theorem 6.1. Assume that (2.5) holds. Let u ǫ be the solution of (2.1)-(2.4) with nonlinear term f given by (6.2). Then for any fixed T > 0, the distribution L(ũ ǫ ) converges weakly to µ in L 2 (0, T ; H) as ǫ ↓ 0, with µ being the distribution of U, which is the solution of the corresponding homogenized effective equation, with the boundary condition U = 0 on ∂D, the initial condition U(0) = u 0 /ϑ and the effective matrix A * = (A * ij ) determined by (5.19). Moreover, the constant coefficient ϑ = |Y * |/|Y | is defined in the beginning of §2 and λ = |∂S|/|Y | is defined in (4.2).
Case 2: Nonlinear term that is sublinear

More generally, we consider f : [0, T ] × D × R → R a measurable function which is continuous in (x, ξ) ∈ D × R for almost all t ∈ [0, T ] and which satisfies the condition (6.5) for t ≥ 0, x ∈ D and ξ 1 , ξ 2 ∈ R. Moreover, we assume that f is sublinear in the sense of (6.6), where g ∈ L ∞ loc [0, ∞). Notice that under the assumptions (6.5) and (6.6), f may not be a Lipschitz function.
By the assumption (6.6) we also have the tightness of the distributions of ũ ǫ , and we can conclude that χ Dǫ f (t, x, ũ ǫ ) two-scale converges to a function denoted by f 0 (t, x, y) ∈ L 2 (D T × Y ). In the following we need to identify f 0 (t, x, y).
Then, by an analysis similar to that for the linear system in §5, we have the following homogenized model. Theorem 6.2. Assume that (2.5) holds. Let u ǫ be the solution of (2.1)-(2.4) with nonlinear term f satisfying (6.5) and (6.6). Then for any fixed T > 0, the distribution L(ũ ǫ ) converges weakly to µ in L 2 (0, T ; H) as ǫ ↓ 0, with µ being the distribution of U, which is the solution of the homogenized effective equation (6.12), with the boundary condition U = 0 on ∂D, the initial condition U(0) = u 0 /ϑ and the effective matrix A * = (A * ij ) determined by (5.19). Moreover, the constant coefficient ϑ = |Y * |/|Y | is defined in the beginning of §2 and λ = |∂S|/|Y | is defined in (4.2).
By (6.14), the coercivity (3.4) of a ǫ (·, ·) and the Cauchy inequality, integrating (6.15) with respect to t yields an estimate in which Λ 1 is a positive constant depending on ᾱ. Then by the Gronwall lemma we see that (3.9) and (3.10) hold. Moreover, the fact that |h(t, x, u) · ∇u| L 2 ≤ C 0 |z ǫ | X 1 ǫ , together with the Hölder inequality, yields the required bound involving ∫ 0 t A ǫ z ǫ (s) ds and P, where P is defined in Theorem 3.3. Then by the same discussion as in Theorem 3.3, we have the tightness of the distributions of ũ ǫ . Now we pass to the limit ǫ → 0 in the nonlinear term f (t, x, u ǫ , ∇u ǫ ). In fact, we restrict the system to (Ω δ , F δ , P δ ). By the assumption (2) on h and the fact that ũ ǫ converges strongly to ϑu in L 2 (D T ), we have lim ǫ→0 ∫ D T |h(t, x, ũ ǫ (t, x)) − h(t, x, ϑu(t, x))| 2 dx dt = 0.
Remark 6.4. All the results in this paper hold when ∆ is replaced by a more general strong elliptic operator div(A ǫ ∇u), where A ǫ is Y −periodic and satisfies the strong ellipticity condition. | 2014-10-01T00:00:00.000Z | 2007-03-19T00:00:00.000 | {
"year": 2007,
"sha1": "0547d1ca11e5b88d4c096b348b2d5a6c537706a6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/math/0703537",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1a61fe762c073b2c9019b95b920885b5bfac28fd",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
205294555 | pes2o/s2orc | v3-fos-license | Regulation of Microtubule Dynamics through Phosphorylation on Stathmin by Epstein-Barr Virus Kinase BGLF4*
Stathmin is an important microtubule (MT)-destabilizing protein, and its activity is differently attenuated by phosphorylation at one or more of its four phosphorylatable serine residues (Ser-16, Ser-25, Ser-38, and Ser-63). This phosphorylation of stathmin plays important roles in mitotic spindle formation. We observed increasing levels of phosphorylated stathmin in Epstein-Barr virus (EBV)-harboring lymphoblastoid cell lines (LCLs) and nasopharyngeal carcinoma (NPC) cell lines during the EBV lytic cycle. These suggest that EBV lytic products may be involved in the regulation of stathmin phosphorylation. BGLF4 is an EBV-encoded kinase and has similar kinase activity to cdc2, an important kinase that phosphorylates serine residues 25 and 38 of stathmin during mitosis. Using an siRNA approach, we demonstrated that BGLF4 contributes to the phosphorylation of stathmin in EBV-harboring NPC. Moreover, we confirmed that BGLF4 interacts with and phosphorylates stathmin using an in vitro kinase assay and an in vivo two-dimensional electrophoresis assay. Interestingly, unlike cdc2, BGLF4 was shown to phosphorylate non-proline directed serine residues of stathmin (Ser-16) and it mediated phosphorylation of stathmin predominantly at serines 16, 25, and 38, indicating that BGLF4 can down-regulate the activity of stathmin. Finally, we demonstrated that the pattern of MT organization was changed in BGLF4-expressing cells, possibly through phosphorylation of stathmin. In conclusion, we have shown that a viral Ser/Thr kinase can directly modulate the activity of stathmin and this contributes to alteration of cellular MT dynamics and then may modulate the associated cellular processes.
The microtubule (MT) 2 cytoskeleton is composed of tubulin heterodimers and is involved in a variety of cellular processes, such as maintaining cell polarity, supporting cell structures, segregation of chromosomes during mitosis, vesicular transportation, and cell motility. MTs undergo rapid transition between polymerized and depolymerized states and this is termed dynamic instability (1)(2)(3). Polymerization of MT is regulated by the MT-stabilizing proteins, a classic superfamily of microtubule-associated proteins (MAPs) (4,5), and depolymerization of MT is regulated by two different families: the KinI family of kinesin-related-proteins (a family of MT motors) (6,7) and the MT-destabilizing proteins (a family of oncoprotein 18/stathmin proteins) (4,8). Stathmin is a ubiquitous cytosolic protein and is highly conserved in vertebrates (9,10). It acts by promoting MT catastrophe (8) or by sequestering free tubulin heterodimers (11). Moreover, all above actions lead to depolymerization of MT. Phosphorylation of one to four of its N-terminal phosphorylatable residues has negative effects on stathmin (12)(13)(14)(15) and contributes to correct assembly of the mitotic spindle and cell cycle progression during mitosis (13, 16 -19).
It is known that some viruses rely on the host cell cytoskeleton for transport to the site of replication or for egress of progeny virions to the extracellular environment, indicating that this trafficking is essential for their infection (20,21). Accordingly, a variety of viruses have been found to use diverse approaches to regulate cellular MTs or actin cytoskeletons (21,22). For example, adenovirus has been found to activate PKA and P38/MAPK pathways to boost MT-mediated targeting of the virus to the nucleus, and this can enhance virus infection (23). Additionally, vaccinia virus has been shown to induce the formation of actin tails, and viral particles are propelled on the tips of the actin tails (24). Moreover, several viruses encode MAP-like proteins which possess MT-stabilizing activity (25,26). Apparently, various viruses exploit different strategies to target and modulate the cellular MT network during infection. However, the approaches used by Epstein-Barr virus (EBV) to regulate MT dynamics are obscure.
EBV, a human gammaherpesvirus, infects over 95% of the human population (27,28). The infection is associated with many types of malignancies (27,28). Recently, increased levels of stathmin expression have been reported in EBV-infected primary B cells and in EBV-transformed lymphoblastoid cell lines (LCLs) (29). Moreover, an EBV-encoded latent protein, LMP1, was shown to increase phosphorylation of stathmin in EBV-related nasopharyngeal carcinoma (NPC) (30,31). However, the precise role of stathmin in EBV infection remains unclear. BGLF4 is the only kinase expressed by EBV during the lytic stage and it can phosphorylate a spectrum of viral and cellular factors (32)(33)(34). In this study, we investigated the expression and roles of stathmin in EBV-positive cell lines. We demonstrate that BGLF4, an EBV-encoded Ser/Thr kinase, can phosphorylate and then attenuate the activity of stathmin to alter MT dynamics.
EXPERIMENTAL PROCEDURES
Virus Harvest and Infection-EBV-positive B95.8 cells were cultured in complete RPMI 1640 medium and treated with 40 ng/ml tetradecanoyl phorbol acetate and 3 mM sodium butyrate for 72 h. Cell suspensions were centrifuged at 8,000 rpm for 30 min at 4°C to remove the cell debris. The cell-free supernatant was ultracentrifuged at 15,000 rpm for 90 min at 4°C. The pellet was resuspended in a volume of 1 ml of complete medium per 100 ml starting culture supernatant. The resuspended virus was then filtered through a 0.22-µm filter and stored at −80°C until use.
The immortalized T lymphocyte cell line Jurkat was maintained in RPMI 1640 supplemented with 8% fetal calf serum. The HeLa cell line is derived from human cervical carcinoma, and cells are grown at 37°C with 5% CO 2 in Dulbecco's modified Eagle's medium (HyClone) supplemented with 8% fetal calf serum, 100 units/ml penicillin, and 100 µg/ml streptomycin (Invitrogen).
Inducible human embryonic kidney 293 (HEK293) T-REx cells for BGLF4, BGLF4-KD, and vector expression were grown in Dulbecco's modified Eagle's medium with 8% tetracycline-free serum. These cells were derived by cloning the BGLF4 and BGLF4-KD open reading frames in pLenti4-CPO/V5/His (Invitrogen). The expression plasmids were transfected into HEK293 T-REx cells with Lipofectamine 2000 (Invitrogen) and selected in growth medium with 400 µg/ml zeocin and 5 µg/ml blasticidin, as reported previously (35). To induce protein expression, these cells were incubated with 10 ng/ml doxycycline (Invitrogen) for the times indicated in the figures. Expression lysates were collected in radioimmune precipitation assay buffer (RIPA, 50 mM Tris/HCl, pH 7.5, 150 mM NaCl, 1% Nonidet P-40, 0.5% sodium deoxycholate, 0.1% SDS) supplemented with a complete protease inhibitor mixture (Roche Applied Science) and then subjected to immunoblotting. NA cells are an EBV-harboring NPC cell line infected with a recombinant Akata-EBV strain carrying a neomycin-resistance gene (36). NA cells were seeded at 1 × 10 5 cells per well in 12-well plates and then co-transfected with pSG5-Rta (37) and either si-BGLF4-1, si-BGLF4-2, or control siRNA fragments as described below. Cell lysates were harvested 36 h post-transfection. HeLa cells were seeded at a density of 3 × 10 5 cells per well in 6-well plates and transfected with 0.1, 0.3, 1, or 2.5 µg of pSG5-BGLF4 or pSG5-BGLF4-KD individually using Lipofectamine 2000. All cell lysates were harvested in RIPA buffer 24 h post-transfection.
Co-immunoprecipitation-HeLa cells were seeded at a density of 90% in 10-cm Petri dishes and co-transfected with 7 µg of pSG5-BGLF4 and 7 µg of pSG5-stathmin-Flag with Lipofectamine 2000. At 18 h post-transfection, cell lysates were collected in Nonidet P-40 lysis buffer (50 mM Tris, pH 8.0, 150 mM NaCl, 2 mM EDTA, and 1 mM Na 3 VO 4 ). Cell lysates were centrifuged at 16,000 × g for 20 min at 4°C, and then the supernatant was precleared with 200 µl of 20% protein G-Sepharose beads (Amersham Biosciences) with rotation for 1 h at 4°C. After centrifugation, the precleared supernatant was incubated with 3 µg of anti-BGLF4 (40), anti-Flag M2 mAb (Sigma), or irrelevant control antibodies (mIgG) at 4°C for 1 h, and then 250 µl of protein G-Sepharose beads were added to precipitate the immunocomplexes with rotation for 1 h at 4°C. The recovered immunocomplexes were washed extensively in cold phosphate-buffered saline, resolved with sodium dodecyl sulfate (SDS) sample buffer, and subjected to immunoblotting analysis. HeLa cells were also transfected with 10 µg of pSG5-BGLF4, and lysates were subjected to co-immunoprecipitation as described above. The anti-stathmin polyclonal Ab (Santa Cruz Biotechnology) was used to precipitate the endogenous stathmin.
ImageQuant software was used to quantify the expression levels of the proteins detected by immunoblotting. Briefly, the value of the band density was quantified and then normalized to its corresponding internal control. The results were then expressed relative to the vector transfectant or to the cell line of interest.
Two-dimensional Gel Electrophoresis-BGLF4, BGLF4-KD, and vector control inducible HEK 293 T-REx cells were grown in Dulbecco's modified Eagle's medium with 8% tetracycline-free serum, and protein expression was induced by incubation with 10 ng/ml doxycycline for 24 h. Total lysates from BGLF4, BGLF4-KD, or vector control cells were collected in two-dimensional lysis buffer (40 mM Tris, 7 M urea, 2 M thiourea, 4% CHAPS, pH 9) and subjected to two-dimensional PAGE according to the manufacturer's instructions. Briefly, isoelectric focusing was carried out using pH 3-10 carrier ampholytes, and proteins were then separated in 12.5% polyacrylamide gels. After electrophoresis, proteins were transferred to polyvinylidene difluoride membranes (Millipore) and probed with anti-stathmin polyclonal Ab or anti-phosphorylated Ser-16 stathmin polyclonal Ab. After hybridization with secondary antibodies, the membranes were developed using an enhanced chemiluminescence kit (Amersham Biosciences).
Statistical Analysis-Statistical analyses employed the correlation coefficient, Student's t test, and the chi-squared test on 2 × 2 contingency tables, using Microsoft EXCEL. Briefly, the expression levels of BGLF4 and of phosphorylated and nonphosphorylated stathmin in twenty LCLs were quantified as described above. The correlation coefficients between levels of BGLF4 and stathmin (either phosphorylated or nonphosphorylated) were calculated separately. Student's t test was used to compare the treated samples with the corresponding controls.
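The same calculations can be reproduced outside of Excel; the short R sketch below is purely illustrative, with synthetic placeholder densities, counts, and group labels in place of the measured values.

```r
# Synthetic placeholder densitometry values for twenty LCLs (illustration only).
set.seed(5)
bglf4      <- runif(20)                          # normalized BGLF4 band density
p_stathmin <- 0.6 * bglf4 + rnorm(20, 0, 0.2)    # phosphorylated stathmin
stathmin   <- rnorm(20, 0.5, 0.2)                # nonphosphorylated stathmin

cor(bglf4, p_stathmin)    # correlation coefficient, BGLF4 vs phosphorylated stathmin
cor(bglf4, stathmin)      # correlation coefficient, BGLF4 vs nonphosphorylated stathmin

# Student's t test comparing treated samples with their corresponding controls.
treated <- rnorm(6, 1.5, 0.3); control <- rnorm(6, 1.0, 0.3)
t.test(treated, control, var.equal = TRUE)

# 2 x 2 contingency table analysed with the chi-squared test (placeholder counts and labels).
counts <- matrix(c(8, 2, 3, 7), nrow = 2,
                 dimnames = list(c("BGLF4 positive", "BGLF4 negative"),
                                 c("high phospho-stathmin", "low phospho-stathmin")))
chisq.test(counts)
```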
RESULTS
Elevated Stathmin Phosphorylation Is Associated with BGLF4 Expression in the EBV Lytic Cycle-To test whether any EBV products affect the activity of stathmin, we compared the results from LCL cells with and without spontaneous lytic cycle progression. Six representative LCLs are shown in Fig. 1A. All LCL lines expressed the EBV latent protein EBNA2. In addition, expression of the EBV lytic kinase BGLF4 was clearly observed in Zta-expressing LCLs, including P1, P9, P13, and P14, but not in P15 or P7. Based on the relative density of total phosphorylated stathmin, levels of phosphorylated stathmin appeared to correlate with viral lytic protein expression, being higher than in the cell lines with low or no lytic protein expression (P15 and P7 cells). This suggests that stathmin phosphorylation might be regulated by EBV lytic products. BGLF4 is the only EBV-encoded serine/threonine kinase and has a biological function similar to that of the cellular cdc2 kinase (33,34), which is a protein kinase for stathmin (46,47). Thus, the correlation coefficient was calculated between levels of BGLF4 and of phosphorylated and nonphosphorylated stathmin from twenty LCLs in total (data not shown). Levels of BGLF4 and phosphorylated stathmin are positively correlated (correlation coefficient 0.59), whereas levels of BGLF4 and nonphosphorylated stathmin are not correlated (correlation coefficient 0.12).
To determine whether stathmin is hyper-phosphorylated and can act as a substrate of BGLF4, an IFA was performed in P1, P7, and P13 cells. In Fig. 1B, increased fluorescence intensity of phosphorylated stathmin was observed in BGLF4-positive cells compared with BGLF4-negative cells. To verify the importance of BGLF4 for stathmin phosphorylation, especially in reactivated EBV-positive cells, BGLF4 expression was knocked down in Rta-induced EBV-harboring NA cells. In Fig. 1C, the phosphorylation of stathmin was increased in NA cells with EBV lytic cycle progression (lanes 1 and 2). However, the phosphorylation was abolished when BGLF4 expression was knocked down by siRNA (Fig. 1C, lanes 3 and 4). These data indicate that BGLF4 could be the protein responsible for stathmin phosphorylation during the viral lytic cycle.
BGLF4 Induces Stathmin Phosphorylation in Vivo-As mentioned above, BGLF4 contributes to stathmin phosphorylation during the EBV lytic cycle. Next, we used doxycycline-induced BGLF4-expressing 293 cells to confirm the effects of BGLF4 on phosphorylation of stathmin. In Fig. 2A, increasing phosphorylation of stathmin was clearly observed following BGLF4 expression at 18, 24, 30, and 48 h, compared with the non-induction controls. In contrast, expression of the BGLF4 kinase-dead (KD) protein, which has a point mutation in the kinase domain of BGLF4, did not increase the phosphorylation of stathmin (Fig. 2B). Thus, this experiment demonstrated that BGLF4 expression is associated with stathmin phosphorylation. Moreover, comparing non-doxycycline-treated cells across all the time points for the level of phosphorylated stathmin (Fig. 2, A and B), we found a cell cycle-dependent increase in stathmin phosphorylation. This finding is reasonable because stathmin is known to be cell cycle-regulated (16,46,48). Further, as shown in Fig. 2C, phosphorylation of stathmin increased with BGLF4 expression in a dose-dependent manner. Again, BGLF4-KD expression did not affect the phosphorylation of stathmin (Fig. 2D). Taken together, these data indicate that the EBV-encoded BGLF4 kinase can mediate stathmin phosphorylation in vivo.
BGLF4 Directly Phosphorylates Stathmin in Vitro-Based on the kinase-dead data above and the fact that several cellular proteins are substrates of BGLF4, we assumed that the kinase activity of BGLF4 is required for phosphorylation of stathmin (33,34). An in vitro kinase assay was performed to determine whether BGLF4 can phosphorylate stathmin directly. GST-tagged stathmin and GST control protein were purified from E. coli as substrates for a BGLF4 kinase activity assay. In Fig. 3A, we clearly demonstrated that BGLF4 can directly phosphorylate GST-stathmin, but not the GST control protein. BGLF4 also phosphorylated a common substrate, the histone H1 protein, which served as a positive control (Fig. 3A). It is well documented that four serine residues of stathmin can be phosphorylated: residues 16, 25, 38, and 63 (Fig. 3B) (12-15). Therefore, to further elucidate which serine residues of stathmin might be targeted by BGLF4, purified GST-stathmin serine mutants were used in assays similar to that described above (Fig. 3C). BGLF4 was shown to phosphorylate stathmin on serine residues 16, 25, and 38, but only weakly on serine residue 63. Thus, BGLF4 can indeed phosphorylate stathmin directly in vitro.
BGLF4 Mediates Stathmin Phosphorylation at Serines 16, 25, and 38 in Vivo-To determine whether BGLF4 can phosphorylate stathmin in vivo, we first asked whether BGLF4 can interact physically with stathmin in cells (Fig. 4A). Co-immunoprecipitation assays showed that BGLF4 can be detected in immunoprecipitates of Flag-tagged stathmin (lane 3, upper panel). In the bottom panel of lane 2, stathmin was also detected in immunocomplexes precipitated by the BGLF4 antibody. These results from reciprocal co-IP Western blotting demonstrated that BGLF4 interacts with stathmin in cells. Moreover, a co-immunoprecipitation assay was performed to examine the relationship between BGLF4 and endogenous stathmin (Fig. 4A, right panel). BGLF4 can be detected in immunocomplexes precipitated by the stathmin antibody. Thus, BGLF4 can indeed interact with endogenous stathmin.
Previous studies have shown that various kinases can phosphorylate stathmin at different serine residues (49), leading to different degrees of activity. For example, cdc2 phosphorylates serine residues 25 and 38 (46,47), and calcium-calmodulin-dependent protein kinase only phosphorylates serine 16 (50). Thus, it is important to know which serine residue(s) of stathmin are phosphorylated by BGLF4 in vivo. In this study, the phosphorylated forms of stathmin were detected using an antibody against stathmin phosphorylated at Ser-16 (Figs. 1 and 2). This indicates that BGLF4 can at least target Ser-16 of stathmin in vivo. Moreover, the phosphorylation patterns in our Fig. 2 are similar to those of stathmins resolved by one-dimensional PAGE and known to be phosphorylated at serines 16, 25, 38 ± 63 (51). Therefore, our data suggest that expression of BGLF4 leads to phosphorylation of stathmin at serines 16, 25, 38 ± 63 in vivo. To confirm this, a two-dimensional PAGE analysis was used to resolve the various molecular forms of phosphorylated or nonphosphorylated stathmin in cells expressing BGLF4 (Fig. 4B). The phosphorylation status of the different isoforms is indicated under each panel, as described previously (51). The patterns of phosphorylated isoforms were observed to be similar in vector- and BGLF4-KD-expressing cells (Fig. 4B, upper panels; dots 1-7). However, in cells expressing BGLF4, relatively low levels of low-phosphorylated isoforms of stathmin (upper panel; dots 1-3) and relatively high levels of hyperphosphorylated forms of stathmin (upper panel; dots 4 and 5) were observed. Furthermore, the ratios of the relative densities of hyperphosphorylated (dots 4 and 5) to hypophosphorylated (dots 1, 2, 3, 6, and 7) stathmin isoforms were calculated and are shown in Fig. 4C. This indicates that BGLF4 expression indeed led to an increase in at least two phosphorylated isoforms of stathmin: the isoform that is phosphorylated on serine residues 16, 25, and 38 (Fig. 4B, P3; dot 4) and the isoform that is possibly phosphorylated at all phosphorylatable serine residues (dot 5), recognized by two-dimensional PAGE as described previously (51). Therefore, BGLF4 expression can indeed mediate phosphorylation at residues 16, 25, 38 ± 63 of stathmin in vivo, which is consistent with our in vitro kinase assay (Fig. 3). On the other hand, comparing the upper and lower panels, the anti-stathmin antibody is only sensitive to the nonphosphorylated or low-phosphorylated forms of stathmin, as also shown in Figs. 1 and 2.
FIGURE 3. Stathmin is phosphorylated by BGLF4 in vitro. A, recombinant GST-stathmin, GST control protein, and a positive control protein, histone H1, were used as substrates. HeLa cells were transfected with BGLF4 or BGLF4-kinase-dead expression plasmids, and BGLF4 (K) and its kinase-dead mutant (KD) were obtained by immunoprecipitation with anti-BGLF4 antibody from total cell lysates 24 h post-transfection. IP kinase assays were carried out for 30 min as described under "Experimental Procedures." The amounts of substrates and kinase loaded are shown. The experiment was performed twice, and a representative example is shown. BGLF4 was observed to phosphorylate GST-stathmin and histone H1 protein but not GST control protein. B, positions of serine phosphorylation sites in wild-type stathmin and purified stathmin mutants are shown. Phosphorylatable residues of GST-tagged stathmin mutants, which can only be phosphorylated at one of the four target residues in the N terminus of stathmin, such as SAAA (only phosphorylatable on residue 16), ASAA (only phosphorylatable on residue 25), AASA (only phosphorylatable on residue 38), and AAAS (only phosphorylatable on residue 63), are indicated. C, substrates of GST-tagged stathmin mutants in B were used in the same reaction as described in A. BGLF4 can be seen to phosphorylate GST-stathmin mutants mainly at residues 16, 25, and 38. The experiment was performed twice, and a representative example is shown.
FIGURE 4 (legend, in part). Lysates were subjected to two-dimensional PAGE and probed with anti-stathmin or anti-phosphorylated Ser-16 stathmin antibodies. N, nonphosphorylated form of stathmin; P1, stathmin isoforms phosphorylated at one phosphorylatable residue (dot 1); P2, stathmin isoforms phosphorylated at two phosphorylatable residues (dots 2 and 3); P3, stathmin isoforms phosphorylated at three phosphorylatable residues (dots 4 and 5). C, the ratio of the relative density of hyperphosphorylated stathmin (dots 4 and 5) to hypophosphorylated stathmin isoforms (dots 1, 2, 3, 6, and 7) from the two experiments is plotted and statistically analyzed. *, significant difference to vector control (p < 0.05).
Microtubule Networks Are Reorganized in Cells Expressing BGLF4-It is shown above that BGLF4 expression increases the levels of phosphorylated isoforms of stathmin in cells, suggesting that BGLF4 expression can negate the depolymerization activity of stathmin. Thus, we hypothesized that the balance between MT destabilizers and stabilizers in cells would be disturbed by BGLF4 expression. To test this, HeLa cells were transfected with various amounts of plasmids expressing the control vector, BGLF4, BGLF4-KD, stathmin-4E (phospho-mimic stathmin), stathmin-4A (nonphosphorylatable stathmin), or stathmin, and the MT networks in the transfected cells were then examined by immunofluorescence assays. As shown in Fig. 5G, the percentages of cells with altered MT networks were calculated for the various transfections. As expected, MT arrays appeared normally organized in most of the vector- (Fig. 5A) and BGLF4-KD-expressing cells (Fig. 5C). Of note, increasingly disorganized and diffuse MT networks were present in BGLF4-expressing (Fig. 5B) and phospho-mimic stathmin-4E cells (Fig. 5D). Furthermore, a disorganized pattern was seen in 70-90% of cells expressing stathmin (Fig. 5E) or nonphosphorylatable stathmin-4A (Fig. 5F). These data indicate that BGLF4 expression can severely alter the organization of MT networks through phosphorylation of stathmin. However, we cannot exclude the possibility that BGLF4 may also target other factors that contribute to modulating the MT networks.
The above data demonstrate that BGLF4 expression can reorganize the MT networks through phosphorylation of stathmin. To confirm that BGLF4 plays a role in MT turnover, another approach was applied to measure the levels of assembled tubulin in BGLF4-expressing cells. We found a slightly higher content of polymerized MT in BGLF4-expressing cells in comparison with vector control cells (data not shown). This also supports a role for BGLF4 in the regulation of MT dynamics.
Increasingly Phosphorylated Forms of Stathmin Are Observed in Cells Transfected with UL13 Homologues-Conserved herpesviral protein kinases (CHPKs) are a group of serine/threonine kinases that are conserved in all Herpesviridae. Among the CHPKs, UL13 is encoded by HSV, and the similar CHPKs encoded by other herpesviruses are called UL13 homologues (52,53). Members of this group likely play similar roles in infection by targeting common host substrates. Thus, whether these homologues may interact with stathmin was also tested here. In Fig. 6, expression of BGLF4 (positive control), UL13 (HSV-1), UL97 (HCMV), and ORF36 (KSHV), but not mORF36 (MHV68), led to increased phosphorylation of stathmin in cells. Therefore, these data suggest that UL13, UL97, and ORF36 could phosphorylate the same host substrate, stathmin, and that mORF36 of a murine herpesvirus may not target human cellular stathmin.
DISCUSSION
Stathmin is an important microtubule regulator in cells, and phosphorylation of stathmin is required for correct cell cycle progression during mitosis (13, 16-19). BGLF4 is the only EBV-encoded viral kinase and is expressed as an abundant early lytic protein during EBV reactivation (40,54). It is known that this viral kinase can target many cellular factors (32,33). In this study, we set out to determine the role of the viral kinase in regulating cellular MT dynamics. Herein, we demonstrate that the EBV BGLF4 kinase is responsible for increased phosphorylation of stathmin in reactivated EBV-positive cells (Fig. 1). Furthermore, the results from an in vitro kinase assay and in vivo two-dimensional electrophoresis revealed that BGLF4 phosphorylates stathmin mainly on serine residues 16, 25, and 38 (Figs. 3 and 4). The MT-destabilizing activity of stathmin is progressively reduced by phosphorylation of one to four of its target residues, and phosphorylation of serines 16, 25, and 38 of stathmin is sufficient for inactivation of its depolymerization activity (12,50,55). Thus, our findings indicate that BGLF4 expression can inactivate stathmin, and this also hints that the balance between MT stabilizers and MT destabilizers within cells may be disturbed in cells expressing BGLF4. Indeed, MT networks, as well as the content of polymerized MT, were found to be altered in BGLF4-expressing cells (Figs. 5 and 6). Thus, it is assumed here that BGLF4 expression may phosphorylate stathmin in a localized fashion in vivo, leading to reduction or inhibition of the activity of stathmin in a localized environment (56-58). Importantly, many viruses have been reported to use diverse approaches to regulate MT networks; however, to our knowledge, this is the first report of a viral product that can directly regulate the activity of stathmin.
In this study, the activity of stathmin has been shown to be modulated by EBV during its lytic cycle. In addition, expression of stathmin has been reported previously to be up-regulated in EBV-infected primary B cells and in EBV-transformed LCLs (29). Moreover, an EBV-encoded latent protein, LMP1, has been shown to mediate phosphorylation of stathmin in EBV-related NPC by enhancing cdc2 kinase activity (30). These previous reports indicate that stathmin is regulated during EBV infection, but the purpose of this regulation is unclear. Herein, we further demonstrate that cellular MT dynamics are modulated by BGLF4-mediated phosphorylation of stathmin and, in particular, that MT organization is drastically disorganized in most BGLF4-expressing cells. Thus, EBV infection may indirectly affect some MT-associated cellular processes. Cellular cdc2 is known as a proline-directed kinase. Likewise, BGLF4 has been documented as a cdc2-mimicking kinase because BGLF4 and cdc2 can target the same phosphorylatable site in EF-1δ (59). Consistently, it is shown here that BGLF4 can phosphorylate sites 25 and 38 of stathmin (Figs. 3 and 4), which are the target sites for cdc2 in an in vitro kinase assay (48). Indeed, the phosphorylation sequence for sites 25 and 38 of stathmin is [serine-proline], and the serine is phosphorylatable by both BGLF4 and cdc2. Thus, our data confirm that BGLF4 is a cdc2-mimicking kinase and can target proline-directed serine residues. Of note, we found that BGLF4 can also phosphorylate site 16 of stathmin in vitro and in vivo (Figs. 3 and 4), which is not a typical proline-directed sequence in that the phosphorylatable serine lies in the sequence [arginine-alanine-serine-glycine]. In fact, a recent study also indicates that BGLF4 can target a non-proline-directed sequence on the EBV transactivator BZLF1 (60). Thus, this suggests that BGLF4 can recognize not only proline-directed but also non-proline-directed residues on its substrates. Also, another report demonstrated that BGLF4 can phosphorylate more residues than cdc2 on the same lamin A protein (39). Consistently, a study using a protein array to identify systematically potential BGLF4 substrates has shown that about 21 of 60 viral proteins are phosphorylated by BGLF4, but approximately half of these proteins can be phosphorylated by cdc2 (32). Taken together, these data clearly indicate that BGLF4 can target more residues or more proteins than the cellular cdc2 kinase. Thus, it is suggested that the ability of BGLF4 to recognize a broader range of residues than cdc2 enables EBV to modulate its cellular or viral targets more easily.
FIGURE 5 (legend, in part). Statistical analysis by chi-squared test on a 2 × 2 contingency table; *, significant difference to vector control (p < 0.05).
FIGURE 6. UL13 homologues induce phosphorylation of stathmin in vivo. HeLa cells were transiently transfected with vector and Flag-tagged UL13 (HSV-1), UL97 (HCMV), mORF36 (MHV68), or ORF36 (KSHV)-expressing plasmids. A BGLF4 expression plasmid was transfected as the positive control. Cell lysates were harvested 24 h post-transfection and subjected to immunoblotting analysis. The membrane was probed with specific antibodies against BGLF4, Flag, phosphoserine-16 stathmin, and GAPDH. The blot was then stripped and re-probed with anti-C-terminal stathmin Ab. The band densities of nonphosphorylated or total phosphorylated stathmin of each transfection relative to the vector control were quantified and normalized to the intensity of the corresponding internal control using ImageQuant software. Relative densities are indicated below the corresponding panels, and relative densities of phosphorylated stathmin from three independent experiments were statistically analyzed and plotted. *, significant difference to vector control (p < 0.05); **, significant difference to vector control (p < 0.01).
CHPKs are a group of kinases conserved in all Herpesviridae, and these kinases are believed to play a conserved role in viral infection by targeting common cellular and viral substrates (33). Therefore, whether these homologues also modulate the activity of stathmin was investigated in this study. Our data demonstrated that expression of UL13, UL97, and ORF36, but not mORF36, led to elevated levels of phosphorylated stathmin in cells (Fig. 6). This finding suggests that most of the CHPKs tested may target and modulate the activity of stathmin in cells, and it also implies that these CHPKs may play conserved roles in the regulation of MT networks during herpesvirus infection. These findings are consistent with a previous report that expression of UL13, UL97, ORF36, and BGLF4, but not mORF36, can cause disassembly of the nuclear lamina (39).
In conclusion, this study has shown that stathmin is phosphorylated in EBV-positive cells during the lytic cycle, implying reduced activity of stathmin when EBV-positive cells are reactivated. We further provide evidence that an EBV kinase, BGLF4, is responsible for directly phosphorylating stathmin. Furthermore, BGLF4 expression is shown to alter MT dynamics and to drastically alter MT organization. Thus, this study shows for the first time that a viral infection can exploit and modulate stathmin to alter MT organization. | 2018-04-03T04:34:11.450Z | 2010-01-28T00:00:00.000 | {
"year": 2010,
"sha1": "23e478722ed340940a81278846d74e1b5fc0ffab",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/285/13/10053.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "cc5c6791fa98df655765481f48c3e395b9206a74",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
268596926 | pes2o/s2orc | v3-fos-license | MTCH2 stimulates cellular proliferation and cycles via PI3K/Akt pathway in breast cancer
The MTCH2 protein is located on the mitochondrial outer membrane and regulates mitochondria-related cell death. This study set out to investigate the role of MTCH2 in the underlying pathophysiological mechanisms of breast cancer (BC). MTCH2 expression levels in BC were analyzed using bioinformatics prior to verification by cell lines in vitro. Experiments of over-expression and siRNA-mediated knockdown of MTCH2 were conducted to assess its biological functions, including its effects on cellular proliferation and cycle progression. Xenografts were utilised for in vivo study and signaling pathway alterations were examined to identify the mechanisms driven by MTCH2 in BC proliferation and cell-cycle regulation. MTCH2 was up-regulated in BC and correlated with patients’ overall survival. Over-expression of MTCH2 promoted cellular proliferation and cycle progression, while silencing MTCH2 had the opposite effect. Xenograft experiments were utilised to confirm the in vitro cellular findings and it was identified that the PI3K/Akt signaling pathway was activated by MTCH2 over-expression and suppressed by its silencing. Moreover, the activation of IGF-1R rescued cellular growth and cycle arrest induced by MTCH2-silencing. Overall, this study reveals that expression of MTCH2 in BC is upregulated and potentiates cellular proliferation and cycle progression via the PI3K/Akt pathway.
Mitochondrial carrier homolog 2 (MTCH2) has been reported to be associated with several malignant tumors, including BC. MTCH2 is also known as Met-induced mitochondrial protein (MIMP) and inhibits Met-HGF/SF-induced scattering and tumorigenicity by altering Met-HGF/SF signaling pathways [4]. In malignant glioma, inhibition of MTCH2 suppresses tumor invasion and enhances sensitivity to temozolomide [5]. Suppression of MTCH2 was identified to provoke ErbB2-driven BC [6]. However, the role and underlying mechanisms of MTCH2 in BC remain largely unknown.
The PI3K/Akt pathway plays a pivotal role in the regulation of cell survival and proliferation [7]. The inhibition of Bid expression by Akt results in resistance to apoptosis in ovarian cancer cells [8]. Meanwhile, MTCH2/MIMP is a major facilitator of tBID recruitment to mitochondria. Hence, the PI3K/Akt pathway may be associated with expression of MTCH2 [9].
In this investigation, the expression of MTCH2 in BC was observed to be upregulated. Furthermore, we identified that MTCH2 potentiates cellular proliferation and cycle progression via the PI3K/Akt pathway.
Data mining in GEO and TCGA datasets
Four relevant gene expression datasets were retrieved from GEO (https://www.ncbi.nlm.nih.gov/geo): GSE7377, GSE54002, GSE45827 and GSE26459. RNA-seq data from the TCGA-BRCA project were extracted from the Genomic Data Commons (https://portal.gdc.cancer.gov/). For genes represented by multiple probes, the expression level was taken as the maximum. Standardization according to percentiles was conducted if necessary. Enrichment analysis of gene ontology (GO) and KEGG analysis of DEGs were performed using the clusterProfiler R package (v 3.12.0). Terms with P < 0.01, minimum count > 3, and enrichment factor > 1.5 were assigned as statistically significant [10]. Gene set enrichment analysis (GSEA) was employed to compare high and low groups, cut at the median expression level. The dataset used for GSEA is TCGA-BRCA. The analysis was processed by the clusterProfiler R package [10], with c2.cp.v7.0.symbols of MSigDB Collections being used as the reference gene sets [11].
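The enrichment analyses above were run in R with clusterProfiler; purely as an illustration, the Python/pandas sketch below shows the two generic steps described in this paragraph, splitting samples into high/low groups at the median expression level and filtering enrichment terms by the stated thresholds. All column names and values are hypothetical, not the actual TCGA-BRCA data or clusterProfiler output.

```python
import pandas as pd

# Hypothetical expression table: one row per sample, with an 'MTCH2' column
expr = pd.DataFrame({"MTCH2": [5.2, 7.9, 6.1, 8.4, 4.8, 7.1]},
                    index=[f"sample{i}" for i in range(6)])

# Split samples into "high" and "low" groups at the median expression (as for GSEA)
median = expr["MTCH2"].median()
expr["group"] = ["high" if x > median else "low" for x in expr["MTCH2"]]

# Hypothetical enrichment results with the columns used for the significance filter
enrich = pd.DataFrame({
    "term": ["PI3K/Akt signaling", "G2M checkpoint", "Unrelated pathway"],
    "pvalue": [0.0005, 0.003, 0.2],
    "count": [12, 8, 2],
    "enrichment_factor": [2.4, 1.8, 1.1],
})

# Keep terms with P < 0.01, count > 3, and enrichment factor > 1.5
significant = enrich[(enrich["pvalue"] < 0.01)
                     & (enrich["count"] > 3)
                     & (enrich["enrichment_factor"] > 1.5)]
print(expr["group"].value_counts())
print(significant["term"].tolist())
```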
Quantitative RT-PCR
After the total RNA of the aforementioned cell lines (the 5 original ones and the 2 lentivirus-infected ones) was isolated using Trizol (Thermo Fisher Scientific, Waltham, US), cDNAs were generated from 2 μg RNA samples using a reverse transcription kit (Genecopoeia, Guangzhou, China). RT-PCR was performed with an initial denaturation at 95 °C for 2 min, followed by 40 cycles of denaturation at 95 °C for 15 s and annealing/extension at 60-68 °C for 30 s. The primers designed to match MTCH2 mRNA for the RT-PCR were forward: CATGTACGTGAAAGTGCTCATCC and reverse: TCACTCTCCTGGTAATGCTGT. Quantifications were normalized using GAPDH as an internal reference, and results were calculated via the 2^(-ΔΔCT) method [12].
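The 2^(-ΔΔCT) calculation referenced above can be written out explicitly; the sketch below is a minimal illustration in Python, and all Ct values in the example are hypothetical placeholders.

```python
# Relative quantification by the 2^(-ΔΔCT) method (Ct values below are hypothetical).
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene in a test sample versus a calibrator sample,
    normalized to a reference gene (e.g. GAPDH) in each sample."""
    delta_ct_sample = ct_target - ct_ref              # ΔCT of the test sample
    delta_ct_calibrator = ct_target_cal - ct_ref_cal  # ΔCT of the calibrator
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-delta_delta_ct)

# Example: MTCH2 in a tumor cell line vs. MCF10A, both normalized to GAPDH
print(relative_expression(ct_target=22.1, ct_ref=18.0,
                          ct_target_cal=24.3, ct_ref_cal=18.2))  # ≈ 4-fold
```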
After three washes with TBST, the membranes were incubated with anti-rabbit IgG (1:1000, CST, US) for 2 h at room temperature. The blot bands were visualized with an enhanced chemiluminescence detection (ECL) kit (Xinsaimei, Soochow, China) and imaged using a chemiluminescent imaging system (Tanon, Shanghai, China). The band intensities were analyzed using ImageJ software. The background grayscale level was set at a default value of 50. The measurements recorded were typically area, mean gray value and integrated density. The rectangular selection tool was used to carefully outline each band of interest. By comparing the intensity of the target protein band to that of the internal control band, the relative expression level of the target protein can be determined.
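As a rough sketch of the densitometry workflow just described, the following Python snippet subtracts a background level from each band density and normalizes the target protein to its loading control in the same lane; all numbers are hypothetical placeholders rather than measured values.

```python
# Background-subtracted, loading-control-normalized band quantification (hypothetical values).
def relative_level(target_density, control_density, background=50.0):
    """Integrated density of the target band over its internal control band,
    after subtracting a fixed background level from both."""
    target = max(target_density - background, 0.0)
    control = max(control_density - background, 0.0)
    return target / control

vector_lane = relative_level(target_density=1200, control_density=5000)
mtch2_oe_lane = relative_level(target_density=3100, control_density=5100)

# Express the over-expression lane relative to the vector-control lane
print(f"MTCH2 relative to vector control: {mtch2_oe_lane / vector_lane:.2f}")
```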
Flow cytometry (FCM) for cellular cycle assay
Cells were digested by trypsin (NCM Biotech, Soochow, China), washed with cold PBS (Gibco, US) and re-suspended. Next, cells were fixed with ethyl alcohol at 4 °C for 24 h. After being washed with cold PBS and centrifuged again, cells were stained with 500 μl propidium iodide staining solution (PI, Beyotime, Shanghai, China) before being scanned by flow cytometry (ACEA Biosciences, Hangzhou, China) within 24 h. Red fluorescence was examined at a 488 nm excitation wavelength in addition to laser scattering.
Xenograft tumor growth model
Twenty-four female BALB/c nude mice at 5-6 weeks of age were acquired from Hunan SJA Laboratory Animal Co., Ltd. (Changsha, China) and bred in laboratory conditions. All animal-relevant procedures were approved by the Animal Experiments Ethics Committee of our University. Nude mice were subcutaneously injected into the right flank with tumor cells (5 × 10^6 cells/ml, 100 μl cell suspension).
Considering that lentiviral transfection itself may have an effect on cell proliferation, it was important to have a control group without lentiviral transfection in the experiments, to enable distinction between the effect of lentiviral transfection itself and the effect of MTCH2 gene expression on cell proliferation. Therefore, mice were assigned into 4 groups of 6 mice per group: MCF-7+LV-NC, MCF-7+LV-MTCH2 OE, MCF-7+LV-shNC, and MCF-7+LV-shMTCH2. Beginning on the 8th day, the length and width of subcutaneous tumors were monitored every 4 days thereafter. Tumor size was calculated as follows: volume (cm³) = 0.5 × length × width².
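The volume formula above can be applied directly to caliper measurements; the short Python sketch below does this for a few hypothetical length/width pairs (in cm).

```python
# Tumor volume from caliper measurements: volume (cm^3) = 0.5 * length * width^2
def tumor_volume(length_cm, width_cm):
    return 0.5 * length_cm * width_cm ** 2

# Hypothetical (length, width) measurements in cm for three tumors
measurements = [(1.2, 0.8), (1.5, 1.0), (0.9, 0.7)]
for length, width in measurements:
    print(f"{length} x {width} cm -> {tumor_volume(length, width):.3f} cm^3")
```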
After all mice were euthanized at day 24 after implantation, tumors were dissected and weighed prior to further evaluation of the expression of MTCH2 by WB. Finally, tumor tissues were fixed, embedded in paraffin and sectioned for immunohistochemical (IHC) staining with antibodies against Ki67, PCNA, CDK1 and 6 (Proteintech, Wuhan, China).
Statistics
The comparison between two groups was performed using a t-test, while comparisons between multiple groups were undertaken by one-way analysis of variance (ANOVA) followed by Dunnett's test. The Wilcoxon rank sum test was used to compare MTCH2 expression levels between subgroup samples. R (v3.6.3) was used for bio-informatics, plots and statistics. P < 0.05 was considered statistically significant.
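Although the analyses here were performed in R, the same tests are available in common Python libraries; the sketch below is only an illustration with made-up measurements, and the Dunnett's test call assumes SciPy 1.11 or later.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(1.0, 0.1, 6)   # hypothetical normalized measurements
group_a = rng.normal(1.4, 0.1, 6)
group_b = rng.normal(0.7, 0.1, 6)

# Two groups: Student's t test
print(stats.ttest_ind(group_a, control))

# Multiple groups: one-way ANOVA, then Dunnett's test of each group against the control
print(stats.f_oneway(control, group_a, group_b))
print(stats.dunnett(group_a, group_b, control=control))   # requires SciPy >= 1.11

# Wilcoxon rank sum test between two subgroups
print(stats.ranksums(group_a, group_b))
```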
MTCH2 expression is upregulated and related to survival in BC
According to GSE7377, GSE54002, GSE45827 and TCGA-BRCA, the expression levels of MTCH2 were up-regulated in hyperplastic enlarged ducts and tumorous tissues (P < 0.001, Fig. 1A-D). Moreover, MTCH2 expression levels were higher in tamoxifen-resistant cell lines than in their counterparts (P < 0.001, Fig. 1E). Investigating TCGA-BRCA OS data of 1104 BC patients, MTCH2 was observed to be significantly associated with OS in BC patients (HR = 1.1393, P = 0.039, Fig. 1F).
The survival differences by MTCH2 expression in microarray data (https://kmplot.com/) from BC patients were depicted. The survival rate of the MTCH2-high group was significantly lower than that of the low-expression group (HR = 1.621, P < 10^-16, Fig. 1G). Subgroup analyses of the different molecular types were also conducted. The survival rate of the group with high MTCH2 expression was significantly lower than that of the low-expression group in the Luminal A, Luminal B and Her2+ subtypes (HR = 1.52, 1.31, and 1.47; P = 0.00017, 0.0023, and 0.0019; Fig. 1H-J), whereas the difference was not significant in the basal subtype (P = 0.063, Fig. 1K).
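The median-split survival comparison described above can be reproduced on any per-patient table of follow-up time, event status and expression; the sketch below uses the lifelines Python package (assumed to be installed) on entirely hypothetical data, so the numbers have no relation to the TCGA or KM-plotter results.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: follow-up (months), event (1 = death), MTCH2 expression
df = pd.DataFrame({
    "time":  [12, 30, 45, 60, 22, 80, 15, 95],
    "event": [1, 0, 1, 0, 1, 0, 1, 0],
    "mtch2": [8.2, 5.1, 7.9, 4.8, 8.5, 5.0, 9.1, 4.2],
})

high = df[df["mtch2"] > df["mtch2"].median()]
low = df[df["mtch2"] <= df["mtch2"].median()]

kmf = KaplanMeierFitter()
kmf.fit(high["time"], high["event"], label="MTCH2 high")  # fit one curve;
# the low group would be fitted and plotted the same way for a KM comparison

result = logrank_test(high["time"], low["time"], high["event"], low["event"])
print(f"log-rank p = {result.p_value:.3f}")
```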
MTCH2 expression is upregulated in BC cell lines
To investigate the role of MTCH2 in BC, as indicated by the bioinformatic analyses, its expression was detected by qPCR and WB. Compared to the normal breast epithelial cell line MCF10A, the cancerous T-47D line showed the highest level (P < 0.0001), while the level of MTCH2 in MCF-7 cells was not significantly different (P = 0.4625, Fig. 2A-C). Although BT474 and MDA-MB-231 cells also expressed more MTCH2 than MCF10A (P < 0.01 and < 0.05, respectively), T-47D and MCF-7 were chosen for RNAi and overexpression manipulation, respectively. The subsequent silencing and overexpression efficiency was verified via qPCR and WB (Fig. 2D-F).
MTCH2 escalates cellular proliferation and cycle progression in vivo
In vitro cellular experiments were further verified in xenograft models. Mice were divided into 2 paired groups: MCF-7+LV-MTCH2 OE vs MCF-7+LV-NC, and MCF-7+LV-shMTCH2 vs MCF-7+LV-shNC. As illustrated in Fig. 4A&B, all nude mice had observable solid tumors at the injection loci. It was clear that overexpression of MTCH2 promoted tumor growth (Fig. 4C), while its silencing suppressed tumor growth (Fig. 4D); these findings were confirmed by the measurements depicted in Fig. 4E-G (P < 0.001, < 0.001, = 0.0048 and 0.0039, respectively). The size and weight of tumors in the overexpression group were 3.085 and 2.253 times those of the control group, respectively, while the size and weight of tumors in the silencing group were 0.319 and 0.213 times those of the control group, respectively. Moreover, the volume and weight of the tumors in the MCF-7+LV-shNC group were found to be larger than those in the MCF-7+LV-NC group.
For molecular dissection, WB was utilised to confirm the corresponding levels of MTCH2 protein in the over-expression and silencing models (Fig. 4H&I). Furthermore, IHC was applied for analysis of biomarkers of cellular proliferation and cycling (Ki67, PCNA, CDK1 and 6), all of which demonstrated up- or down-regulation in the over-expression and silencing models, respectively (Fig. 4J). Collectively, these data corroborated in vivo our hypothesis that MTCH2 plays an essential role in cellular proliferation and cycle regulation in BC.
MTCH2 exerts its oncogenic role in BC via PI3K/Akt pathway
GSEA revealed that the PI3K/Akt pathway, G2M checkpoint and DNA repair were activated by MTCH2, while the early estrogen response and p53 pathways were suppressed (Fig. 5A&B). To elucidate the underlying mechanism of MTCH2 in BC, as indicated by the GSEA analysis, the PI3K/Akt pathway was examined for its involvement in the MTCH2-induced adjustment of proliferation. As revealed by WB, phosphorylation of PI3K and Akt was enhanced in the MTCH2 over-expressing MCF-7 line and suppressed by MTCH2 silencing in the T-47D line (Fig. 5C&D).
According to published studies, insulin-like growth factor-1 receptor (IGF-1R) activates PI3K/Akt signaling [13]. Therefore, IGF-1R was employed as an activator of PI3K/Akt signaling to further test the vital role of this pathway in MTCH2-driven tumorigenesis. The CCK-8 assay of T-47D cells overexpressing IGF-1R revealed that IGF-1R rescued the anti-proliferative effect of MTCH2 silencing (Fig. 6A). These findings were further confirmed by FCM analysis showing that MTCH2-silenced T-47D cells were arrested at the G0/1 phase, an arrest that could be rescued by IGF-1R overexpression (Fig. 6B-E). Altogether, these data provided concrete evidence that MTCH2 induces cellular proliferation and cycle progression in BC via activation of the PI3K/Akt pathway. A schematic diagram of the molecular mechanism is shown in Fig. 6F.
Discussion
Precision oncology relies on knowledge of genetic alterations in patients to guide targeted therapy. Proteins and pathways currently considered to be involved in BC in clinical practice include the estrogen receptor (ER), progesterone receptor (PR), HER2, cyclin-dependent kinases 4 and 6 (CDK4/6), poly-ADP-ribose polymerase (PARP), histone deacetylase (HDAC) and the PI3K/Akt/mTOR signaling pathway. Globally, the 20-30% of breast cancer cases that are incurable due to distant metastasis call for novel candidates from different aspects of cancer cell pathophysiology.
Mitochondria act as power generators in healthy cells and serve as a storage compartment for apoptogenic factors. These dual functions are maintained at the level of individual proteins, balancing cellular life and death. Mitochondrial malfunction has been reported as an important factor in BC pathogenesis and development. MTCH2 is reported to reside in a complex consisting of tBID and BAX [14]. BID phosphorylation induced by DNA damage is important in cell cycle arrest. Moreover, induction of MTCH2 arrested cellular growth in response to hepatocyte growth factor/scatter factor (HGF/SF), arresting cells at the S phase of the cell cycle [4]; however, induction or transient expression of MTCH2 did not lead to apoptosis.
Besides mitophagy, the ATM-BID-MTCH2 pathway plays a critical role in the DNA damage response (DDR) via regulation of mitochondrial metabolism [15]. Furthermore, MTCH2 was reported to inhibit the action of estrogen, a known regulator of metabolic homeostasis [16] as well as a known culprit in BC. The ESR1 gene encodes the estrogen receptor, a ligand-activated transcription factor located in the nucleus. Mass spectrometry revealed the presence of both ESR1 and MTCH2 in the presence of estradiol, indicating direct binding between the two proteins [17]. Loss of MTCH2 in hematopoietic stem cells (HSCs) promotes a metabolic switch from glycolysis to OXPHOS [18,19]. Consistent with this, knockdown of MTCH2 with small interfering RNA in embryonic stem cells (ESCs) resulted in down-regulation of glycolysis and elevation of OXPHOS. Moreover, MTCH2 emerged from a genetic screen as one of six new loci whose polymorphic variants are associated with increased body mass index (BMI). Among these newly identified loci, MTCH2 was the only gene whose mRNA was not detected in the hypothalamus, suggesting that it could impact the regulation of body mass by acting in the periphery [14]. The interplay of MTCH2 with BC oncogenes is still largely unknown, yet it is worthy of further investigation.
In our study, we verified that MTCH2 regulates cellular proliferation and cycling in BC cell lines and xenograft models. Firstly, analysis of GEO and TCGA data demonstrated that MTCH2 was over-expressed in BC tissue compared with normal controls and correlated with worse prognosis for BC patients. This finding was confirmed in BC cell lines and xenograft models. Subsequently, experiments of suppression and over-expression of MTCH2 provided concrete evidence for its potentiation of BC cell growth and cycle progression. Furthermore, MTCH2 action on cellular growth and the cell cycle was observed in vivo. Both the weight and volume of xenografts positively correlated with MTCH2 expression levels, and IHC staining of proliferative and cell cycle biomarkers further demonstrated the same pattern. These results are consistent with the previous studies mentioned above.
Recently, Guna et al. demonstrated that MTCH2 functions as a mitochondrial outer-membrane insertase, and certain MTCH2 mutants either reduce or increase its insertase activity [20]. MTCH2 overexpression leads to a commensurate decrease in the mistargeting of mitochondrial tail-anchored proteins to the endoplasmic reticulum. MTCH2 is a central 'gatekeeper' for the mitochondrial outer membrane: MTCH2 levels and activity dictate the cytosolic reservoir of mitochondrial tail-anchored proteins, which can be re-routed to the endoplasmic reticulum if successful integration into mitochondria does not occur. Given that the insertion of several MTCH2-dependent tail-anchored proteins is important in apoptosis, MTCH2 activity may affect cellular sensitivity to apoptotic stimuli. We hypothesise that there may be some relationship between MTCH2's insertase activity and cancer progression, which requires further study.
The PI3K/Akt/mTOR signaling pathway is commonly deregulated in many human tumors, including breast cancer [21]. Activation of the PI3K/Akt/mTOR pathway occurs frequently in breast cancer that is resistant to endocrine therapy [22]. The mTOR inhibitor everolimus has been applied in clinical practice for many years. Approved mTOR inhibitors effectively inhibit cell growth and proliferation but elicit PI3K/Akt phosphorylation via a feedback activation pathway, potentially leading to resistance to mTOR inhibitors [22,23]. Thus, specific PI3K inhibitors, such as alpelisib, are indicated in luminal A and B metastatic BC [24], and Akt inhibitors, such as ipatasertib, have shown utility [25].
Met is a heterodimeric receptor tyrosine kinase, and Met-induced mitochondrial protein is an alternative name for MTCH2. Met's docking site recruits signaling transducers, such as PI3K [4]. In a previous study, it was reported that the level of PI3K was upregulated following MTCH2 induction, while phosphorylated PI3K in response to HGF/SF was unaffected by the exogenous induction of MTCH2.
According to our GSEA analysis, the PI3K/Akt pathway is involved in the downstream regulation of MTCH2. In vitro study of BC cell lines revealed that phosphorylation of PI3K and Akt was significantly regulated by MTCH2 expression. Based on the finding that MTCH2 silencing-induced cellular growth and cycle arrest could be rescued by the PI3K/Akt pathway activator IGF-1R, we suggest that this signaling pathway is essential to MTCH2's action on cell proliferation and the cell cycle. Considering that signaling pathways are often targeted by many activators and inhibitors, further study of the activation of PI3K/Akt is required.
There are several limitations to this study. First of all, the connection between MTCH2 and PI3K/Akt signaling needs further verification, given that there might be other potential mechanisms by which MTCH2 may impact BC progression. Secondly, in addition to the PI3K/Akt pathway, MTCH2 was implicated in the regulation of the G2M checkpoint, DNA repair, p53, and early estrogen response pathways. These other pathways may also contribute to the observed changes in cell proliferation and cell cycle regulation, and they require further study, as does the phosphorylation of PI3K/Akt after overexpression of IGF-1R.
Conclusions
Overall, our findings show that overexpression of MTCH2 in BC provokes cellular proliferation and cycle progression via the PI3K/Akt pathway. Given its unique role in mitochondrial metabolism and apoptosis, MTCH2 makes a good candidate for therapeutic manipulation in the treatment of BC.
Fig. 2. MTCH2 expression was upregulated in BC cell lines: (A) qPCR, (B) Western blot and (C) quantitation of Western blot band intensities. It was successfully suppressed and activated by the corresponding lentiviral systems: (D) qPCR, (E) Western blot and (F) quantitation of Western blot band intensities.
Fig. 3. MTCH2 promotes cellular growth and cycle progression. The CCK-8 assay revealed increased cell viability in MTCH2-overexpressing cell lines (A) and suppressed viability in silenced cells (B). Representative graphs of cell cycle analysis by PI staining and flow cytometry are shown (C-E); cell cycle transition was arrested in silenced cells (F-I). MCM2, PCNA, Cyclin E1 and CDK2 were up-regulated in MTCH2-overexpressing cells and suppressed in MTCH2-silenced lines (J). Abbreviations: CCK-8, cell counting kit-8; PI, propidium iodide.
Fig. 4. Xenografts in nude mice. Compared to the control group on the right side, overexpression of MTCH2 promoted tumor growth (A&C); silencing of MTCH2 suppressed tumor growth (B&D). Volumes of xenograft tumors were up- and down-regulated by MTCH2 over-expression and silencing, respectively (E&F). Weights were also up- and down-regulated by MTCH2 over-expression and silencing (G). (H&I) Protein levels of MTCH2 were confirmed by Western blot in the over-expression and silenced models. (J) Immunohistochemical staining for biomarkers of cellular proliferation and cycle. Scale bar, 100 μm.
| 2024-03-22T15:37:14.215Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "5739b2b2a824043392ac81be2ab1a44f0641b422",
"oa_license": "CCBYNC",
"oa_url": "http://www.cell.com/article/S2405844024042038/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4bfd87d14f7f9702b0b762a1e4481767f105bd49",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
11477503 | pes2o/s2orc | v3-fos-license | Home-based voluntary HIV counselling and testing found highly acceptable and to reduce inequalities
Background Low uptake of voluntary HIV counselling and testing (VCT) in sub-Saharan Africa is raising acceptability concerns, which might be associated with the way it is offered. We investigated the acceptability of home-based delivery of counselling and HIV testing in urban and rural populations in Zambia, where VCT has been offered mostly from local clinics. Methods A population-based HIV survey was conducted in selected communities in 2003 (n = 5035). All participants stating willingness to be HIV tested were offered VCT at home, and all counselling was conducted in the participants' homes. In the urban area post-test counselling and giving of results were done the following day, whereas in rural areas this could take 1-3 weeks. Results Of those who indicated willingness to be HIV tested, 76.1% (95%CI 74.9-77.2) were counselled and received the test result. Overall, there was an increase in the proportion ever HIV tested from 18% before provision of home-based VCT to 38% after. The highest increase was in rural areas; among young rural men aged 15-24 years the proportion rose from 14% to 42%, vs. from 17% to 37% for urban men. Test rates by educational attainment changed from being positively associated with education to being evenly distributed after home-based VCT. Conclusions A high uptake was achieved by delivering HIV counselling and testing at home. The highest uptakes were seen in rural areas, in young people and in groups with low educational attainment, resulting in substantial reductions in existing inequalities in accessing VCT services.
Background
Voluntary HIV counselling and testing (VCT) has been strongly promoted as essential in reaching universal access to HIV prevention, care, support and treatment, and the services have been scaled up in many low- and middle-income countries. However, access and uptake are still considered to be very low, and where VCT is readily available demand has often been surprisingly low [1-7]. The striking gap between what people say they would like to do and what they actually do when services are offered indicates that the way the services are provided has a low acceptability in the population. It has also been shown that in many settings uptake of VCT has been positively correlated with factors such as male gender, higher educational attainment, and urban residence [1-3, 8-10]. Such differences in use of HIV testing and counselling might be indicative of inequalities in access. However, the reasons for differential use are poorly understood [7]. Numerous studies have concluded that there are serious barriers to use related to the way services are offered, particularly indicated by the disappointingly low acceptability of facility-based testing [11-13]. Barriers of this kind are likely to be socially patterned, and investigating inequalities in use raises important methodological considerations. Measuring acceptability will be critical in this regard, and the most valid methods will be those that actually offer VCT to assess its acceptability [3,11].
The low demand for VCT calls for innovative ways of offering VCT [7,14]. Several alternative service designs for VCT have been explored, such as workplace VCT [13], mobile VCT [15] and home-based VCT [7,11,12,16-19], with substantial increases in acceptability compared to regular clinic-based VCT. However, there has been a strong movement within the international AIDS community to shift from voluntary to routine testing. Routine testing is now recommended for all individuals attending any health facility in countries with generalized epidemics, and evidence of some increase in the proportion ever tested has been documented [7,20,21]. However, routine testing seems to be based on the belief that testing is the key tool for HIV prevention, and concerns have been raised from a preventive perspective due to the low emphasis being placed on counselling for risk reduction [22].
Almost 25 years after the first case of AIDS was reported in Zambia, the country still faces severe epidemics. The HIV prevalence is estimated to be about 15% among 15-49 year olds [23], but great geographical differentials in magnitude and trends have been revealed [24]. However, there is evidence of overall declines in HIV prevalence being associated with reductions in risk behaviours among young people. These declines are largely associated with educational attainment [25,26]. Despite rapidly improving availability of VCT in Zambia, the proportion reporting being tested for HIV is still low [27,28]. A randomised trial on the acceptability of VCT revealed home-based VCT to be highly acceptable in an urban setting [12]. We offered home-based VCT to all participants in a population-based HIV survey conducted in selected urban and rural areas, and we investigated the intention of being tested for HIV, acceptability, and to what extent home-based VCT affected inequalities in HIV test rates in rural and urban settings.
The population survey
The data stem from a population-based survey conducted in Zambia in 2003; details of participation rates and overall methodology have been reported elsewhere [29]. Similar population-based surveys were conducted in the same areas in 1995 and 1999 [12]. The survey employed stratified random cluster sampling of selected communities in selected urban and rural areas of Lusaka and Kapiri Mposhi districts, respectively. Ten clusters in each district were selected using probability proportional to size sampling. All household members aged 15-49 years who lived in the selected clusters were invited to participate in the study. The number of participants was 5035. The survey used structured questionnaires and face-to-face interviews to collect information from the participants on socio-demographic factors, health and sexual behaviour. All interviews were conducted at the household level.
Voluntary counselling and testing
At the end of the interview conducted as part of the population-based survey, participants were asked if they were willing to have HIV testing arranged for them at their home or at any convenient place. All who expressed willingness (intention) to test for HIV were then followed up by trained counsellors who were part of the study, and two senior counsellors acted as supervisors during the period of service provision. The counsellors visited consenting participants (those expressing willingness) at home for pre-test counselling shortly after the interview, i.e. on the same day or the following day. When participants gave their consent, blood for HIV testing was collected and taken to the nearest VCT laboratory for testing. In the urban area, post-test counselling and HIV results were offered at home the following day, whereas in rural areas this process could take longer, often 1-2 weeks, due to long distances. More than 90% of those accepting VCT preferred to be counselled and to receive the result at home, and only a few preferred to receive the services at the local VCT centre. It was essential to maintain confidentiality at all times during the counselling sessions. The counsellors reported challenges in some households in terms of finding a convenient place where privacy could be secured.
All HIV testing was carried out at the local clinic using the same testing strategies as the national guidelines for VCT. The BIONOR HIV-1 & 2 (BIONOR AS, Skien, Norway) paramagnetic particle assay was used as the first test. All reactive samples were tested again using the rapid test Capillus HIV-1/HIV-2 (Cambridge Biotechnology, Galway, Ireland). Services were offered free of charge, but no particular strategy was instituted in terms of long-term follow-up services to HIV-infected individuals other than providing information about existing support and care opportunities. The counsellors recorded outcome information (whether persons were tested and received the result). This information was then added (as a new variable) to the data from the population-based survey.
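The serial testing strategy described here (screen every sample with the first assay, retest only reactive samples with the rapid test) can be summarized in a few lines of code. The sketch below is a simplified illustration only; the labels for discordant results are an assumption, since in practice such samples would be resolved according to the national testing algorithm.

```python
from typing import Optional

def classify_sample(first_test_reactive: bool,
                    second_test_reactive: Optional[bool]) -> str:
    """Serial two-test strategy: a non-reactive first test ends the algorithm;
    reactive samples are retested with the rapid assay."""
    if not first_test_reactive:
        return "HIV negative"
    if second_test_reactive is None:
        return "reactive on screening; confirmatory rapid test pending"
    if second_test_reactive:
        return "HIV positive"
    return "discordant; resolve per national algorithm"  # simplifying assumption

print(classify_sample(False, None))   # non-reactive on the first assay
print(classify_sample(True, True))    # reactive on both assays
print(classify_sample(True, False))   # discordant results
```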
Analytical strategy
Intercooled Stata version 8 for Windows (Stata Corporation, College Station, Texas, USA) was used for data analysis. All tests for statistical significance took into account the sampling design effect by using the survey data analysis function in Stata.
We employed five measures in the analyses: 1) "Intention" (or willingness) was measured based on the question "Would you like us to arrange for you to be HIV tested?"; 2) "Before" was defined as the proportion reporting ever having been tested for HIV before home-based VCT was offered, and is thus equal to past exposure to testing as measured in the survey (ever tested); 3) "Uptake" was measured as the proportion tested and receiving the result as a consequence of offering home-based VCT; 4) "Acceptability" was defined as the proportion of individuals who intended to be tested and received their results [3]; 5) "After" was measured as the proportion of all survey participants ever tested after having been offered home-based VCT, i.e. exposure to testing among all participants in the population-based survey after the home-based VCT intervention, calculated by updating the survey data with the data on uptake. Logistic regression was used to test differences between groups and changes in the distribution of exposure to HIV testing by selected socio-demographic characteristics (age, sex, marital status, residence, educational attainment), comparing the situation before and after offering home-based VCT.
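To make the five measures concrete, the sketch below computes them from a small, entirely hypothetical participant-level table; the column names are illustrative and do not correspond to the actual survey dataset.

```python
import pandas as pd

# Hypothetical participant-level data: one row per survey participant
df = pd.DataFrame({
    "ever_tested_before":  [0, 0, 1, 0, 1, 0],  # ever HIV tested before the offer
    "intention":           [1, 1, 1, 0, 0, 1],  # willing to have testing arranged
    "tested_and_received": [1, 0, 1, 0, 0, 1],  # counselled, tested, result received
})

before = df["ever_tested_before"].mean()                 # "Before"
intention = df["intention"].mean()                       # "Intention"
uptake = df["tested_and_received"].mean()                # "Uptake"
# "Acceptability": tested and received the result, among those who intended to test
acceptability = df.loc[df["intention"] == 1, "tested_and_received"].mean()
# "After": ever tested once the uptake data are merged back into the survey data
after = (df["ever_tested_before"] | df["tested_and_received"]).mean()

print(before, intention, uptake, acceptability, after)
```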
Ethics
The survey protocol received clearance from the University of Zambia Research and Ethics Committee. Participation in the study was based on written consent.
Results
Table 1 gives an overview of past experience with HIV testing (before), intention to be tested, and the decisions to use the home-based services.
VCT intention and acceptability
The counsellors did not report negative life events following their services. They reported being very well received by the households and the community, and this is in agreement with the data showing that 76.1% (95%CI 74.9-77.2) of those who indicated willingness (intention) to be tested, i.e. 32% of participants, accepted the services offered (Table 2). There was no difference in acceptability by past testing exposure, i.e. whether tested or not tested in the past (Table 2). Acceptability did not differ by sex but was higher in rural compared to urban areas (83.6% vs. 70.7%; age-adjusted odds ratio (AOR) 0.5, 95%CI: 0.32-0.68). VCT intention was somewhat higher in those reporting being tested in the past vs. not tested (37.5% vs. 30.4%, AOR 1.4, 95%CI: 1.16-1.68) and tended to be higher among men than women (AOR 1.4, 95%CI: 1.16-1.63). Among those who accepted home-based VCT, 20.6% had been previously tested for HIV.
Change in exposure to testing
Before home-based VCT was offered, HIV testing exposure was generally low with significantly higher levels in urban than rural areas, i.e. 20.4% vs. 14.2% (AOR 1.7, 95%CI: 1.41-2.04). Exposures were particularly low in rural participants aged 15-19 years (Figure 1). After offering home-based VCT there was no difference in test rates between urban and rural areas (AOR 1), and the increase in exposure was substantial regardless of age-group.
The urban-rural stratified analyses in Table 3 show a higher likelihood of being exposed to testing among the married than the unmarried and among women than men before home-based VCT was offered. After the services had been offered, these differences were not statistically significant (Table 3). In the rural areas, exposure was not associated with sex or marital status in either of the two situations. Finally, the likelihood of exposure to testing tended to be biased towards the highest educated before the services were offered (only significant in the rural setting). The offering of home-based services appeared to have reduced this inequality, as seen in the loss of a significant difference between the extremes of educational levels in both urban and rural areas.
Discussion
The aim of equal access to HIV prevention, care, support and treatment is an important objective of national HIV programmes worldwide, and VCT is seen as the critical entry point in this regard. High acceptability was achieved when VCT was offered at home to all participants of a population-based survey, i.e. 76% of those expressing willingness to be tested were actually counselled and tested shortly after being offered the services as part of a population-based survey. Importantly, the home-based model of offering counselling and testing was found to have substantial effects in terms of reducing differences in HIV test rates, observed as a reduction or disappearance of differences according to gender, residence and educational attainment. The findings of high acceptability reaffirm results from a previous randomised trial [12]. The trial was conducted in an urban setting, and participants were randomly allocated to VCT at the local clinic or at an optional location, which was for most participants the home. Acceptance was strikingly different, i.e. uptake was 4.7 times higher in the group allocated to home-based compared with clinic-based VCT. The present findings showed comparable acceptability effects in rural settings. Similar findings have been reported in other countries in sub-Saharan Africa, where offering home-based VCT has led to increased use [11,18,19,30,31]. Health care facilities are the most frequently used location for VCT, and these findings indicate strong acceptability barriers to clinic-based VCT, which might be an important explanation for the low demand for HIV testing in sub-Saharan Africa.
Our assumption was that offering HIV testing at home would not be an attractive option for young people. However, the home-based model appeared particularly acceptable to young people, as indicated by the tenfold increase in the proportion ever tested for HIV among those aged 15-19 years in rural areas (from 3% to 25%). This finding seems to agree with a recent study conducted in Zambia showing that young people ask family members for advice before seeking VCT and that disclosure to family members is common [32]. Similarly, a study in South Africa found that adolescents were ready to disclose their HIV status to family members and that they judged clinic-based VCT services to be inappropriate as youth services [33]. These are indications that home-based VCT may be the youth-friendly model that has long been sought.
HIV surveys conducted in Zambia have shown higher exposure to HIV testing in urban than in rural populations [3]. In this study, home-based VCT led to a marked reduction in the rural/urban differences in test rates. High acceptability was also achieved in remote areas in spite of the relatively long waiting time from pre-test counselling and HIV testing to delivery of the result, i.e. 1-2 weeks for the most remote areas vs. the next day in the urban setting. It is likely that this reflects the unmet need for VCT in the rural areas caused by the substantial geographical inequality in availability that still persists. It should be noted that this striking result was achieved as part of a population survey and not as an ordinary programme. Achieving similar effects when scaling up such services is likely to depend on the extent to which capacities and resources are evenly distributed. This point was illustrated in a study in South Africa showing an accentuation of urban-rural inequalities after scaling up HIV services (including VCT services) [14].
Research indicates that gender shapes attitudes toward HIV testing in many ways, but there are no studies from high-prevalence countries attempting to examine gender differences in this regard. Our consistent finding, regardless of age and residence, was that a higher proportion of men than women intended to be HIV tested. This is consistent with the findings from a survey conducted in 1995 in the same area [3]. Before the home-based services were offered, urban women were 1.5 times more likely than men to have been tested, whereas no such difference appeared in the rural setting. As an effect of higher uptake of home-based VCT, the differences disappeared in the urban area and were reversed in the rural area. This seems to be in accordance with some studies showing that women worry more about HIV and fear testing more than men [7]. A likely explanation of the higher test rate among urban women in the "before" data is that women have been offered testing as part of prevention of mother-to-child transmission programmes. This is supported by the observation of substantially higher test rates among men than women in the mid-1990s, before such programmes had been initiated [3]. A population-based survey conducted in 1995 in the same areas as the present survey revealed that the likelihood of being HIV tested was strongly associated with educational attainment, i.e. the most educated were 3 times more likely to have been tested than the least educated [3]. Before our home-based services were offered, the likelihood of ever being tested also tended to be biased towards the most educated, but the differences were reduced after the intervention. Many studies in sub-Saharan Africa have shown that HIV infections were more common among individuals with higher levels of educational attainment. However, more recent data suggest that this pattern has changed and that new infections are concentrating among less educated individuals [29,34,35]. In Zambia this has been observed in young adults, in whom differential survival according to level of education is unlikely, suggesting that these trends may reflect HIV incidence patterns and behaviour change [26,29], i.e. stable HIV prevalence among the less educated versus marked declines among the more educated. This evidence supports a strategy of giving high priority to preventive efforts reaching the least educated and poor. The observed indications that home-based VCT reduces inequalities suggest that this model could be an important part of an HIV preventive package, provided that a strong focus is kept on preventive counselling. From a preventive perspective, concerns have been raised about routine testing, particularly due to the limited emphasis placed on counselling [22]. Findings from a prospective cohort study in Zimbabwe of very serious unintended increases in risk-taking following receipt of a negative test result might be seen as a particular warning sign regarding the effects of a lost focus on preventive counselling [1].
Conclusion
In summary, this alternative strategy of offering VCT was confirmed to be highly acceptable also in rural settings. Moreover, the home-based strategy appeared to substantially reduce existing inequalities in access. The consistency of findings of exceptionally high acceptability in other high-prevalence countries indicates a high degree of generalizability in the context of southern Africa. However, the extent to which communities accept this home-based model might differ between a situation in which it is offered as part of a survey and a situation in which these services are scaled up.
[Figure: Percent ever HIV tested by age-group in rural and urban areas. "Before": % ever tested before offering home-based VCT; "After": % ever tested after offering home-based VCT.]
Large-scale implementation of home-based VCT models might thus be premature, and there is an urgent need for further research efforts to examine the feasibility, acceptability, preventive effects, cost-effectiveness and negative life events of home-based VCT in community randomised trials. | 2017-04-02T14:57:01.817Z | 2010-06-17T00:00:00.000 | {
"year": 2010,
"sha1": "c37853c88c5b1a748d9acfad2c446cffc51b24d5",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-10-347",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6cb1864d109a4db3a6ba396280be6741d0f2ddf6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12780077 | pes2o/s2orc | v3-fos-license | Assessment of usefulness of synchrotron radiation techniques to determine arsenic species in hair and rice grain samples
The arseniasis in Southwest Guizhou, China has been identified as a unique case of endemic arseniasis caused by exposure to indoor combustion of high As-content coal. The present investigation targeted the microdistribution and speciation of arsenic in human hair and environmental samples collected in one of the hyper-endemic arseniasis villages of the area. Analyses were performed by micro-beam X-ray fluorescence (μ-XRF) and X-ray absorption fine structure (XAFS). Total As in hair samples of diagnosed patients was detected at almost the same level as in their asymptomatic neighbors. Concentrations in the lateral cut of hair samples followed a high-low-high pattern (from surface to center). XAFS revealed the coexistence of both the As+3 and As+5 states in hair samples. However, the samples from patients displayed a tendency toward higher As+3/As+5 ratios than those of the asymptomatic fellow villagers. The μ-XRF mapping of rice grains shows that arsenic penetrates the endosperm, the major edible part of the grain, when rice grains are stored over the open fire of high As-content coal. Synchrotron radiation techniques are suitable to determine arsenic species concentrations in different parts of hair and rice grain samples. As arsenic penetrates the endosperm, rinsing the rice grains with water will remain largely ineffective.
INTRODUCTION
Due to various geological mechanisms or anthropogenic activities, an altered distribution of certain chemical elements occurs at the surface in some parts of the earth. The over- or under-abundance of certain chemical elements results in an imbalance of element exchange between human bodies and the environment. If the exchange exceeds the normal range that the organism is able to buffer or tolerate, certain kinds of health disorders will emerge. The major biogeochemical-abnormality-related endemics known so far in China include endemic arseniasis (which covers two kinds of As exposure routes: one via As-polluted drinking water sources and another via indoor combustion of high As-content coal), fluorosis (via polluted water or via indoor air pollution from high fluorine-content coal burning and from brick tea), iodine deficiency disorders (IDD), and other diseases.
Several villages in Southwest Guizhou Bouyei (Buyi) and Hmong (Miao) Ethnic Minority Autonomous Prefecture, Southwest China represent a unique case of endemic arseniasis related to indoor combustion of high arsenic-content coal, not to As-polluted drinking water sources (Jin et al., 2003; Zheng et al., 2005; Liu et al., 2002).
Since the early 1960s, as local woods and bushes have been depleted, villagers in the above-mentioned area have burned local high-As coal in poorly ventilated or unventilated stoves (without chimneys) for cooking, heating, and drying crops and food. The highest As concentration detected in local coal was 3.2-3.5 % (Ding et al., 2001). Since the early 1970s, thousands of arseniasis cases have emerged. The area where the target village of the present investigation is located was the first one reported (Zhou et al., 1993). Most of the cases diagnosed by the end of the millennium (1,386 out of 2,241 cases) are clustered in the township where the target village of the present investigation is located (Jin et al., 2003).
Most of the work about this endemic population reported so far has focused on environmental causes and confirmed the causality between indoor burning of high arsenic-content coal and the excess prevalence of arseniasis cases in the rural population (Jin et al., 2003; Zheng et al., 2005; Liu et al., 2002; Zhou et al., 1993). Variation in individual susceptibility to chronic poisoning by inorganic arsenic exposure has been suggested (Vahter, 2000). In our early field survey conducted in 2002, a remarkable ethnicity-dependent (or clan-dependent) difference in arseniasis prevalence was observed among residents of different ethnic origin and various clan relationships in the same village. This was surprising, as the families of different ethnicities and clans have been living together in the same village for generations (Lin et al., 2003) and were exposed to indoor combustion of local high-As coal for quite similar durations (Lin et al., 2006). An array of host factors has been shown to be related to differences in susceptibility to arseniasis risk under this unique As exposure scenario (Lin et al., 2006, 2007, 2010a). New techniques for the in situ detection of the micro-distribution and speciation of arsenic in specimens of exposed persons and in the polluted environment are required to further expand our knowledge of the various risk factors. This may pave the way for a quantitative understanding of all the factors which might influence the excess risk of arseniasis, irrespective of whether they are exposure-related, host-related, or related to a combination of both. The use of synchrotron radiation techniques, such as micro-beam X-ray fluorescence (μ-XRF) and X-ray absorption fine structure (XAFS), might be a promising option (Gault et al., 2008). Both μ-XRF and XAFS are nondestructive physical procedures and have been widely used to detect, quantify, and map element content and speciation of samples in their natural state. In this work, the distribution and speciation of arsenic within single human hairs or rice grains was determined using the well-established μ-XRF mapping procedure and the XAFS technique at the Shanghai Synchrotron Radiation Facility (SSRF, Shanghai, China).
Sample collection
Human hair samples as well as rice and corn samples were collected in one of the hyper-arseniasis-endemic villages exposed to indoor combustion of local high-As content coals for decades.
A cross-sectional epidemiologic field study was conducted on all members of both clans in the target village in April 2004. Only the members related by blood and their spouses were included. The arseniasis cases in both ethnic clans were diagnosed according to "Diagnosis guideline for arseniasis, WS/T 211-01" issued by the Chinese State Ministry of Health. On the basis of the epidemiologic field study an Excel-based database was created. The samples for this research project were collected from the target village considering the representation of the population investigated (ethnicity, lineage, gender, diagnosed disease, etc.).
The hair samples were donated by diagnosed arseniasis patients as well as by their asymptomatic fellow villagers. All participants gave their formal consent. The participants included 10 diagnosed arseniasis patients (7 males and 3 females), 47.6±13.0 years old (mean±SD), and 6 arseniasis-asymptomatic fellow villagers (3 males and 3 females), 49.5±17.4 years old. The rice and corn samples that had been kept over the open fire of high As-content coal for drying were collected from farmer families in the same endemic village. The control samples of rice and corn (which were never baked over an open fire of high-As coal) were taken from a non-arseniasis-endemic township in the same county, where the As concentration of the coal used by farmers for domestic purposes proved to be within the normal range. Hair samples were also collected from a unique thallotoxicosis-endemic village in the same county (n=6, incl. 3 males and 3 females; thereof 3 diagnosed thallotoxicosis patients and 3 thallotoxicosis-asymptomatic individuals, 49.5±17.4 years old). This village is the only reported case worldwide of thallotoxicosis caused by natural exposure (a biogeochemical abnormality of the element thallium in soils near the village) rather than by purely direct anthropogenic causes such as poisoning or accident (Xiao et al., 2004b). The exposure route for thallium was eating crops grown on thallium-enriched soil. There is no statistically significant age difference among the subgroups of rural residents (P=0.846).
Synchrotron radiation micro-beam X-ray fluorescence spectrometry (μ-XRF) and X-ray absorption fine structure (XAFS) analysis
μ-XRF experiments were carried out on the BL15U beamline station at the SSRF (Shanghai Synchrotron Radiation Facility). Monochromatic light was obtained using a Si (111) double crystal and then focused to a specified beam size using a K-B mirror. A silicon drift detector (SDD) was used to record the characteristic fluorescence spectra of elements in the samples. Samples were mounted on a stage, which can drive the sample step by step with a step resolution of 3 μm.
The X-ray absorption fine structure spectra were recorded at the BL14W station at the SSRF. Monochromatic light was obtained using a Si (111) double crystal monochromator, with a scanning energy step of 0.5 eV. The XAFS spectra were recorded in fluorescence mode using a 4-element SDD. A filter was placed between the detector and the sample to suppress light scattering.
Inductively coupled plasma mass spectrometry (ICP-MS) analysis
Human hair samples were firstly cut into 1 cm long fragments and rinsed with ethanol twice. Samples were dried at room temperature, with a relative humidity of about 30 %. The sample digestion procedure was based on the method published by Uchino et al. (2006) with minor modification: samples were loaded into a Teflon digestion vessel with 3 ml of 35 % nitric acid and 1 ml of hydrogen peroxide. The digestion vessels were then placed in a high-pressure microwave (Ethos 320; Milestone, Italy). A four-stage temperature program with a maximum temperature of 180 °C and a total digestion time of 31 min was used.
The total As measurements were carried out on an X series 7 ICP-MS instrument (Thermo Scientific, USA) equipped with a concentric nebulizer and hexapole collision cell technique (CCT). ICP-MS operating conditions were as follows: radio frequency (RF) power was set to 1350 W; carrier gas flow and peristaltic pump rates were 1 ml min-1 and 25 rpm, respectively. The dwell time was set to 10 ms for assay quality control. To validate the measurement of As in hair samples, human hair master standard GBW9101b was employed as certified standard reference material (SRM) for quality assurance.
Statistical analysis
To analyze differences between diagnosed arseniasis patients (reference), asymptomatic fellow villagers, and residents of a village endemic for chronic thallotoxicosis, an ANOVA (STATISTICA 6.0, StatSoft Inc.) was performed on the ICP-MS As-75 content data as well as on the XAFS As+3/As+5 ratio data. P values <0.05 were regarded as significant.
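As an illustration of this comparison, the sketch below runs a one-way ANOVA with SciPy on hypothetical hair-As values for the three groups; the group labels mirror the study design, but the numbers are invented placeholders rather than the study data.

```python
# Hypothetical illustration of the one-way ANOVA used to compare the three groups;
# all values below are placeholders, not measurements from this study.
from scipy import stats

patients = [2.1, 1.8, 2.4, 2.0, 1.9]            # diagnosed arseniasis patients
asymptomatic = [2.0, 2.2, 1.7, 2.3, 1.8]        # asymptomatic fellow villagers
thallotoxicosis_village = [1.9, 2.1, 2.2, 1.8]  # residents of the thallotoxicosis-endemic village

f_stat, p_value = stats.f_oneway(patients, asymptomatic, thallotoxicosis_village)
print(f"F = {f_stat:.3f}, P = {p_value:.3f}")   # P < 0.05 would be regarded as significant
```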
Element distribution along the human hair
The arsenic distribution along the hair samples from diagnosed arseniasis patients was mapped along the axial direction. A part of the spectrum is displayed in Figure 1. The distribution of arsenic along the hair is largely homogeneous, indicating that the exposure environment in this village prior to sampling was basically stable.
Arsenic distribution and species in hair of symptomatic patients
The arsenic distribution in hair from diagnosed patients was studied by synchrotron radiation micro-beam X-ray fluorescence spectrometry (μ-XRF) mapping and is presented in Figure 2. A tendency toward a "high-low-high" pattern (from the surface to the center) was observed. The spectrum shows that As was concentrated at the surface and, more pronounced, at the core (medulla) of the hair. The XAFS spectra of human hair collected from an arseniasis hyper-endemic village, as well as of four reference chemicals with different chemical valences of arsenic, are displayed in Figure 3. Samples 5, 6, and 10 are from diagnosed arseniasis patients, while samples 3, 9, and 16 represent arseniasis-asymptomatic individuals from the same endemic village. Among the patients, the dominant As species was mostly As+3, whereas among the asymptomatic fellow villagers As+5 dominated. From the toxicological point of view, As+3 compounds are more toxic than As+5 (Hirano et al., 2003; Styblo et al., 2000). Left vertical line: energy for As3+; right vertical line: energy for As5+. As2O5, NaHAsO4: references for As5+; NaAsO2, As2O3: references for As3+.
The ratio of As +3 /As +5 in human hair samples
The XAFS data of hair samples from all individuals, both arseniasis patients and their asymptomatic neighbors, are displayed in Table 1. The data show that the ratio in ethnic Hmong individuals was slightly higher than in their ethnic Han neighbors (0.90±0.45 vs. 0.77±0.68, F=0.092, p=0.767). Patients showed a higher dominance of the As+3 state over As+5 (0.94±0.66 vs. 0.58±0.54, F=1.037, p=0.330). If the comparison is restricted to the ethnic Han villagers, the most arseniasis-susceptible ethnic group in this area (Lin et al., 2006; Chen et al., 2009), the deviation increases further, though statistical significance is still not reached (0.99±0.69 vs. 0.26±0.26, F=2.995, p=0.122). Table 2 displays the ICP-MS data for As-75 in the hair samples of villagers, both diagnosed patients and asymptomatic individuals, living in the same hyper-endemic village. The As-75 levels were almost equal in the two groups of villagers (F=0.001; P=0.981). A comparison was made with a group of farmers from a chronic thallotoxicosis-endemic village (caused by a rare kind of biogeochemical abnormality of the element thallium) in the same county (Xiao et al., 2004b). The specimens from thallotoxicosis-endemic villagers show the same arsenic level as those from arseniasis-endemic villagers in the same county (F=0.018; P=0.896). It is worth noting that no typical arseniasis symptoms, e.g. dermatological symptoms, have been diagnosed so far in the thallotoxicosis-endemic village, although its residents have at the same time been exposed to very high levels of As, and unusual body burdens of As were recorded there, too.
Arsenic distribution in rice grain
The arsenic micro-distributions in rice grains collected in the hyper-endemic village (A) and at control sites (B) in the same county are displayed in Figure 4. An unexpectedly high density of arsenic was easily found in the husk of the rice grains that had been baked over fire in the hyper-endemic village. Surprisingly, a very high density of As was also found in the grain endosperm, the major edible part. However, As was not found at a detectable level in the embryo. Note that the y-axis scales of the two mapping spectra are not identical. In contrast, As was hardly detected in any part of the control samples, which were collected from a non-arseniasis area in the same county that was never exposed to any kind of As pollution. Similar pictures were also obtained when μ-XRF mapping was applied to the corn samples from the same areas (the mapping spectra of the corn samples are not shown).
DISCUSSION
This is the first report of a study applying synchrotron radiation techniques, previously applied by Finkelman and colleagues to high-arsenic coals in this area (Finkelman et al., 1999), to food and hair samples in this rare case of endemic arseniasis related to indoor As exposure. μ-XRF and XAFS are non-destructive physical approaches which need no chemical preparation procedures prior to the assay. The non-destructive, in situ assays provided new, stimulating data that cannot be obtained by any conventional chemical or biochemical technique. The present approach will presumably also promote the investigation of other population health problems triggered by biogeochemical abnormality, such as endemic thallotoxicosis (Xiao et al., 2004b) or mercury poisoning (Feng and Qiu, 2008) observed in the Southwest Guizhou area.
Usually, arsenic levels in hair, especially in females, can serve as a biomarker of short-term (<1 year) internal dose. In forensic medicine practice, variation of As levels in different sections of the same hair has been successfully used to recapitulate a chronic poisoning process and the approximate dosage applied. An example is the scientific deduction of possible historical events on the island of St Helena during 1815-1821, when Napoléon Bonaparte was suspected of having been gradually murdered by deliberate arsenic poisoning (Weider and Fournier, 1999). In our case, the As levels along the hair samples were found to be largely homogeneous. This suggests that the exposure environment in this endemic village was basically stable around the time of sampling, so that any part of the hair samples would be suitable for the assay.
Figure 4: Micro-beam X-ray fluorescence spectrometry (μ-XRF) mapping of a rice grain. A: stored over the fire of high As-content coal (x-axis: length of rice grain, y-axis: width of rice grain); B: not baked over fire. Note: quite different As concentrations detected (A: range 500-2500, B: range 4-12).
In our preliminary work in cooperation with SSRF, the X-ray fluorescence spectra of hair samples from the same endemic village suggested that there were no noticeable As peak area differences between the two groups of residents (patients and their asymptomatic neighbors; preliminary data not shown). Furthermore, ICP-MS assays performed on all human hair samples collected gave almost identical results for both groups of residents (F=0.001, P=0.981). This is totally different from observations reported so far from arseniasis-endemic areas related to As-contaminated drinking water sources in this country and worldwide. To exclude any possible interfering factors, we carefully checked the decades-long medical surveillance records archived at the Prefecture Centre for Disease Control (CDC) and established that the last medical dearsenization intervention with dimercaprol in this endemic population was conducted in 1990-1991 and covered only about 500 local people, a small portion of the local population. Based on all the information available, it is reasonable to assume that the dearsenization intervention in part of the local residents in the early 1990s does not have a long-lasting impact on the As body burden in this exposed population.
The "hypernormal" phenomenon observed in this investigation is well consistent with our previous observations. In our early work conducted in 2000 and 2004 in the same population (Lin et al., 2003, 2006, 2010a), we found that the As internal dose, expressed as total As level in hair and urine, in the subgroup of diagnosed arseniasis patients was not higher than in the subgroup of asymptomatic neighbors. The internal As dose of ethnic Hmong subjects was found to be significantly higher than that of their ethnic Han neighbors (p<0.001 for hair As and p<0.01 for urine samples), although the arseniasis prevalence in Hmong farmers was significantly lower than among the Han farmers (5.9 % vs. 32.7 %, OR: 0.12, 95 % CI: 0.06-0.27, P=3 x 10^-10) (Lin et al., 2006).
Our current work confirmed quite similar As levels in the hair samples collected from a thallotoxicosis-endemic village located only about 10 km from our target arseniasis-endemic village. The exposure route for thallium is via locally produced food (Xiao et al., 2004a). Furthermore, these authors stated that local food is also the source of elevated urinary levels of arsenic and mercury. Thus, the impact of mercury should not be ignored. However, to the best of our knowledge no data on the local coal are available - in contrast to coals from the most eastern part of that province, which is about 400 to 500 km from the investigated target village. Chemical analysis of a coal sample used in another village (approximately 40 km from the investigated village) showed a mercury concentration of 55 ppm (Finkelman et al., 1999). Interestingly, no typical arseniasis symptoms have ever been diagnosed in addition to the thallotoxicosis among these residents so far. All of these observations might suggest that exposure to high levels of As in the environment, or a high internal As dose, is not inevitably related to an increased prevalence of arsenic-related skin symptoms.
The X-ray absorption fine structure (XAFS) analysis revealed the coexistence of both the As +3 and As +5 states in every hair sample tested. However, the ratio of As +3 / As +5 varied greatly from person to person. Generally, the diagnosed patients displayed a clear tendency of higher As +3 / As +5 ratios than the asymptomatic fellow villagers. This finding is in line with the basic concept of toxicological science that As +3 is more toxic than As +5 .
Variation in individual susceptibility to chronic arsenic poisoning has been suggested by Vahter (2000). A remarkable clan aggregation was found in our preliminary survey in this village in April 2002 (Lin et al., 2003). In a field investigation conducted in 2004, the ethnicity-dependent (or clan-dependent) variation in arseniasis prevalence was confirmed in this village, even though all the families had been living together in the same village for generations and were exposed to similar indoor combustion of local high-As coal for quite similar durations (Lin et al., 2006). A few host factors, such as polymorphisms at several genomic loci, were shown to be associated with the modulation of arseniasis risk in this exceptionally exposed rural population (Lin et al., 2006, 2007, 2010a). Recent data might lend further support to the pivotal role of host factors.
For decades, the villagers in the endemic area have been persistently taught by local medical personnel and local government officials that grain and vegetables must be washed or rinsed thoroughly with water to remove As before they are cooked or ingested. The μ-XRF mapping showed, for the first time, that arsenic pollution of rice grains is not restricted to the surface: arsenic has also penetrated deep inside the grain and accumulated in the endosperm, the major part of daily nutrition, at a level which can no longer be ignored. Only a limited number of grain samples have been mapped by the μ-XRF technique, since the SSRF machine time for the present study was limited. Therefore, our observation can only be considered preliminary, and more work is required. Our new data on the micro-distribution of As deep inside the grain may explain why the major item of the teaching programme for local residents, i.e. rinsing the grain with water several times, is largely ineffective. The results of this study should prompt investigations of whether rinsing with water also remains largely ineffective when applied to chili or corn, foods which are very important for the arsenic load of the villagers in this area.
It must be mentioned that exposure conditions in this village have improved over the last decade or so, since a series of countermeasures has been put forward in the endemic area by local authorities and other agencies, as we saw in our field investigation in April 2004 (Lin et al., 2006, 2010a). Our 2004 data also documented a significant decrease in the internal dose of total arsenic in the villagers. The hair As content of each subject obtained in the present study therefore no longer represents the real internal As dose of the subject at the time of skin lesion onset or at the time of diagnosis. The 1991 survey, the first and only comprehensive field investigation ever conducted in the whole endemic area, including the target village of the present investigation, targeted only the different exposure environments and was not focused on specific individuals or subject subgroups. A retrospective assessment of the As internal load of each individual therefore remains impossible.
CONCLUSION
Synchrotron radiation techniques, such as micro-beam X-ray fluorescence (μ-XRF) and X-ray absorption fine structure (XAFS), are suitable techniques to determine arsenic species concentrations in different parts of hair and rice grain samples. When rice is stored over the open fire of high As-content coal for drying, arsenic penetrates the endosperm, the major edible part of the grain. Thus rinsing the grain with water several times will remain largely ineffective. | 2017-10-10T23:04:11.842Z | 2017-01-02T00:00:00.000 | {
"year": 2017,
"sha1": "9e6e5fa8d4004d6941c2de0f10118b35fba35821",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "213878127e27af36d77eef6053893f803b44ecaf",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
138875856 | pes2o/s2orc | v3-fos-license | Variation of energy absorption build-up factor with penetration depth for some titanium compounds
The buildup factor is an important characteristic that needs to be studied and determined prior to using a material clinically in radiation treatment and protection. Energy absorption buildup factors for some titanium compounds, namely titanium dioxide (TiO2), titanium carbide (TiC), titanium nitride (TiN) and titanium silicate (TiSi2), have been calculated using the G.P. fitting technique up to a penetration depth of 40 mean free paths (mfp) and in an energy range from 0.015 to 15.0 MeV. The variation of absorption buildup factors with penetration depth for the selected titanium compounds has been studied. It has been found that the maximum value of the energy absorption buildup factor shifts to the intermediate incident photon energy region with increasing penetration depth of the selected titanium compounds.
INTRODUCTION
Gamma rays are electromagnetic radiations with wavelengths in the range of about 10^-14 to 10^-1 m. These are the most energetic electromagnetic radiations. Due to their great penetrating power, the interactions of gamma-ray photons with matter have attained prime importance and interest. Titanium is the 9th most abundant element in nature and is noted for its high hardness, brightness and strength. It is widely used in the aerospace, sports and medicine fields. Titanium compounds are also useful in various fields, with applications in paints, food and cosmetic coloring, crayons, UV protection, lubricants, wear-resistant tools and many more. Because of these many applications of titanium compounds, an attempt has been made to study their interaction with gamma rays and hence to assess their potential for radiation shielding. Radiation physicists face the problem of leakage of radiation due to Compton multiple scattering events. This multiple scattering is the main reason for violation of the Lambert-Beer law, I = I0 e^(-μx). To account for it, a correction factor B, called the buildup factor, is used; it measures the degree to which the Lambert-Beer law is violated. The modified intensity equation then becomes I = B I0 e^(-μx), where B is the buildup factor. The buildup factor is always equal to or greater than unity. There are two types of buildup factors: the energy absorption buildup factor and the exposure buildup factor. For the energy absorption buildup factor, the quantity of concern is the energy absorbed or deposited in the material and the detector response function (DRF) is that of absorption in the material, whereas for the exposure buildup factor, the quantity of concern is the exposure and the detector response function (DRF) is that of absorption in air.
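As a simple numerical illustration of the corrected attenuation law, the sketch below compares the transmitted intensity with and without a buildup factor; the attenuation coefficient, thickness, and buildup value are arbitrary example numbers, not values from this work.

```python
import math

# Illustrative (not measured) values for a broad-beam attenuation estimate.
mu = 0.2    # linear attenuation coefficient (1/cm), example value
x = 5.0     # absorber thickness (cm), example value
I0 = 1.0    # incident intensity (arbitrary units)
B = 3.0     # buildup factor (>= 1), example value

I_narrow_beam = I0 * math.exp(-mu * x)     # Lambert-Beer law without buildup
I_broad_beam = B * I0 * math.exp(-mu * x)  # corrected law I = B*I0*exp(-mu*x)

print(f"Narrow-beam intensity: {I_narrow_beam:.4f}")
print(f"With buildup (B={B}): {I_broad_beam:.4f}")
```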
There are various methods available to compute buildup factors, such as the G.P. fitting method given by Harima et al. [1] and the invariant embedding method given by Shimizu and Hirayama [2] and Shimizu et al. [3]. The American National Standard ANSI/ANS 6.4.3 [4] (American National Standard, 1991) used the G.P. fitting method and provided buildup factor data for 23 elements, one compound and two mixtures, viz. water and air, at suitable intervals up to a penetration depth of 40 mean free paths. M.J. Berger and J. Hubbell [5] provided for the first time a database of mass attenuation coefficients and cross-sections for about 100 elements in the form of a software package named XCom, which is also capable of generating mass attenuation coefficients for compounds and mixtures. Y. Harima [6] gave a historical review and the current status of buildup factor calculations and applications for water, concrete and the elements Fe, Pb, Be, B, C, N, O, F, Na, Mg, Al, Si, P, S, Ar, K, Ca, Cu, Mo, Sn, La, Gd, W, U in the energy ranges 0.01 to 0.3 MeV and 0.5 to 10 MeV with penetration depth up to 40 mfp, using the codes ADJ-MOM, PALLAS and ASFIT. H. Hirayama & K. Shin [7] used the EGS4 Monte Carlo code to study multilayer gamma-ray exposure buildup factors up to 40 mfp for water, iron and lead at energies of 0.1, 0.3, 0.6, 1.0, 3.0, 6.0 and 10 MeV. G. S. Sidhu et al. [8] had computed energy absorption buildup factors for some biological samples, viz. cholesterol, chlorophyll, hemoglobin, muscle, tissue, cell and bone, in the energy range of 0.015 to 15.0 MeV with penetration depth up to 40 mfp, using the G.P. fitting method. Shimizu et al. [9] compared the buildup factor values obtained by three different approaches (G.P. fitting, invariant embedding and Monte Carlo methods), and only small discrepancies were observed for low-Z elements up to 100 mean free paths. K. Trots et al. [10] proposed a vector regression model for the estimation of gamma-ray buildup factors for multi-layer shields of Al, Fe, Pb, water and concrete in the energy range of 5 to 10 MeV with penetration depths of more than 10 mfp. P.S. Singh et al. [11] measured the variation of energy absorption buildup factors with incident photon energy and penetration depth for some commonly used solvents. T. Singh et al. [12] worked on the chemical composition dependence of exposure buildup factors for some polymers. After going through the above literature, it was observed that, with the ever-increasing use of gamma-ray photons in medicine and biophysics, there is a dire need for proper investigation of gamma-ray multiple scattering effects in titanium compounds.
In the present work, the multiple scattering effects of gamma rays in some titanium compounds have been studied in terms of photon interaction parameters, viz. the mass attenuation coefficient, equivalent atomic number and energy absorption buildup factor, in the energy range of 15.0 keV to 15.0 MeV and for penetration depths up to 40 mfp.
COMPUTATIONAL WORK
The computational work, i.e. the calculation of the energy absorption buildup factor, is divided into three parts. In the first step, equivalent atomic numbers are calculated for incident photon energies ranging from 15.0 keV to 15.0 MeV. The second step concerns the calculation of the G.P. fitting parameters using the equivalent atomic numbers of the selected titanium compounds, and finally, in the third step, these G.P. parameters are used to calculate the absorption buildup factors for the selected titanium compounds.
Equivalent atomic numbers (Z eq )
The equivalent atomic number is a quantity analogous to the atomic number of an element; it is the number assigned to a compound or mixture by considering the Compton multiple scattering processes.
Since the buildup factor is a direct consequence of multiple scattering, the equivalent atomic number (Zeq) is used to calculate the buildup factors. In order to calculate Zeq, the values of the Compton partial mass attenuation coefficient (μComp) and the total mass attenuation coefficient (μtotal) were first obtained in cm2/g for energies from 0.015 to 15.0 MeV using the WinXCom computer program (Gerward et al., 2001). The values of Zeq for the titanium compounds were then calculated by logarithmic interpolation:

Zeq = [Z1 (log R2 - log R) + Z2 (log R - log R1)] / (log R2 - log R1),

where Z1 and Z2 are the atomic numbers of the elements corresponding to the ratios R1 and R2 of μComp to μtotal, and R (= μComp/μtotal) is the ratio for the selected titanium compound at a particular energy, which lies between R1 and R2 such that R1 < R < R2. The G.P. fitting parameters were interpolated in the same manner:

P = [P1 (log Z2 - log Zeq) + P2 (log Zeq - log Z1)] / (log Z2 - log Z1),

where Z1 and Z2 are the atomic numbers of the elements between which the equivalent atomic number Zeq of the selected titanium compound lies, and P1 and P2 are the values of a given G.P. fitting parameter corresponding to Z1 and Z2, respectively, at a given energy.
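A minimal sketch of this logarithmic interpolation is given below. It implements the interpolation formulas reproduced above; the numerical ratio and parameter values in the example call are placeholders rather than WinXCom output.

```python
import math

def interpolate_zeq(R, R1, R2, Z1, Z2):
    """Logarithmic interpolation of the equivalent atomic number Zeq.

    R      : Compton-to-total attenuation ratio of the compound at a given energy
    R1, R2 : ratios of the two neighboring elements (R1 < R < R2)
    Z1, Z2 : atomic numbers of those elements
    """
    return (Z1 * (math.log(R2) - math.log(R)) +
            Z2 * (math.log(R) - math.log(R1))) / (math.log(R2) - math.log(R1))

def interpolate_gp_parameter(P1, P2, Zeq, Z1, Z2):
    """The same logarithmic scheme applied to a G.P. fitting parameter (b, c, a, Xk or d)."""
    return (P1 * (math.log(Z2) - math.log(Zeq)) +
            P2 * (math.log(Zeq) - math.log(Z1))) / (math.log(Z2) - math.log(Z1))

# Placeholder numbers purely for illustration:
Zeq = interpolate_zeq(R=0.85, R1=0.84, R2=0.87, Z1=14, Z2=15)
b = interpolate_gp_parameter(P1=1.20, P2=1.25, Zeq=Zeq, Z1=14, Z2=15)
print(Zeq, b)
```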
Computations of buildup factors
The computed G.P. fitting parameters (b, c, a, Xk and d) were then used to calculate the energy absorption buildup factors for the selected titanium compounds at standard incident photon energies in the range of 0.015-15.0 MeV and up to a penetration depth of 40 mfp, with the help of the G.P. fitting formula given by the following equations (Harima et al., 1986):

B(E, x) = 1 + (b - 1)(K^x - 1)/(K - 1), for K ≠ 1
B(E, x) = 1 + (b - 1) x, for K = 1

where

K(E, x) = c x^a + d [tanh(x/Xk - 2) - tanh(-2)] / [1 - tanh(-2)], for penetration depth x ≤ 40 mfp,

and x is the penetration depth in mean free paths.
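The sketch below shows how a buildup factor follows from the five G.P. parameters, implementing the equations reproduced above; the parameter values in the example call are arbitrary placeholders, not fitted values for any titanium compound.

```python
import math

def gp_buildup_factor(x, b, c, a, Xk, d):
    """Energy absorption buildup factor at penetration depth x (in mfp)
    from the G.P. fitting parameters (b, c, a, Xk, d), per Harima et al. (1986)."""
    K = c * x**a + d * (math.tanh(x / Xk - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0))
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x          # K = 1 branch
    return 1.0 + (b - 1.0) * (K**x - 1.0) / (K - 1.0)

# Example with placeholder G.P. parameters, evaluated at several depths up to 40 mfp:
for depth in (1, 5, 10, 20, 30, 40):
    print(depth, gp_buildup_factor(depth, b=1.3, c=1.2, a=-0.05, Xk=14.0, d=0.02))
```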
RESULTS AND DISCUSSION
Figs. 3.1-3.4 show the variation of the energy absorption buildup factor with penetration depth for the titanium compounds at incident photon energies of 0.05, 0.5, 1, 5, 10 and 15 MeV. For all the titanium compounds, the energy absorption buildup factor increases with increasing penetration depth of the material. The energy absorption buildup factor for the selected titanium compounds in the energy region of 0.015-15.0 MeV up to a penetration depth of 40 mean free paths is always greater than one. This is because, with increasing penetration depth, the thickness of the interacting material increases, which increases the number of scattering events within the selected titanium compounds and hence results in larger energy absorption buildup factor values. At large penetration depths, an almost similar trend is observed for the energy absorption buildup factor with incident photon energy. The dependence of the energy absorption buildup factor on penetration depth is discussed as follows. Considering the behavior at 1, 5, 10, 20, 30 and 40 mean free paths, the rate of increase of the energy absorption buildup factor for the titanium compounds was found to be slow at lower and higher incident photon energies, while a rapid increase was observed in the intermediate energy region. The slower rate of increase in the lower and higher energy regions is due to the dominance of photon absorption processes in these regions (the photoelectric effect in the lower energy region and pair production in the higher energy region), which result in the complete absorption of gamma photons in the interacting medium, whereas in the intermediate energy region the dominant process is Compton scattering, which results only in the degradation of the photon energy. Hence, there is a finite possibility of the photon reaching the detector even for large penetration depths, and the maximum violation of the Lambert-Beer equation is observed. Further, the rate of increase of the energy absorption buildup factor with penetration depth is more rapid up to a certain incident photon energy (0.1 MeV), where the Compton scattering process is most dominant, and beyond this the rate of increase of the energy absorption buildup factor becomes slower for higher energies.
Figure 3.1: Variation of energy absorption buildup factor with penetration depth for titanium carbide.
Figure 3.2: Variation of energy absorption buildup factor with penetration depth for titanium nitride.
Figure 3.3: Variation of energy absorption buildup factor with penetration depth for titanium dioxide.
Figure 3.4: Variation of energy absorption buildup factor with penetration depth for titanium silicate.
| 2018-12-22T09:44:54.085Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "879fce37bd1ab7fde86c1ff66e1583e40893c996",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/20/matecconf_icaet2016_05005.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "879fce37bd1ab7fde86c1ff66e1583e40893c996",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
4672154 | pes2o/s2orc | v3-fos-license | Active monitoring of formaldehyde diffusion into histological tissues with digital acoustic interferometry
Abstract. The preservation of certain labile cancer biomarkers with formaldehyde-based fixatives can be considerably affected by preanalytical factors such as quality of fixation. Currently, there are no technologies capable of quantifying a fixative's concentration or the formation of cross-links in tissue specimens. This work examined the ability to detect formalin diffusion into a histological specimen in real time. As formaldehyde passively diffused into tissue, an ultrasound time-of-flight (TOF) shift of several nanoseconds was generated due to the distinct sound velocities of formalin and exchangeable fluid within the tissue. This signal was resolved with a developed digital acoustic interferometry algorithm, which compared the phase differential between signals and computed the absolute TOF with subnanosecond precision. The TOF was measured repeatedly across the tissue sample for several hours until diffusive equilibrium was realized. The change in TOF from 6-mm-thick ex vivo human tonsil fit a single-exponential decay (adjusted R^2 ≥ 0.98) with rate constants that varied drastically across the sample, between 2 and 10 h (σ = 2.9 h), due to substantial heterogeneity. This technology may prove essential to personalized cancer diagnostics by documenting and tracking biospecimen preanalytical fixation, guaranteeing their suitability for diagnostic assays, and speeding the workflow in clinical histopathology laboratories.
Introduction
Histological staining of tissue samples obtained from resections or biopsies is the gold standard for detecting and identifying cancer or disease states. Modern histology techniques are built around the concept of fixing tissues with chemicals that cross-link the biostructure of the tissue. This process inhibits metabolism to prevent specimen degradation and preserves biomolecules and tissue structure. 1,2 The most commonly used fixative in clinical settings is 10% neutral buffered formalin (NBF), which is an aqueous solution of 3.7% w/v formaldehyde in a buffer. 3 Currently, histology laboratories process tissue samples in NBF for various periods of time ranging from hours to several days. 4 In these unstandardized methods, sample quality is empirically determined by a pathologist who examines hematoxylin and eosin (H&E) stains under a microscope for manifestations of improper fixation, although no quantitative or objective data about fixation quality is available. This subjectivity and lack of quality data are particularly worrisome considering that the majority of mistakes with clinical samples (up to two-thirds) occur during the preanalytical phase, meaning that significant errors in diagnosis are potentially being made because of our current inability to detect faulty fixation. 5,6 New technologies are therefore needed to monitor and optimize tissue fixation in real time so that each tissue sample is properly fixed and the most accurate diagnosis is delivered from every sample, from every patient, every time.
Most fixation protocols involving formalin employ the fixative at room temperature, although these protocols can have durations spanning from several hours to days depending on a tissue's thickness, the types of tissue present within a sample, and preanalytical variables such as reagent purity. As demand intensifies for reduced turnaround times, more rapid protocols have been introduced. 7-9 One such technology is simply to raise the temperature of the fixative to increase the cross-linking rate. While this approach can reduce turnaround time, the use of elevated temperatures alone has led to many reports of unsatisfactory tissue morphology and variability in other molecular assays, including routine immunohistochemistry (IHC) stains. 9 Alternatively, processing with microwaves has exhibited progress recently, but the technique can cause uneven fixation and tissue damage. 10,11 Another recent method reported to have superior tissue fixation qualities first incubates tissue in cold NBF, followed by a short incubation in warm NBF. The cold step suppresses enzymatic actions associated with analyte degradation while simultaneously facilitating diffusion of formaldehyde throughout the tissue, and the ensuing warm step rapidly forms formaldehyde linkages to complete the fixation process. 12,13 Because sparse cross-linking takes place at cold temperatures, this method absolutely requires sufficient concentrations of formaldehyde to have diffused into the tissue so that the subsequent warm step does not simply heat tissue in the absence of fixative. This preservation method was shown to preserve protein epitopes with notorious preanalytical sensitivity, such as phosphorylated protein kinase B (pAKT) and phosphorylated epidermal growth factor receptor, which are not currently clinically evaluated but are known to be key biomarkers associated with several forms of cancer. [14][15][16][17][18] Additionally, clinically assessed biomarkers like estrogen receptor, Ki-67, hormone receptor, and human epidermal growth factor receptor 2 are also known to be sensitive to improper fixation, indicating a broad applicability of this technique. [19][20][21] At present, there are no technologies capable of quantifying the concentration of formaldehyde or the subsequent formation of cross-links in tissue specimens. Diffusion of an exogenous chemical into a tissue is a complicated process that is markedly influenced by temperature, tissue heterogeneity, the molecular size and shape of the penetrating chemical, and the specific type and relative composition of the tissue sample. 22 Some researchers have soaked tissues in radioactive formaldehyde and used photography film as a measure of diffusion rates. 23,24 However, long exposure times and low radioactivity actually incorporated into the tissue generate confusing and unreliable results, and a radiologic technique would be incompatible with routine application in the clinical laboratory. Others have used ultrasound (US) detection to investigate cross-linking by comparing the acoustic properties of samples before and after room temperature fixation. 25 Ultimately, neither of these techniques would enable real-time monitoring when changes could be implemented to guarantee proper tissue fixation and ideal biomarker preservation. For these reasons, we sought to develop a real-time method of monitoring formaldehyde diffusion into tissue.
We report here the development of a semiautomated real-time diffusion monitoring technology based on measuring a change in the speed of US waves through ex vivo tissues. 26 Time-of-flight (TOF) measurements were acquired as formaldehyde passively diffused into tissues and changed their physical composition in a way that altered the overall acoustic transit time. The US thus accumulated a transit time differential proportional to the amount of fluid exchange (i.e., the formaldehyde concentration), which was resolved with digital acoustic interferometry with subnanosecond sensitivity. With this technology, we were able to track and quantify formaldehyde diffusion dynamically until the tissue and bulk fixative became isotonic, resulting in maximum formaldehyde concentration in the tissue. Ultimately, this technology could be integrated into commercial tissue processors to standardize and optimize tissue fixation and thereby speed the diagnosis and classification of pathologic processes in tissue by rapidly preserving critical yet labile biomarkers.
Theory
Pairs of 4 MHz focused transducers were spatially aligned and a sample was placed close to their common foci. One transducer, designated the transmitter, sent out an acoustic pulse that traversed both the coupling fluid, which was typically formalin, and the tissue sample before being detected by the receiving transducer. The TOF through only the formalin (reference channel) was subtracted from the TOF with the tissue sample present to isolate the phase retardation from the tissue and to compensate for environmentally induced fluctuations in the reagent (Fig. 1). This process was repeated to detect the changing transit time through the tissue during passive diffusion.
Due to the distinct sound velocities of formalin and exchangeable fluid within the sample, as formaldehyde diffused into the tissue, the overall transit time was slightly altered. The change in TOF from the tissue (ΔTOF_tissue) can be written as

ΔTOF_tissue(t) = ∫_0^D { 1 / [r_o + ρ (c(r, t)/c_bulk)(r_bulk - r_o)] - 1/r_o } dr,  (1)

where D is the tissue's diameter, ρ is the tissue's porosity, r_bulk is the sound velocity of the bulk reagent, r_o is the sound velocity of the undiffused tissue, and c is the concentration of the exogenous cross-linking agent, which varies in time (t) and space (r) and is normalized by the bulk concentration c_bulk. The change in the speed of sound was scaled by the porosity of the tissue, which varied between 0 and 1 and represents the volume fraction of the tissue that was eligible for diffusion. Equation (1) models the speed of sound in the tissue as a linear combination of the tissue's original sound velocity and the sound velocity of the bulk fluid. The detected change in TOF will thus be inversely proportional to the speed of sound differential of the two fluids. As an example, an US pulse will take 2666.6 ns to traverse a 4-mm specimen whose sound velocity is 1500 m/s. Assuming that the bulk media's sound velocity is 10% higher and the tissue has a porosity of 10%, after reaching equilibrium an US pulse will take 2640.5 ns to traverse the tissue, generating 26 ns of TOF differential when osmotic equilibrium is achieved. Equation (1) thus predicts that diffusion will change the acoustic transit time across a tissue on the order of tens of nanoseconds.
Fig. 1 Schematic representation of the acoustic TOF diffusion monitoring system. Transit times of acoustic pulses traversing the formalin and tissue (top row) and a reference acquisition through only the formalin (bottom row) were calculated. Reference channel subtraction eliminated environmentally induced noise and isolated the TOF contribution from the tissue.
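The worked example above can be reproduced with the short sketch below, which assumes the linear velocity-mixing model of Eq. (1) with a spatially uniform concentration at equilibrium; the input numbers are those quoted in the text.

```python
# Reproduces the equilibrium TOF example from the text (values taken from the text).
D = 4e-3             # tissue thickness (m)
r_o = 1500.0         # sound velocity of undiffused tissue (m/s)
r_bulk = 1.10 * r_o  # bulk reagent velocity, assumed 10% higher
rho = 0.10           # porosity (volume fraction open to diffusion)

tof_initial = D / r_o                       # ~2666.7 ns before diffusion
v_equilibrium = r_o + rho * (r_bulk - r_o)  # linear velocity mixture at equilibrium
tof_equilibrium = D / v_equilibrium         # ~2640.3 ns at equilibrium

print(f"Initial TOF:     {tof_initial * 1e9:.1f} ns")
print(f"Equilibrium TOF: {tof_equilibrium * 1e9:.1f} ns")
print(f"Delta TOF:       {(tof_initial - tof_equilibrium) * 1e9:.1f} ns")  # ~26 ns
```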
Experimental Setup
A commercial dip-and-dunk tissue processor (Lynx II, Electron Microscopy Sciences) was retrofitted with custom acoustic hardware; see Fig. 2(a). A mechanical head was designed in Solidworks® to fit around and seal a standard Lynx II reagent canister. A cassette holder was designed for use with a biopsy-compatible cassette (Leica Biosystems, CellSafe Biopsy Capsules) to securely hold smaller tissue samples (dia. ≤4 mm) and prevent them from moving. Alternatively, a separate holder was designed for a standard-sized cassette for larger specimens up to 7 mm thick. The cassette holder held the tissue perpendicular to the propagation axis of the US and was attached to a vertical translation arm that enabled the tissue to be spatially mapped by changing where the acoustic beams traversed the tissue sample. Two metal brackets were mounted on either side of the tissue cassette [Fig. 2(b)]. A two-dimensional (2-D) scan was completed by calculating the TOF between all five transducer pairs. The cassette was then translated ∼1 mm vertically and TOF values were calculated at the new position. This process was repeated with 13 or 21 vertical acquisitions for the biopsy and standard-sized tissue cassettes, respectively. The lateral locations of the TOF acquisitions from all transmitted-received transducer pairs are displayed in Fig. 2(d). After each translation, the orthogonal reference sensors measured the TOF through the bulk fluid to account for thermal fluctuations. Additionally, at the end of a 2-D acquisition, the cassette was raised out of the path of the transducers and a second reference acquisition was acquired. These TOF values were used to detect spatiotemporal variations in the fluid. The process was repeated over the course of the experiment until the tissue reached equilibrium. Acquisition of a 2-D TOF dataset required about 100 s, although without spatial scanning, TOF data could be acquired at 3 Hz.
Data Acquisition and Time-of-Flight Calculation Algorithm
The TOF measurements were calculated using a developed postprocessing algorithm. Initially, the transmitting transducer was set with a programmable waveform generator (AD5930, Analog Devices) to transmit a sinusoidal signal with a frequency of 3.7 MHz for 600 μs. That pulse was detected by the receiving transducer after traversing the fluid and tissue [Fig. 3(a)]. The received and transmitted sinusoids were compared electronically with a digital phase comparator (AD8302, Analog Devices). The phase comparator had an integration time of 165 μs and thus required the transmitted and received pulses to temporally coexist for at least this long to determine the phase relationship between the two signals. After the output of the phase comparator stabilized, it was queried 100 times with an 8-bit analog-to-digital converter (AT91SAM, Atmel), and the average was recorded. The output voltage from the phase comparator had a standard deviation of less than 2 mV, which was significantly less than the bit depth. In this manner, the phase relationship between transmitted and received signals was calculated for a sinusoidal signal with a frequency of 3.7 MHz. This process was continually repeated by increasing the frequency of the transmitted sinusoidal signal and again recording the phase relationship between the transmitted and received pulses. The phase relationship was calculated for acoustic signals with frequencies ranging from 3.7 to 4.3 MHz, with a discrete phase value recorded every 600 Hz (e.g., ν = 3.7, 3.7006, 3.7012, ..., 4.3 MHz). This range was chosen because the output of the transducers was sufficiently large between 3.7 and 4.3 MHz. The output of this recording, referred to as a phase-frequency sweep, repeated itself every time the accumulated phase differential completed a cycle. An experimentally acquired phase-frequency sweep is displayed in Fig. 3(b), where 0 volts corresponds to a phase difference of ±π radians and 1.8 volts corresponds to 0 radians. The voltage from the phase comparator was converted to a temporal phase shift, referred to as the experimental phase (φ_exp). Next, a brute-force simulation was used to calculate phase-frequency sweeps for different TOF values. Candidate temporal phase values, as a function of the input sinusoid frequency, were calculated according to

φ_cand = | TOF_cand - T · rnd(TOF_cand / T) |,  (2)

where TOF_cand is the candidate TOF in nanoseconds, T is the period of the input sinusoid in nanoseconds, rnd is the "round to the nearest integer" function, and |...| is the absolute value operation. For a given candidate TOF and frequency value (i.e., period), the term on the right represents how long it takes for the nearest number of cycles to occur. This value was subtracted from TOF_cand to calculate the temporal phase, into or up to the next complete cycle. Phase values were thus computed for multiple candidate TOF values, initially ranging from 10 to 30 μs with 200-ps spacing. The error between the experimental and candidate frequency sweeps was calculated in a least-squares sense for each candidate TOF value as

Error(TOF_cand) = (1/N) Σ_i [φ_exp(ν_i) - φ_cand(ν_i)]²,  (3)

where N is the total number of frequencies in the sweep. A normalized error function is displayed in Fig. 4(a) and resembles an optical interferogram. For example, each feature has a width of one acoustic period (T = 1/4 MHz = 250 ns). The maximum of the error function indicates that the candidate phase-frequency sweep had equal wavelength but was out of phase with the experimental phase-frequency sweep.
Conversely, when the error was minimized, the two were completely harmonized; thus, the reconstructed TOF was registered as the global minimum of the error function. The technique of digitally comparing acoustic waves produced high-precision results due to the sharpness of the center trough [Fig. 4(b)]. The error function had a minimum value of 0.0033 at 18175.56 ns, indicating exceptionally well-matched candidate and experimental phase-frequency sweeps. Note that the precision of this method could be increased by filtering the error function, although no signal processing was performed at this step.
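A simplified software model of this candidate-matching step is sketched below. It assumes the folded-phase form of Eq. (2) and a synthetic "experimental" sweep generated from a known transit time so the reconstruction can be checked; it illustrates the algorithm rather than reproducing the instrument firmware.

```python
import numpy as np

def folded_phase(tof_ns, freqs_mhz):
    """Temporal phase (ns) into/up to the nearest complete cycle, per Eq. (2)."""
    periods = 1e3 / freqs_mhz                    # period in ns for frequency in MHz
    return np.abs(tof_ns - periods * np.round(tof_ns / periods))

# Frequency sweep from 3.7 to 4.3 MHz in 600-Hz steps, as described in the text.
freqs = np.arange(3.7, 4.3 + 1e-9, 0.0006)

true_tof = 18175.56                              # ns, synthetic "experimental" transit time
phi_exp = folded_phase(true_tof, freqs)

# Brute-force candidate search over a 10-30 us range with 200-ps spacing.
candidates = np.arange(10_000.0, 30_000.0, 0.2)
errors = np.array([np.mean((phi_exp - folded_phase(c, freqs)) ** 2) for c in candidates])

best = candidates[np.argmin(errors)]             # global minimum of the error function
print(f"Reconstructed TOF: {best:.2f} ns (true {true_tof:.2f} ns)")
```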
Environmental Mitigation and Data Processing
The speed of sound in fluid has a large temperature dependence that is exacerbated because the absolute TOF is an integrated signal over the path length between the transducers. For instance, the time for US to traverse 1 mm of 4°C water will change 2.3 ns/°C. Two mechanisms were employed to mitigate these environmental fluctuations: a proportional-integral-derivative (PID) algorithm on the hardware used to cool the bulk fixative solution, and TOF reference compensation through the bulk media. The PID temperature control was based on a pulse width modulation (PWM) algorithm that continually read the temperature of the reagent from a thermistor (Omega TH-10-44007) and, once per second, adjusted how long the cooling hardware was on in 392-μs increments. The PWM algorithm was found to stabilize the temperature of the fluid with a standard deviation of roughly 0.05°C about the set point. Further correction for temperature variance was realized by reading the TOF through only the bulk reagent. This TOF value was subtracted from the TOF through the reagent and tissue to mitigate contributions from environmentally induced fluctuations in the fluid. Best results were achieved with relatively slow low-amplitude transients in the fluid, so the PWM algorithm was programmed to stabilize low-frequency temperature fluctuations while reference compensation eliminated high-frequency variations.
TOF acquisitions with an empty cassette are shown in Fig. 5. The absolute TOF of the formalin slowly changed 40 ns over the 8-h experiment due to thermal drift within the fluid. However, after reference compensation, the change in TOF (ΔTOF) was essentially flat and only varied ±500 ps (σ = 247 ps) from its baseline of 123 ns, which was due to the retardation from the plastic mesh in the cassette. Low-order median and smoothing filters, in addition to a third-order Butterworth filter, were used to eliminate stochastic noise while preserving the low-frequency components from the tissue. Filtering typically reduced the noise by an additional 25% to 50%.
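The reference compensation and filtering steps can be sketched in a few lines of Python. This is only an illustrative outline, not the original implementation: the sampling rate, cutoff frequency, and kernel size below are assumptions chosen to mimic the low-order median and smoothing filters plus a third-order Butterworth low-pass described above.

```python
import numpy as np
from scipy.signal import medfilt, butter, filtfilt

def reference_compensate(tof_tissue_ns, tof_fluid_ns):
    """Subtract the bulk-reagent TOF so thermal drift common to both paths cancels,
    leaving the tissue-related change in TOF (dTOF)."""
    return np.asarray(tof_tissue_ns) - np.asarray(tof_fluid_ns)

def denoise_dtof(dtof_ns, fs_hz=1.0, cutoff_hz=0.01, median_kernel=5):
    """Low-order median filter followed by a third-order Butterworth low-pass,
    suppressing stochastic noise while keeping the slow diffusion signal."""
    x = medfilt(np.asarray(dtof_ns, dtype=float), kernel_size=median_kernel)
    b, a = butter(3, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, x)
```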
Histologic Imaging and Image Processing
Calu-3 mouse xenograft tumors were harvested from severe combined immunodeficiency (SCID) mice. Calu-3 is a human airway carcinoma-derived cell line that overexpresses pAKT.
Animals were cared for in accordance with standards established by the International Association for the Assessment and Accreditation of Laboratory Animal Care, and experiments were approved by Roche's Institutional Animal Care and Use Committee. For IHC analyses, tissue samples went through routine processing on a standard tissue processor (Leica ASP300). They were embedded in paraffin, sectioned at a thickness of 4 μm, and placed on microscope slides. Samples were stained with an antibody to the phosphorylated form of the AKT protein that is highly fixation sensitive. 16,17 Imaging of each slide was performed on a microscope (Nikon Eclipse 80i) with a 2× objective (Nikon Plan Apo). Images from the microscope were compensated for the illumination pattern of the light source by dividing the transmitted image by an a priori flat image acquired with no tissue sample. The intensity image was log transformed and spectrally unmixed to quantify the relative amount of pAKT at each pixel. A reference tissue stained for pAKT was used to calibrate the color spectrum of staining. A segmentation algorithm was used to identify the tissue border, and the Euclidean distance to the nearest edge pixel was calculated for each pixel within the tissue. Average stain level (in arbitrary units) as a function of distance to the edge was then assessed for each sample versus time in NBF.
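The edge-distance analysis lends itself to a short illustration. The sketch below assumes a boolean tissue mask and a flat-field reference image are available; it performs illumination compensation, a log transform, and averages the signal in bins of Euclidean distance to the tissue border (the spectral-unmixing step is omitted, and the bin width is an arbitrary choice).

```python
import numpy as np
from scipy import ndimage

def stain_vs_edge_distance(intensity, flat_field, tissue_mask, bin_width_px=10):
    """Average optical density as a function of distance to the nearest tissue edge."""
    corrected = intensity / flat_field                          # illumination compensation
    optical_density = -np.log(np.clip(corrected, 1e-6, None))   # log transform
    dist = ndimage.distance_transform_edt(tissue_mask)          # distance to nearest edge pixel
    bins = np.arange(0, dist[tissue_mask].max() + bin_width_px, bin_width_px)
    idx = np.digitize(dist[tissue_mask], bins)
    values = optical_density[tissue_mask]
    profile = np.array([values[idx == i].mean()
                        for i in range(1, len(bins)) if np.any(idx == i)])
    return bins, profile
```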
Time of Flight from Ex Vivo Tissues
The TOF system was used to monitor changes within a 6-mm thick human tonsil placed in 6°C 10% NBF. The reference-compensated TOF signals from different regions are shown in Fig. 6. The change in TOF displayed a monotonically decreasing signal that was best fit by a single-exponential function of the form given in Eq. (5), ΔTOF(t, r⃗) = C(r⃗) + A(r⃗) exp(−t/τ(r⃗)), where C is a constant offset in nanoseconds, A is the amplitude of the decay in nanoseconds, τ is the decay constant in hours, and the spatial dependence (r⃗) is explicitly stated. The amplitude and decay rate were quantified by fitting to Eq. (5) using nonlinear regression. The TOF signal from the tissue's periphery exhibited a ΔTOF change of 8 ns and reached equilibrium in ∼8 h (top plot). Just 1 mm toward the center, the ΔTOF had a larger decay amplitude of 27 ns and required nearly 16 h to reach equilibrium (middle plot). The center of the tissue had the largest amplitude and was still changing after 16 h (bottom plot). All three signals had adjusted R^2 values greater than 0.99 and standard errors (SEs) less than 500 ps. The tissue's center would be expected to experience slower diffusion because it had less surface area exposed to the bulk fixative, and an increased amplitude because the tissue sample used was thicker toward the middle.
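A minimal nonlinear-regression sketch of this fit is shown below, assuming the single-exponential parameterisation ΔTOF(t) = C + A·exp(−t/τ) given above (the sign of A simply follows the direction of the measured change). The initial guesses and variable names are illustrative, not taken from the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def dtof_model(t_hours, C, A, tau):
    """Single-exponential model: dTOF(t) = C + A * exp(-t / tau)."""
    return C + A * np.exp(-t_hours / tau)

def fit_decay(t_hours, dtof_ns):
    """Nonlinear regression for offset C (ns), amplitude A (ns) and decay constant tau (h)."""
    t = np.asarray(t_hours, dtype=float)
    y = np.asarray(dtof_ns, dtype=float)
    p0 = (y[-1], y[0] - y[-1], 2.0)              # rough initial guesses
    popt, pcov = curve_fit(dtof_model, t, y, p0=p0)
    perr = np.sqrt(np.diag(pcov))                # standard errors of the fitted parameters
    return popt, perr
```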
To verify that the change in TOF derived contrast from reagent diffusion, and not from an ancillary effect such as cross-linking, an additional experiment was performed by incubating a sample in differing concentrations of formalin. This experiment exploited the fact that the TOF monotonically decreased versus percent formaldehyde [see Fig. 7(a)] to test the reversibility of the TOF signal. The hypothesis of this experiment was that chemical modification (cross-linking) within the tissue would lead to irreversible TOF changes in conditions where cross-links cannot be undone, whereas simple diffusion of formaldehyde into and out of tissue would produce TOF changes exhibiting minimal hysteresis. A 4-mm human tonsil was therefore scanned in a series of cold NBF solutions: 10% formalin, 40% formalin, 10% formalin, and finally 40% formalin over a multiday experiment. The TOF signal, averaged over 12 recorded positions, for each stage is shown in Fig. 7(b). Initially, the ΔTOF decreased about 10 ns as the formalin penetrated into the tissue (see blue curve). An immediate and larger change in the ΔTOF was observed when this tissue was placed into 40% formalin (see magenta curve). Conversely, when the tissue was returned to 10% formalin, the ΔTOF increased (i.e., the speed of sound slowed), but the ΔTOF almost entirely reverted to its lower value when returned to 40% formalin (see red and green curves). At diffusive equilibrium with 10% and 40% formalin, the tissue sample was 4 to 9 ns and 30 to 37 ns faster than undiffused tissue, respectively. Additionally, the low spatial variability in each TOF signal (σ = 2 to 7 ns) provides evidence that the signal was not merely detecting tissue deformations such as expansions or contractions but rather identifying a continuous physical effect occurring throughout the specimen. The results from this experiment were therefore consistent with diffusion of formaldehyde into or out of the tissue in terms of both the polarity and magnitude of each TOF signal.
Quantification of Diffusion Variability
The TOF system was used to characterize formaldehyde diffusion into tonsil samples 2, 4, and 6 mm thick [Fig. 8(a)]. As the size of the sample increased, the magnitude of the TOF shift also increased. Decay amplitudes were 5.0, 12.06, and 24.95 ns for 2-, 4-, and 6-mm thick samples, respectively. This was expected because thicker samples had larger volumes of fluid to exchange, resulting in a changed speed of sound over a larger distance and thus a larger cumulative change in TOF. Furthermore, diffusion time also increased dramatically with tissue thickness. The decay constants were 0.55, 2.5, and 6.6 h for 2-, 4-, and 6-mm samples, respectively. This increase in diffusion time was predicted by Fick's second law, which, to a first-order approximation, predicts a squared dependence between particle penetration depth and time.
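As a rough, back-of-the-envelope check of this scaling (not part of the original analysis), the measured decay constants can be compared against the τ ∝ L² expectation implied by Fick's second law:

```latex
\tau \propto L^{2}
\;\Rightarrow\;
\frac{\tau_{4\,\mathrm{mm}}}{\tau_{2\,\mathrm{mm}}} \approx 4,
\qquad
\frac{\tau_{6\,\mathrm{mm}}}{\tau_{2\,\mathrm{mm}}} \approx 9,
\qquad\text{versus measured}\qquad
\frac{2.5}{0.55} \approx 4.5,
\qquad
\frac{6.6}{0.55} \approx 12.
```

The agreement is only approximate, which is consistent with the first-order nature of the scaling argument and with the tissue heterogeneity discussed below.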
The scanning capability of this technology also enabled probing the spatial variability of formaldehyde concentration changes within the tissue. To examine this, a 6-mm tonsil biopsy core, ∼10 mm in length, was placed into 6°C 10% formalin and scanned with the TOF system. Decay constants from the ΔTOF throughout the sample are displayed in Fig. 8(b). In general, the center portion of the sample had slower diffusion rates (larger decay constants) than the end portions. This is likely because the ends of the tissue had more surface area for active fluid exchange. However, significant differences in the diffusion rates in the middle of the sample were observed (4 to 10 h), despite all regions having nearly equal exposed surface area. These large differences in diffusion rates demonstrate how variable fixative diffusion rates can be and why monitoring diffusion could be critical for ensuring successful preservation in all samples.
In a final experiment, our detection system was used to visualize diffusion over time in a 6-mm tonsil. The TOF trends for each spatial location were fit to Eq. (1) and fit amplitudes and decay constants were interpolated to create 2-D mappings of the spatially dependent diffusion process. The time dependence of the diffusion process can be seen in Fig. 9(a), which displays a photograph of the tissue superimposed with contour lines indicating how long regions take to reach 63% of their maximum formaldehyde concentration. Conversely, Fig. 9(b) depicts a photograph of the tonsil sample overlaid with contour lines labeling what percentage of diffusion had yet to occur after 5 h. A large dependence on distance from the tissue edge was observed because formaldehyde penetrated from the outer surfaces of the tissue toward the interior. However, a large degree of tissue heterogeneity was also observed. For example, the region of tissue at x ¼ 6 mm, y ¼ 0 had a decay constant similar to the tissue's center, likely resulting from physical differences in tissue microheterogeneity or thickness.
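A possible outline of the 2-D mapping step is given below: per-position decay constants are interpolated onto a regular grid and drawn as contour lines marking the time to reach roughly 63% of the final formaldehyde change. The grid spacing and interpolation method are assumptions; the original interpolation scheme is not specified in the text.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

def diffusion_map(xy_mm, tau_hours, grid_step_mm=0.25):
    """Interpolate per-position decay constants (hours) onto a regular grid and
    overlay contour lines; each contour marks the time to reach ~63% of the
    final formaldehyde concentration change at that location."""
    xy = np.asarray(xy_mm, dtype=float)
    xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), grid_step_mm)
    ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), grid_step_mm)
    gx, gy = np.meshgrid(xs, ys)
    tau_grid = griddata(xy, np.asarray(tau_hours, dtype=float), (gx, gy), method="cubic")
    cs = plt.contour(gx, gy, tau_grid)
    plt.clabel(cs, fmt="%.1f h")
    plt.xlabel("x (mm)")
    plt.ylabel("y (mm)")
    return tau_grid
```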
Biomarker Staining Results
Given that fixatives infiltrate tissue from the edges and outer surfaces, we sought to investigate this effect by observing the preservation of analytes that are especially sensitive to the quality of fixation. 16,27,28 Calu-3 mouse xenograft tumors were harvested from SCID mice and sliced to no more than 4 mm thick. Calu-3 is a human airway carcinoma-derived cell line that overexpresses pAKT. Samples were placed in cold formalin, with less than 10 min of cold ischemia, for 1 to 3 h before being cross-linked in heated formalin for 2 h. With only 1 or 2 hours of diffusion time in cold formalin, the tissues were almost completely devoid of stain toward their interiors [ Fig. 10(a)]. More uniform staining was observed throughout both samples that were subjected to 3 h of cold formalin, although the center of these samples continued to show less pAKT than was evident at the periphery.
To quantify these subjective observations, image processing was used to measure the relative concentration of pAKT versus distance to the nearest edge of the tissue. The staining intensity functions for samples exposed to formalin for 1, 2, and 3 h are plotted in Fig. 10(b), where the error bar width represents the standard deviation of the two images for each diffusion time. Staining intensity substantially increased with longer diffusion times. Furthermore, augmented stain intensities were observed at all radial edge distances as diffusion time increased, demonstrating that longer diffusion times are necessary to preserve pAKT at the center and periphery of the tissues. On average, samples exposed to formalin for 3 h preserved 48% and 117% more pAKT than 2-h and 1-h samples, respectively [Fig. 10(c)]. We hypothesize that the gradient in pAKT signal was a consequence of inadequate formaldehyde diffusion into the specimen, resulting in diminished preservation of this protein. This result would indicate that pAKT preservation is indeed largely dependent on the localized concentration of formaldehyde in the tissue.
Discussion
Previous researchers used US technology to interrogate tissue specimens before and after days to weeks of formalin fixation. 25,29 However, these techniques lacked the ability to distinguish the effects of cross-linking and reagent diffusion and had coarse temporal resolution, which prevented measuring the rate of these processes. In the present work, tissue samples were placed in cold formalin to suppress cross-linking, thus enabling formaldehyde molecules to diffuse throughout the tissue before cross-link formation was completed with heat. Due to the distinct sound velocities of formalin and free fluid within the tissue, as the cross-linking agent penetrated the tissue, the overall transit time changed relative to the fixative's concentration, generating tens of nanoseconds of acoustic phase retardation. This small time interval was resolved with a developed TOF calculation algorithm capable of resolving temporal differentials on the order of hundreds of picoseconds. The measurement was sensitive and fast enough to actively monitor diffusion in real time. The change in TOF through tissue regularly increased when the tissue was incubated in a solution with a slower speed of sound, and conversely decreased when the tissue was incubated in a solution with a faster acoustic velocity. The reversibility of the TOF's amplitude and polarity in response to different formalin concentrations was consistent with diffusion being the primary phenomenon observed and demonstrated the signal was not significantly impacted by cross-link formation. Additionally, tissue sample decay amplitudes, decay rates, and polarities were all consistent with expectations from Eq. (1) and Fick's law. With this in mind, we conclude that the TOF signal was primarily, if not completely, derived from passive diffusion and the resultant change to the tissue's cumulative speed of sound and not a secondary effect such as tissue expansion/contraction or cross-linking activity.
With this acoustic TOF technology, one can now study and quantify the spatially and temporally varying diffusion process in tissues undergoing fixation. Active monitoring of fixative concentration could be used to ensure the presence of sufficient fixative throughout a sample despite large disparities between tissue types, tissue heterogeneity, or intersample variability. In particular, one might expect significant deviation in diffusion rates between different types of tissues due to their unique properties and compositions. Additionally, because TOF is an integrated signal, this approach could be used to quantify diffusion into a sample composed of multiple distinct tissue types. This technique could be instrumental to standardizing and optimizing tissue fixation by ensuring all tissues receive the optimal amount of cross-linking agent and documenting the preanalytical processes each sample was subjected to. Furthermore, it could be critical to the preservation of certain labile biomarkers that are highly sensitive to the local fixative concentration in the tissue. As an example, the preservation of pAKT was shown to be highly dependent on diffusion time. Verification of adequate fixative across all areas of a clinical tissue sample with this technique would enable a clinician to interpret a loss of pAKT staining as a true biologic phenomenon rather than a preanalytical artifact. In future work, we will therefore study the diffusion rates of formaldehyde in other clinically and biologically relevant tissues, as well as the TOF characteristics of other steps in tissue processing protocols, to better understand which diagnostic assays will most benefit from standardized and quantifiable tissue fixation. | 2018-04-03T01:18:37.079Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "531b7020e4643b20fa862687a89046a532a00fb7",
"oa_license": "CCBY",
"oa_url": "https://www.spiedigitallibrary.org/journals/Journal-of-Medical-Imaging/volume-3/issue-1/017002/Active-monitoring-of-formaldehyde-diffusion-into-histological-tissues-with-digital/10.1117/1.JMI.3.1.017002.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9ca9a5a02cd6c16a45f9018e95d75a2bd1d4ea4f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Engineering"
]
} |
32319377 | pes2o/s2orc | v3-fos-license | HIV-1 DNA predicts disease progression and post-treatment virological control
In HIV-1 infection, a population of latently infected cells facilitates viral persistence despite antiretroviral therapy (ART). With the aim of identifying individuals in whom ART might induce a period of viraemic control on stopping therapy, we hypothesised that quantification of the pool of latently infected cells in primary HIV-1 infection (PHI) would predict clinical progression and viral replication following ART. We measured HIV-1 DNA in a highly characterised randomised population of individuals with PHI. We explored associations between HIV-1 DNA and immunological and virological markers of clinical progression, including viral rebound in those interrupting therapy. In multivariable analyses, HIV-1 DNA was more predictive of disease progression than plasma viral load and, at treatment interruption, predicted time to plasma virus rebound. HIV-1 DNA may help identify individuals who could safely interrupt ART in future HIV-1 eradication trials. Clinical trial registration: ISRCTN76742797 and EudraCT2004-000446-20 DOI: http://dx.doi.org/10.7554/eLife.03821.001
Introduction
Renewed interest in exploring avenues for curing Human Immunodeficiency Virus Type 1 (HIV-1) infection (Henrich et al., 2013;Persaud et al., 2013;Saez-Cirion et al., 2013;Denton et al., 2014;Tebas et al., 2014) has resulted in the investigation of interventions to eradicate cells in which HIV-1 persists despite antiretroviral therapy (ART). Once HIV-1 has infected a cell and integrated its genome into the cellular DNA, that cell may revert to a resting state, only producing replication competent virions when activated at a later date. These cells have been labeled the 'HIV reservoir'. There is, however, a lack of clarity relating to the cell types that might harbour the 'reservoir', as well as the tissues in which these cells might be located. For clarity, we will use the term 'reservoir' to describe the population of HIV-1-infected cells that persist during ART and which are the source of rebound viraemia on stopping therapy. Current understanding is that the majority of cells comprising this reservoir are CD4 T memory cells of a resting phenotype Liszewski et al., 2009;Eriksson et al., 2013).
Many assays have been developed to quantify the HIV-1 reservoir, ranging from simple quantitative PCR (qPCR) estimation of cell associated HIV-1 DNA to labour-intensive viral outgrowth assays (VOA) (Siliciano and Siliciano, 2005;Avettand-Fenoel et al., 2009;Liszewski et al., 2009;Eriksson et al., 2013). Whereas measurement of plasma viraemia ('viral load') and CD4 T cell count are documented surrogate markers of HIV clinical progression, the clinical relevance and utility of measuring the reservoir-regardless of assay-remains less clear. As cell-associated HIV-1 DNA precedes plasma viraemia in the viral life cycle, it is tantalizing to speculate whether measuring HIV-1 DNA (as a surrogate for reservoir size) might have significant clinical relevance.
It is well documented that HIV-1 DNA persists in patients on antiretroviral therapy (ART) even when the plasma viral load is undetectable using the most sensitive assays (Palmer et al., 2008;Saez-Cirion et al., 2013). Much of this detectable HIV-1 DNA has been found to be mutated and replication-incompetent, calling into question its biological relevance.
eLife digest
HIV is a virus that can hide in, and hijack, the cells of the immune system and force them to make new copies of the virus. This eventually destroys the infected cells and weakens the ability of a person with HIV to fight off infections and disease. If diagnosed early and treated, most people with HIV now live long and healthy lives and do not develop AIDS-the last stage of HIV infection when previously harmless, opportunistic infections can become life-threatening. However, there are still numerous hurdles and challenges that must be overcome before a cure for HIV/AIDS can be developed.
Treatment with drugs called antiretrovirals can reduce the amount of the HIV virus circulating in an infected person's bloodstream to undetectable levels. However, when HIV infects a cell, the virus inserts a copy of its genetic material into the cell's DNA-and, for most patients, antiretroviral treatment does not tackle these 'hidden viruses'. As such, and in spite of their side-effects, antiretroviral drugs have to be taken for life in case the hidden viruses re-emerge.
As research into a cure for HIV/AIDS gathers momentum, patients who might be candidates for new experimental treatments will need to be identified. Although it is not recommended as part of standard clinical care, the only way to test if a patient's viral levels would remain suppressed without the drugs would be to temporarily stop the treatment under the close supervision of a physician. As such, a new method is needed to identify if there are patients who might benefit from stopping antiretroviral therapy, and more importantly, those who might not.
Williams, Hurst et al. have now tested whether measuring the levels of HIV DNA directly might help to predict if, and when, the virus might re-emerge (or rebound). In a group of HIV patients participating in a clinical trial, those with higher levels of HIV DNA at the point that the treatment was stopped were found to experience faster viral rebound than those with lower levels of HIV DNA. This method could therefore identify those patients who are at the greatest risk of HIV viral rebound, and are therefore unlikely to benefit if their treatment is interrupted.
Williams, Hurst et al. also found that measuring the levels of HIV DNA could help to predict how the disease would progress in treated and untreated patients. Furthermore, these predictions were more accurate than those based on measuring the amount of the virus circulating in a patient's body.
The next challenge is to identify other methods to distinguish patients who may remain 'virus-free' for a period without treatment from those who would not. With this achieved, it might be possible to identify the mechanisms that determine why the virus comes back and so develop new treatments to stop this happening. This would make developing a cure for HIV/AIDS a much more tangible prospect. DOI: 10.7554/eLife.03821.002
However, as a simple surrogate measure of the reservoir, HIV-1 DNA may still have a role to play. As new interventions to cure HIV-1 infection are developed and taken into clinical trials, a means to measure their efficacy is needed. Stopping ART to await the return of viraemia would be the 'gold standard' approach, but has been associated with risk in some studies (Strategies for Management of Antiretroviral Therapy (SMART) Study Group et al., 2006), although not all (SPARTAC Trial Investigators et al., 2013). Ideally, the clinician would have access to an algorithm of biomarker assays to help identify those patients who might (or, alternatively, should not) be candidates for a safe treatment interruption (TI). The best way to assess the patient successfully managed on ART is unclear but, with the viral load rendered undetectable, it is plausible that HIV-1 DNA might be an alternative biomarker for disease progression. For example, compared with individuals with uncontrolled viraemia, HIV-1 DNA levels are much lower in cohorts such as VISCONTI in which apparently persistent aviraemia has been reported following TI (Saez-Cirion et al., 2013), and in the case of the Mississippi baby extremely low DNA levels were associated with a prolonged period of virological remission. However, this contrasts with cases in which undetectable DNA on ART was associated with prompt rebound viraemia on stopping (Chun et al., 2010;Henrich et al., 2014). We therefore wished to gain a broader picture of the utility of measuring HIV-1 DNA levels by studying participants in a large, randomized trial of primary HIV-1 infection.
We measured both Total and Integrated HIV DNA levels in peripheral blood CD4 T cells in participants in the Short Pulse Antiretroviral Treatment at HIV-1 Seroconversion (SPARTAC) trial (SPARTAC Trial Investigators et al., 2013)-the largest randomized clinical trial of short-course ART in primary HIV-1 infection (PHI). Studying individuals recruited at PHI, randomized to no treatment or ART, and who subsequently underwent treatment interruption, allowed us to ask two questions. Was HIV-1 DNA independently predictive of clinical progression, and did HIV-1 DNA predict the time taken for viraemic rebound on stopping therapy, advocating its role in future treatment interruption protocols?
SPARTAC trial participant characteristics
154 participants across all the SPARTAC trial arms were studied based on infection with subtype B HIV-1 and sample availability. All 154 patients were sampled at the pre-therapy baseline at trial enrolment. The demographics of the 154 participants are shown in Table 1. Participants who were randomised to receive no therapy or 48 weeks of ART and for whom samples were available (n = 51 and n = 47, respectively; Supplementary file 1) were studied in separate analyses described below. Assays of both Total and Integrated HIV-1 DNA were conducted at pre-therapy 'baseline' (trial week 0) and then at weeks 12, 48, 52, 60 and 108, where samples permitted. As detailed in Supplementary file 2, not all patients were assayed at all time-points, dependent on the analyses being conducted and sample availability.
Pre-ART HIV-1 DNA associates with surrogate markers of disease progression
Traditionally, plasma viral load (VL) (Mellors et al., 1996) and CD4 cell count are the only validated surrogate markers of progression used in the HIV-1 clinic. We therefore measured these biomarkers as well as HIV-1 DNA in 154 SPARTAC participants at enrolment to the trial and prior to any ART being given. The median (interquartile range) values of Total and Integrated HIV-1 DNA in PHI (Figure 1-figure supplement 1) were 7707 (2477-18187) and 3830 (1563-6325) copies of HIV-1 DNA per million CD4 T cells, respectively. Total and Integrated HIV-1 DNA levels were closely associated (p < 0.0001; r^2 = 0.72; Pearson correlation) (Figure 1-figure supplement 2) in these pre-therapy samples. Total and Integrated HIV-1 DNA were significantly associated with plasma viral load (both p < 0.001; r^2 = 0.48 and 0.64, respectively; linear regression) (Figure 1A), and inversely with CD4 T cell count (both p < 0.001; r^2 = 0.20 and 0.27, respectively; linear regression) (Figure 1B). Interestingly, the estimated time since seroconversion at recruitment did not correlate with HIV-1 DNA (both Integrated and Total) (Figure 1-figure supplement 3).
HIV-1 DNA in untreated patients predicts disease progression
For this analysis, disease progression was defined according to the primary end-point of the SPARTAC trial, that is, a composite end-point of either a CD4 T cell count of 350 cells/µl or the commencement of long-term ART (for any clinical determined decision) (SPARTAC Trial Investigators et al., 2013). We carried out Kaplan-Meier survival analyses with patients randomised to receive no ART, and stratified according to median HIV-1 DNA level at time of recruitment (n = 51 for Total HIV-1 DNA, and n = 38 for Integrated [due to limited sample availability]) (patient demographics detailed in Supplementary file 1). There was a significant delay in clinical progression in those with lower Total and Integrated HIV-1 DNA at baseline (p = 0.0016 and 0.0022, respectively; log-rank test) ( Figure 2). The median time from randomization to primary endpoint stratified by low and high Total HIV-1 DNA levels was 187.0 (IQR 127.0-222.0) and 77.9 (IQR 35.0-172.8) weeks, respectively, and for low and high Integrated levels was 187.7 (IQR 132.7-214.9) and 52.0 (IQR 32.4-161.3) weeks, respectively.
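For orientation, a survival analysis of this kind can be expressed compactly in Python with the lifelines package. The original analyses were run in R, so the snippet below is only an illustrative analogue, and the data-frame column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_median_dna(df, dna_col="total_hiv_dna",
                     time_col="weeks_to_endpoint", event_col="reached_endpoint"):
    """Stratify participants by the median HIV-1 DNA level and compare
    time-to-progression curves with a log-rank test."""
    high = df[dna_col] > df[dna_col].median()
    kmf = KaplanMeierFitter()
    for label, grp in [("low HIV-1 DNA", df[~high]), ("high HIV-1 DNA", df[high])]:
        kmf.fit(grp[time_col], grp[event_col], label=label)
        kmf.plot_survival_function()
    result = logrank_test(df.loc[~high, time_col], df.loc[high, time_col],
                          df.loc[~high, event_col], df.loc[high, event_col])
    return result.p_value
```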
HIV-1 DNA decline on ART
One third of the participants recruited to SPARTAC were randomised according to the trial protocol to receive 48 weeks of ART before undertaking a treatment interruption (SPARTAC Trial Investigators et al., 2013). This allowed us not only to study the impact of ART on HIV-1 DNA levels in this cohort (which has been reported in different cohorts [Siliciano et al., 2003;Chun et al., 2007]), but also to characterise what happens on stopping therapy after treatment initiated during PHI.
Prior to starting ART, Total and Integrated HIV-1 DNA levels were significantly different (p < 0.0001; Student's t test) (Figure 3), most likely explained by the presence of unintegrated circular and linear DNA forms. As expected, HIV-1 DNA levels after 48 weeks of ART were significantly lower than those measured at baseline (p < 0.0001 for all comparisons; Student's t test), by 0.63 log copies/million CD4 cells for Total and 0.59 log copies/million CD4 cells for Integrated (Figure 3). After 48 weeks of ART, Total DNA levels remained significantly greater than Integrated levels in patients despite undetectable viraemia (p = 0.027; paired t test) (Figure 3). This is consistent with other reports of residual unintegrated HIV-1 DNA up to a year after ART initiation (Agosto et al., 2011).
Having ascertained that in untreated individuals HIV-1 DNA was a predictor of progression, we now asked whether the lower HIV-1 DNA levels following ART would predict progression if therapy was stopped. This has potentially greater utility, as the majority of individuals on successful ART will have undetectable plasma viraemia using standard assays.
HIV-1 DNA at the point of stopping ART predicts clinical progression
We measured DNA levels in participants who received a median of 48 (IQR 47.7-48.7) weeks of ART with successfully suppressed viraemia (VL < 50 copies/ml plasma), immediately prior to treatment interruption. The demographics of the subset of individuals (n = 47) studied in this analysis are detailed in Supplementary file 1. Kaplan-Meier survival analyses were undertaken in which participants were again divided into two groups (low and high) based on median HIV-1 DNA levels at TI. Both low Total and Integrated HIV-1 DNA levels associated with a longer time to trial endpoint (p = 0.039 and 0.031, respectively; log-rank test) (Figure 4). The median time from TI to primary endpoint stratified by low and high Total HIV-1 DNA levels was 159.2 (IQR 111.9-200.6) and 117.8 (IQR 67.8-173.8) weeks, respectively, and by low and high Integrated levels was 166 (IQR 124.9-200.6) and 101.1 (IQR 65.5-156.8) weeks, respectively. In univariable Cox regression analyses, Total and Integrated HIV-1 DNA both predicted clinical progression from TI, determined by time to reaching the trial primary endpoint (Total HR 3.52 [1.32-9.37], p = 0.012; Integrated HR 3.01 [1.13-7.95], p = 0.027). Multivariable Cox regression models were constructed with HIV-1 DNA and CD4 cell count at TI. Viral load was not included as it was undetectable at TI. Both Integrated (HR 2.81, CI 1.05-7.55, p = 0.04) and Total (HR 3.42, CI 1.29-9.05, p = 0.013) HIV-1 DNA retained significance, and in both cases CD4 T cell count at TI was not a significant predictor (HR 1.04, CI 0.83-1.11, p = 0.58 and HR 0.94, CI 0.825-1.08, p = 0.4). At TI, HIV-1 DNA was the only predictor of the primary endpoint.
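A multivariable Cox model of this form can likewise be sketched with lifelines. Again, this is an illustrative Python analogue of an analysis originally performed in R, and the column names are hypothetical.

```python
from lifelines import CoxPHFitter

def cox_at_treatment_interruption(df, time_col="weeks_from_TI", event_col="reached_endpoint"):
    """Multivariable Cox model with log10 HIV-1 DNA and CD4 count at TI as covariates;
    hazard ratios appear as exp(coef) in the printed summary."""
    covariates = [time_col, event_col, "log10_total_dna_at_TI", "cd4_at_TI"]
    cph = CoxPHFitter()
    cph.fit(df[covariates], duration_col=time_col, event_col=event_col)
    cph.print_summary()
    return cph
```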
HIV-1 DNA increases on stopping ART
One of the concerns around viral rebound following a TI is the risk of 're-seeding' the reservoir in individuals who might have extremely low HIV-1 DNA levels, and who might be candidates for 'post-treatment control' of viraemia (Hocqueloux et al., 2010). We therefore measured HIV-1 DNA in those participants who had received 48 weeks of ART at the point of TI and then again 4, 12 and 60 weeks post TI, where samples were available. Total and Integrated HIV-1 DNA levels were not significantly greater than at the time of ART cessation for up to 12 weeks post TI, although they had significantly increased 60 weeks after TI (p < 0.0001 for Total and Integrated DNA; Student's t test), returning approximately to the week 0 pre-therapy levels (Figure 3). The increase in Total and Integrated HIV-1 DNA 4 weeks after TI was not significant (p = 0.30), in contrast to the rebound in plasma viraemia (p < 0.001), which may be re-assuring for those implementing a TI strategy in which ART would be re-introduced when plasma VL became detectable.
Of note, in an analysis of those individuals who subsequently restarted ART after the TI-and for whom we had samples (n = 15)-there was no significant difference between the HIV-1 reservoir size pre-TI and at least 6 months after re-starting ART (p = 0.58; paired Student's t test; Figure 3-figure supplement 1), suggesting that any increase in HIV-1 DNA on stopping ART may be reversible if therapy is re-commenced. However, larger studies will be needed to confirm these data.
HIV-1 DNA at ART cessation predicts time to plasma viral load rebound
Although almost all participants in SPARTAC experienced VL rebound on stopping ART, we have previously shown that of those who received >12 weeks of therapy, 14% still had undetectable viraemia 12 months later (Stohr et al., 2013). We therefore wished to establish-albeit in this different, although overlapping, sub-group of SPARTAC participants-whether HIV-1 DNA predicted the return of plasma viraemia post-TI. As our previous findings included participants in centres using both 50 and 400 copies/ml as the lower limit of detection for plasma viral load assays, we studied both cut-offs for the HIV-1 DNA analyses. No patients were censored before viral rebound was detected and all were aviraemic (<50 copies/ml plasma) at the point of stopping ART. Levels of Total (but not Integrated) HIV-1 DNA at TI predicted time to viral rebound to 400 copies/ml by univariable Cox regression analysis (HR 2.43 (1.23-4.79) p = 0.010). CD4 T cell count at TI was not predictive (HR 0.92 (0.78-1.08) p = 0.32) (Supplementary file 3). In a multivariable Cox regression model including Total HIV-1 DNA and CD4 count, both sampled at the point of TI, only Total HIV-1 DNA significantly predicted time to viral rebound to 400 copies (HR 2.68 [1.31-5.48] p = 0.0069) (Supplementary file 3). When using values from pre-therapy baseline rather than at the time of TI in the model, neither plasma viral load nor CD4 T cell count predicted time to viral rebound (>400 copies/ml) from TI (HR 1.38 [0.96-1.99] p = 0.080) and (HR 1.03 [0.92-1.12] p = 0.60), respectively. Kaplan-Meier survival analyses showed similar results, with a low Total HIV-1 DNA (based on stratification around the median level) associated with a slower time to a viral rebound of 400 copies/ml (p = 0.0038; log-rank test) but not to 50 copies per ml (p = 0.18) (Figure 5).
Table 2. Univariable and multivariable Cox regression models were used to determine predictors of clinical progression in untreated individuals followed up from Primary HIV-1 Infection. Progression was determined according to reaching the SPARTAC trial primary endpoint (Chun et al., 2010). Co-variables analysed were baseline (i.e. first pre-therapy trial sample) Total HIV-1 DNA, baseline plasma viral load and baseline CD4+ T cell count. DOI: 10.7554/eLife.03821.009
It was unclear why Total HIV-1 DNA should predict rebound to 400 copies but not to 50. In an attempt to explain this we studied those individuals with data available at both cut-offs. In this small post-hoc analysis (n = 45), we found that rebound varied according to the HIV-1 DNA level at the time of TI. Patients with high Total HIV-1 DNA levels were more likely to have a first detectable VL greater than 400 copies/ml, whereas those with lower HIV-1 DNA levels were more likely initially to rebound below 400 but above 50 copies/ml (p = 0.0074; Fisher's exact test) (Supplementary file 4). In summary, we find evidence that HIV-1 DNA is a significant predictor of the duration of viral remission and magnitude of the initial rebound following TI. This, if confirmed in larger studies, would have implications for those designing protocols for ART-reintroduction following viral rebound in TI studies.
Discussion
Since first described nearly two decades ago a persistent reservoir of HIV-1-infected cells remains the main reason that HIV-1 infection cannot be cured (Chun et al., 1997;Finzi et al., 1997;Wong et al., 1997). The simplest measure of the reservoir is a qPCR assay that detects all intracellular HIV-1 DNA regardless of whether it is integrated into host chromosomes or is in unintegrated linear or circular forms. A modification of this assay incorporates an initial step to prime host Alu repeats in order to quantify only viral DNA that has been integrated into host DNA. These assays are open to criticism as the vast majority of intracellular HIV-1 DNA is thought to be replication incompetent, and qPCR is not able to discriminate between replication competent and incompetent viral DNA genomes. This has led to the development of alternative approaches such as viral outgrowth assays (which are considered the gold standard, but are expensive and time-consuming, even with recent improvements to their protocols [Laird et al., 2013]) and assays to measure intracellular HIV-1 RNA, which may more accurately reflect an infected cell's ability to produce new virions, especially under conditions where viral transcription is stimulated (Bullen et al., 2014).
Despite the debate over the biological relevance of measuring HIV-1 DNA-and bearing in mind that none of these assays have been standardised for clinical use-a number of reports have attributed clinical meaning to HIV-1 DNA assays. Over a decade ago, Tierney and colleagues suggested that proviral DNA in PBMCs from 111 participants receiving limited nucleoside analogue therapy was an independent predictor of clinical progression, although it is unclear how suppressive the ART regimes were in this study (Tierney et al., 2003). Havlir et al. studied 100 individuals with chronic HIV-1 infection and viral suppression on ART and showed that HIV-1 DNA independently predicted residual viraemia on ART (Havlir et al., 2005). However, there has not been a comprehensive analysis of both Total and Integrated HIV-1 DNA in individuals randomised to treatment or no treatment soon after seroconversion. We applied both Total and Integrated DNA measures to a unique cohort of individuals with evidence of PHI randomised to immediate interrupted ART or no therapy with longitudinal follow-up for a median of 4.5 years. As participants were randomised to different short course ART therapies prior to TI, we were able to determine how well HIV-1 DNA correlated with accepted surrogate markers of progression such as VL and CD4 count, and also whether HIV-1 DNA was an independent predictor of disease progression within the SPARTAC trial in both treated and untreated participants.
Our first finding that HIV-1 DNA associated closely with both plasma VL and CD4 cell counts (Figure 1) was not surprising as this is reported elsewhere (Chun et al., 2010;Parisi et al., 2012). Our findings that both baseline and pre-TI HIV-1 DNA strongly predicted the trial primary endpoint (Figure 4 and Table 2) are supported by data from other smaller, discrete observational studies, in which low HIV-1 DNA levels associated with a longer time to clinical progression (Goujard et al., 2006;Minga et al., 2008;Piketty et al., 2010), a lower viral set point and reduced chance of virological failure on ART re-initiation (Yerly et al., 2004) at PHI. We are aware of one other report associating HIV-1 DNA with time to viral rebound on stopping ART (Yerly et al., 2004). In this study Yerly and colleagues studied chronically-infected individuals with sequential treatment interruptions and reported that DNA was a predictor of the peak of viraemia following therapy cessation and failure to reach undetectable viraemia on re-starting ART-they do not report on the actual duration of viral suppression after TI. In a smaller study at PHI, Lafeuillade et al also associated HIV-1 DNA with time to rebound, however this study is complicated by other interventions such as IL-2 and hydroxyurea in addition to ART (Lafeuillade et al., 2003). We measured plasma viral load in the pre-therapy 'baseline' sample closest to the estimated time of infection. One possible criticism-and explanation for why plasma VL was less predictive in this study-is that other studies have associated progression with the 'set-point' viral load, the value at which the VL stabilizes following the dynamic PHI stage. However, in our untreated participants we found that the 'baseline' and 'set-point' VL values were highly correlated, although the former was higher, as would be expected (data not shown). From a clinical perspective, it is worth noting that if individuals with PHI are commenced on ART immediately, then their 'set-point' VL will not be known, potentially placing greater impact on the less dynamic HIV-1 DNA measure.
After TI, we observed a period of at least 12 weeks where no significant increases in the HIV-1 reservoir level were detected by both assays (Figure 3). However, we found little evidence of longer term post-treatment control (Persaud et al., 2013;Saez-Cirion et al., 2013), as levels of HIV-1 DNA 1-year after therapy interruption were not significantly different to that seen at pre-therapy baseline. Nevertheless, the potential for there to be a short window period during which plasma viraemia has rebounded but HIV-1 DNA levels have not risen significantly is encouraging, if future closely-monitored TI studies are to be undertaken. Concerns around 're-seeding' the reservoir are very real, and it is important that any possible harm associated with a TI is limited. It is therefore also re-assuring that in our admittedly small sub-study, re-initiation of ART subsequently restored HIV-1 DNA to pre-TI levels.
Finally, a low 'Total' HIV-1 reservoir at TI resulted in a longer time to a viral rebound to 400 copies/ml ( Figure 5). In univariable and multivariable Cox regression models Total HIV-1 DNA at TI predicted time to rebound to 400 copies/ml, whereas CD4 T cell count did not (Supplementary file 3). Of interest, the baseline VL and CD4 prior to therapy were also not predictive of time to rebound. In contrast to other studies exploring TI, we have a larger and randomly allocated patient group who have received similar durations of ART at PHI and hence can be directly compared. Although viral rebound was observed in all individuals after TI ultimately, this is the first report of a randomised cohort that has shown that time to viral rebound and primary study end point could be predicted by HIV-1 DNA measurement at TI.
Our findings of an association with HIV-1 DNA and time to viral rebound raise a number of other questions. Why was Total DNA predictive of rebound but not Integrated? Why was Total DNA predictive for rebound to 400 copies/ml but not 50 copies/ml? Much larger studies will be needed to answer most of these questions, however our sub-analysis of rebounding patients suggested that a high Total DNA at TI was more indicative of a higher VL rebound (i.e. >400 copies), whereas a low DNA level was not associated with a lower rebound. These data might indicate that a Total DNA level at TI is better at predicting the patients who will be quick to rebound rather than those who will maintain suppression. A question for larger studies to answer will be to define what the viral load cut-off should be for considering rebound, rather than just assuming the assay with the lowest limit of detection is best. Data from at least one other study (Riabaudo et al., 2009) indicate that a level greater than 50 copies/ml may be more relevant. The difference between the Integrated and Total DNA is also interesting. Integrated DNA should be the most biologically relevant marker, based on the assumption that unintegrated HIV-1 DNA forms are thought not to contribute to rebound viraemia. However, the assay for Total HIV-1 DNA is much simpler and with tighter coefficients of variation, possibly due to the lack of a pre-amplification PCR stage. Another important factor impacting our data is that the median estimated time from seroconversion was 73.8 days, and so most of our patients would be starting therapy at Fiebig stage IV or later. It is possible that earlier identification of PHI and initiation of ART would have a greater impact on the reservoir and post-treatment control, and it is important that large studies are undertaken to determine this.
In light of observational cohorts such as VISCONTI (Saez-Cirion et al., 2013) where treatment cessation revealed individuals who remain aviraemic post TI, there is increasing interest in undertaking closely monitored treatment interruption studies in which ART would be re-started based on a detectable plasma VL. These do not, however, have an encouraging history with previous studies set in the context of therapeutic vaccination or CD4 T cell restoration, resulting in rapid viraemic rebound and even harm (Strategies for Management of Antiretroviral Therapy (SMART) Study Group et al., 2006;Angel et al., 2011;Garcia et al., 2013). Additionally, the recent report of viral rebound in the case of the Mississippi baby, means that greater understanding of mechanisms behind post-treatment control is needed. The potential, therefore, to develop an algorithm to combine various biomarkers to help predict individuals suitable for such studies is appealing. These data are evidence that such an algorithm may be possible, and that a marker as simple as HIV-1 DNA could be an important component.
Participants and trial design
The design of the SPARTAC trial is reported elsewhere (SPARTAC Trial Investigators et al., 2013). In brief, SPARTAC was an international open Randomised Controlled Trial enrolling adults with PHI within 6 months of a last negative, equivocal or incident HIV-1 test. All participants gave written informed consent. Research ethics committees in each country approved the trial. Time of seroconversion was estimated as the midpoint of last negative/equivocal and first positive tests, or date of incident test. Participants were randomised to receive ART for 48 weeks (ART-48), 12 weeks (ART-12) or no therapy (standard of care, SOC). The primary endpoint was a composite of two events: participants either reached a CD4 count of <350 cells/mm^3 (>3 months after randomization and confirmed within 4 weeks) or initiated long-term ART. This provided an immunological surrogate of clinical progression, but also allowed inclusion of those participants who commenced ART at CD4 cell counts greater than 350 cells/mm^3. Time to virological failure of participants randomized to ART-48 (two analyses using both 50 and 400 HIV-1 RNA copies/ml as the cut-off [two consecutive readings]) was a secondary endpoint.
Participants for this sub-study of SPARTAC were those infected with subtype B HIV-1 and for whom adequate samples were available. For those in the analysis of progression and viral rebound at TI, we only selected participants who had viral load suppression (<50 copies/ml; Chiron bDNA) at point of stopping ART (Table 1 and Supplementary file 1). CD4 T cells isolated from peripheral blood mononuclear cells (PBMC) were sampled for HIV-1 DNA in all participants at baseline, regardless of trial arm. Participants randomised to the ART-48 arm were sampled at week 48 at the point of stopping ART and at a further 4, 12 and 60 weeks post ART interruption (52, 60 and 108 weeks post-ART initiation). Participants who were viraemic using the Chiron bDNA, (Bayer, Leverkusen, Germany) (LLD 50 copies/ml) at the point of TI were excluded.
Measurement of HIV-1 DNA
CD4 T cells were enriched from frozen PBMC samples by negative selection (Dynabeads, Invitrogen, Carlsbad, CA) to a purity of >97%. CD4 T cell DNA was extracted (Qiagen, Venlo, Netherlands) and used as input DNA for PCR. Cell copy number and total HIV-1 DNA levels were quantified both in triplicate using previously published assays (Duncan et al., 2013;Jones et al., 2014).
Integrated HIV-1 was measured using an assay based on that previously published (Liszewski et al., 2009) but with some minor modifications. 40 repeated integration measurements per patient sample were performed along with five PCR reactions to which no Alu primer was added, to serve as a background control for determination of sample positivity. The first round master mix contained 1.5 U platinum taq per 50 µl reaction. The second round qPCR reaction was the same as the Total reaction described above, but with 10 µl of first round product being the input DNA.
To quantify patient samples, one standard curve was generated by plotting the average cycle threshold (Ct) values for all integration signals at each Integration Standard (IS) dilution (70-0.2 copies of IS standard per well, diluted in 2 µg/ml PBMC DNA), so long as at least one integration signal was significantly different (two standard deviations) from the average gag-only background signal. The IS was a kind gift from Una O'Doherty. Ln(Copy number) was plotted vs Ln(average Ct), and each point on the standard curve was repeated in duplicate. The standard curve fitted extremely well to a line of best fit (r^2 = 0.987), which was then used to calculate copy numbers in patient samples. Each patient sample replicate was quantified individually using the standard curve to generate error. Plate-to-plate variation was assessed using quadruplicate replicates of 8E5 cells, which have one copy of HIV-1 per cell, diluted to 100 copies per well as a first round PCR DNA input. The average coefficient of variation was 8.31%.
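The ln-ln standard-curve quantification described above can be illustrated with a short snippet; this is a sketch of the general approach only, with hypothetical variable names, not the laboratory's actual analysis script.

```python
import numpy as np

def fit_standard_curve(copies_per_well, mean_ct):
    """Linear fit of ln(copy number) against ln(mean Ct) for the integration
    standard dilution series; returns slope and intercept."""
    slope, intercept = np.polyfit(np.log(mean_ct), np.log(copies_per_well), 1)
    return slope, intercept

def quantify_replicates(ct_values, slope, intercept):
    """Copy number for each patient-sample replicate Ct using the fitted curve,
    so that a per-replicate error estimate can be derived."""
    ct = np.asarray(ct_values, dtype=float)
    return np.exp(intercept + slope * np.log(ct))
```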
Statistical analysis
HIV-1 DNA (Total and Integrated) was normally distributed following log10 transformation. The association between Total and Integrated HIV-1 DNA levels was tested using Pearson correlations. Linear regression was used to examine the association between continuous clinical baseline covariates and HIV-1 DNA. Associations between grouped variables and DNA levels were tested with Mann-Whitney, Kruskal-Wallis and t tests where appropriate.
For the association between baseline DNA and the SPARTAC primary endpoint, Kaplan-Meier plots and univariable Cox models were constructed, and subsequently adjusted for baseline covariates. Where participants received ART, the time to primary endpoint was calculated from the time of TI.
Association with time to rebound was also assessed using Kaplan-Meier plots and Cox models. All statistics were calculated using R version 3.1.0. Plots were drawn using Prism version 5.0.
• Supplementary file 1. Demographics of participants randomised to receive either no therapy from PHI (first column) and those randomised to receive 48 weeks of ART from PHI (second column). Data as indicated were: † determined at pre-therapy baseline (trial week 0), * determined at week 48, prior to TI or + median (interquartile range). SOC: Standard of Care trial arm. DOI: 10.7554/eLife.03821.014
• Supplementary file 2. Sample numbers available at each time-point by trial randomization. Numbers of samples available at each time-point are presented. Participants from all three trial arms were included at week 0 as they were all treatment naïve at this point. Not all patients at any one time-point are always represented at other time-points due to variation in sample availability. Trial arms: SOC: Standard of Care (untreated); ART-48: 48 weeks of ART after randomization; ART-12: 12 weeks of ART after randomization. DOI: 10.7554/eLife.03821.015
• Supplementary file 3. Cox regression models for variables associated with time to rebound of 400 copies/ml and sampled at wk48. Table to show results of Cox regression analysis for time to virological rebound of 400 copies/ml of plasma with Total DNA and CD4 T cell count as covariables. Univariable and multivariable data are presented with Hazard Ratios (HR) with 95% Confidence Intervals (CI) and associated P values.
• Supplementary file 4. 2 × 2 table comparing the number of patients with different times to 50 and 400 copy/ml rebound and their total and integrated pre-TI HIV-1 DNA levels. Table to compare the association between HIV-1 DNA levels at TI (both Total and Integrated) and time to a plasma viral load of either between 50-400 copies/ml or greater than 400 copies/ml. HIV-1 DNA levels were split into 'high' and 'low' by the median value. The proportions are significantly different by Fisher's exact test for Total (p = 0.0074) but not Integrated HIV-1-DNA levels (p = 0.091). DOI: 10.7554/eLife.03821.017 | 2017-06-07T04:28:43.336Z | 2014-09-12T00:00:00.000 | {
"year": 2014,
"sha1": "3a339a176e329c74a0e790909d7eef4887c4a96e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.03821",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f2296b6fe0c0af301807f4f002d22d2deb30db8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
202573978 | pes2o/s2orc | v3-fos-license | A Novel Mutation in ACAT1 Causing Beta-Ketothiolase Deficiency in a 4-Year-Old Sri Lankan Boy with Metabolic Ketoacidosis
Beta-ketothiolase (mitochondrial acetoacetyl-CoA thiolase, T2) deficiency is a rare genetic disorder of ketone utilization and isoleucine catabolism caused by mutations in the ACAT1 gene. Here we report the first Sri Lankan case of T2 deficiency confirmed by genetic analysis. A 4-year-old boy presented with the first episode of severe metabolic ketoacidosis after a febrile illness. On admission, the child was drowsy and had circulatory collapse needing intubation. Initial investigations were not detective of a cause and symptomatic management did not improve the condition. During the acute episode, his urine organic acid profile revealed elevations in 3-OH-2-methyl-butyric acid and tiglylglycine whilst 2-methylacetoacetic acid was not detected. The differential diagnoses for the urine organic acid profile included deficiency in T2 or 2-methyl-3-OH-butyryl-CoA dehydrogenase enzymes. Genetic analysis using polymerase chain reaction and DNA sequencing of ACAT1 gene revealed that the proband is homozygous for the novel missense likely pathogenic variant c.152C > T p.(Pro51Leu) confirming the diagnosis of T2 deficiency. This case highlights the importance of suspecting T2 deficiency in the differential diagnosis of pediatric metabolic ketoacidosis in preventing life threatening consequences of an otherwise benign disorder.
Introduction
Beta-ketothiolase (mitochondrial acetoacetyl-CoA thiolase, T2) is a key enzyme needed for ketone metabolism and isoleucine catabolism [1]. T2 deficiency is a rare autosomal recessive disorder with an incidence of less than one per 1,000,000 newborns [1]. Owing to diagnostic challenges and lack of awareness, many cases have been missed during their initial presentation [2]. It typically manifests between 6-18 months of age as acute and recurrent ketoacidotic episodes triggered by ketogenic stress [1,2]. Patients are typically asymptomatic between episodes, and the episode frequency decreases with age [2]. The characteristic laboratory finding is the elevation of the urine organic acids tiglylglycine (TIG), 2-methylacetoacetic acid (2MAA) and 3-OH-2-methyl-butyric acid (2M3HB). More than 70 different mutations of the mitochondrial ACAT1 gene have been identified to date as causative for T2 deficiency [3]. Only one case of T2 deficiency has been reported from Sri Lanka, detected by gas chromatography/mass spectrometry (GC/MS), but the diagnosis was not confirmed by genetic studies or enzyme analysis [4]. In this study, we report on the first Sri Lankan case of T2 deficiency confirmed by molecular analysis and characterize a novel mutation in the ACAT1 gene.
A 4-year-old boy, the second child of healthy, consanguineous parents, presented with a four-day history of vomiting, loose stools and low-grade fever. He had been previously well, with uneventful birth and neonatal periods and normal development. The fever settled with medicine prescribed by a general practitioner, but the child's level of consciousness deteriorated. On admission to the pediatric intensive care unit in a tertiary care hospital, he was afebrile, drowsy, unresponsive to painful stimuli and hypotonic, and had decreased reflexes and sluggish, but equally reactive, pupils. Acidotic breathing and circulatory collapse were noted. Clinical examination was negative for skin rashes and neck stiffness.
Initial investigations revealed severe high anion gap metabolic acidosis (pH 7.28, pCO2 9.4 mmHg, low HCO3−). Radiographic imaging of the chest and abdomen was unremarkable. Non-contrast computed tomography of the brain showed multiple cerebral infarctions. The child required intubation and was symptomatically managed with intravenous fluids, repeated doses of intravenous bicarbonate therapy, inotropes and broad-spectrum antibiotics because of concerns of sepsis. As the child's condition did not improve with the initial management, a urine sample collected in the acute stage was sent to our laboratory at Lady Ridgeway Hospital for Children for organic acid analysis. GC/MS analysis revealed very high levels of 2M3HB and TIG (see Fig. 1), which was suggestive of T2 deficiency. However, in the absence of an increase in 2MAA, a deficiency of 2-methyl-3-OH-butyryl-CoA dehydrogenase (MHBD) caused by mutations in the HSD17B10 gene was also a possibility.
To confirm the diagnosis, the ACAT1 and HSD17B10 genes were analyzed by polymerase chain reaction and by sequencing of both DNA strands of the entire coding region and the highly conserved exon-intron splice junctions. The test was performed on dried blood spots on filter paper at Centogene AG, Germany. The child was homozygous for the novel variant c.152C > T p.(Pro51Leu) of the ACAT1 gene (NM_000019.3), which confirmed the diagnosis of T2 deficiency.
The child remained ventilator-dependent for nearly four months. Total parenteral nutrition was later converted to feeding via a jejunostomy with mild protein restriction. Though the biochemical parameters normalized with treatment, the child entered a persistent vegetative state and died after another four months.
Discussion
T2 deficiency is a rare genetic disorder that results from biallelic pathogenic variants of the ACAT1 gene located on chromosome 11q22.3 [2,3]. The T2 enzyme cleaves 2-methylacetoacetyl-CoA in isoleucine metabolism and is also responsible for the last step of ketogenesis in the liver and of ketolysis in extra-hepatic tissues [1] (see Fig. 2). Therefore, T2 deficiency leads to ketosis and the accumulation of upstream metabolites.
The typical presentation is in early childhood, with vomiting, hyperpnoea, drowsiness, lethargy and coma triggered by a ketogenic stress such as fasting, infection or physical exertion [3]. Some patients have atypical presentations such as metabolic stroke, metabolic encephalopathy and delayed onset, as in our case [1]. Though rare, neonates can present with vomiting, poor suckling and lethargy [5]. T2 deficiency may mimic central nervous system infection; diabetic ketoacidosis, if associated with stress hyperglycemia; or even salicylate poisoning [3]. Patients are reported to be asymptomatic between episodes [6].
Patients tend to have severe ketoacidosis out of proportion to the associated illness (pH < 7.3 or HCO3− < 15 mmol/L, blood total ketone bodies > 7 mmol/L) with normal or slightly elevated plasma ammonia [5]. When associated with high plasma ammonia, organic acidemias should be suspected. Normoglycemia is usual, but hyperglycemia and hypoglycemia have been reported [5].
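Expressed as a simple check, the quoted thresholds can be written as follows; this is a purely illustrative Python sketch, and the variable names and the example HCO3− and ketone values are hypothetical rather than measurements from this case.

```python
# Illustrative check of the severe-ketoacidosis thresholds quoted from [5].
def severe_ketoacidosis(ph: float, hco3_mmol_l: float, ketones_mmol_l: float) -> bool:
    acidosis = ph < 7.3 or hco3_mmol_l < 15      # pH < 7.3 or HCO3- < 15 mmol/L
    marked_ketosis = ketones_mmol_l > 7          # blood total ketone bodies > 7 mmol/L
    return acidosis and marked_ketosis

# Hypothetical values; only the admission pH of 7.28 comes from the present case.
print(severe_ketoacidosis(ph=7.28, hco3_mmol_l=10.0, ketones_mmol_l=8.0))  # True
```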
Urine organic acid analysis in the acute stage is necessary to exclude other organic acidemias such as methylmalonic, propionic and isovaleric acidemia [3]. The characteristic organic acid profile in T2 deficiency is an elevation in 2M3HB, TIG and 2MAA, both during the acute episode and between episodes [6]. As 2MAA is volatile, it may not be detected in some laboratories [4,7]. Batch analysis of the stored urine sample and instability during sample transport might explain the absence of 2MAA in our case.
Succinyl-CoA:3-oxoacid CoA transferase (SCOT) deficiency can mimic the attacks of T2 deficiency, but owing to its neonatal onset, permanent ketosis and non-specific urinary organic acid profile, SCOT deficiency was effectively excluded [1]. MHBD deficiency is an X-linked disorder with a clinical picture similar to that of T2 deficiency; the urine contains increased levels of 2M3HB and TIG with no increase in 2MAA [1]. Therefore, MHBD deficiency remained a differential diagnosis in this case.
Definitive diagnosis of T2 deficiency is by enzyme assay and genetic studies. Enzyme analysis for beta-ketothiolase is recommended on skin fibroblasts rather than on blood mononuclear cells [5,8]. An abnormal potassium-dependent acetoacetyl-CoA thiolase assay will exclude MHBD deficiency and confirm T2 deficiency [1].
The proband was homozygous for the c.152C > T p.(Pro51Leu) variant of the ACAT1 gene, classified as a missense likely pathogenic (class 2) variant according to the American College of Medical Genetics and Genomics guidelines. This variant has not been previously reported in the Exome Aggregation Consortium population database. ACAT1 mutations are highly diverse [3]. The only common ACAT1 mutation identified to date is p.Arg208* among Vietnamese patients [3].
There is no obvious concordance between disease severity and genotype [7,9]. A high urine TIG level is considered the most promising predictor of a severe metabolic phenotype and a severe block at the T2 enzyme level [7]. A very high level of urine TIG was indeed noted in our case. However, low levels of urine TIG do not exclude T2 deficiency, as mutations with retained residual activity (e.g. the H144P mutation) can give rise to atypical urine organic acid profiles [6,7]. This child's clinical outcome was poor. However, T2 deficiency tends to be a benign condition in most patients, provided that it is diagnosed early and managed aggressively to prevent complications [3,9].
The management of an acute episode includes hydration with normal saline and dextrose, an intravenous sodium bicarbonate bolus followed by infusion if plasma pH < 7.1, correction of hypoglycemia, and peritoneal dialysis in severe acidosis [5]. Further ketoacidotic events can be prevented by avoiding fasting, taking carbohydrate-rich meals during infections, glucose infusion in states of poor feeding, restricting excess fat intake, mild protein restriction, and L-carnitine supplementation in individuals with low carnitine levels [5]. Screening of family members is necessary to identify asymptomatic individuals, and genetic counseling should be provided [1]. Newborn screening by tandem mass spectrometry would be a good option for identifying T2-deficient cases early, but it is currently not available in Sri Lanka [1].
Conclusion
T2 deficiency should be suspected in children presenting in early childhood with severe metabolic ketoacidosis preceded by acute infection or fasting. T2-deficient patients can have more favorable outcomes with timely diagnosis and judicious management. | 2019-09-16T14:20:19.182Z | 2019-09-16T00:00:00.000 | {
"year": 2019,
"sha1": "290cc974a8d51baa7196b7c4686f146c5a7d408c",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12291-019-00851-y.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "290cc974a8d51baa7196b7c4686f146c5a7d408c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55241018 | pes2o/s2orc | v3-fos-license | The Effect of Corporate Social Responsibility Disclosures on Share Prices in Japan and the UK
This paper investigates whether corporate social responsibility disclosure (CSRD) is associated with firms’ market values in order to assess whether CSRD provides incremental value relevant information to investors. A modified Ohlson (1995) model is used, which is a widely accepted equity valuation model in accounting research. The findings suggest that investors in the UK consider CSRD information in the total information set they use for their investment decision-making, whereas Japanese investors do not appear to find that CSRD provides incremental information over and above financial information to assist in their valuations of firms. These findings have implications for investors and regulators, specifically around the control and governance of firms.
INTRODUCTION
Increasingly, stakeholders call for corporations to take responsibility for the impact of their activities on the environment and society by disclosing information on how these impacts are being managed (De Villiers & Van Staden, 2010). Such corporate social responsibility disclosures (CSRD) have increased along with concerns for the environment and society. CSRD provides mainly non-financial information about environmental, social, and governance aspects of an organisation. CSRD has been provided in stand-alone reports, alongside the traditional financial information in annual reports, and more recently in integrated reports (Atkins & Maroun, 2015; Stent & Dowler, 2015). However, unlike financial reporting, CSRD tends to be a voluntary reporting practice (Kolk, 2008). As firms have the choice to provide CSRD, logical economic thinking says that they will only do so if they derive some benefit from it. By providing additional disclosures via CSRD, firms can reduce the information asymmetries between the company and its external shareholders (Myers & Majluf, 1984). This benefits firms because it can lead to a reduced risk of adverse selection by investors and higher market valuations of firms' shares (Healy & Palepu, 2001). If investors consider CSRD with the financial information they use in their investment decision-making process, then the two types of information together should better explain market valuations. Therefore, the objective of this study is to investigate whether CSRD is associated with firms' market values in order to assess whether CSRD provides incremental value relevant information to investors. We use a modified Ohlson (1995) model, which is a widely accepted equity valuation model in accounting research. Hassel, Nilsson, and Nyquist (2005) found that environmental performance is value relevant but that investors reduce market values as they follow the cost-concerned school of thought. Moneva and Cuellar (2009) investigate the value relevance of environmental information, finding that financial environmental information is value relevant, but non-financial environmental information is not. Schadewitz and Niskala (2010) find that CSRD prepared using the GRI reporting framework has incremental value to investors in Finnish companies. Therefore, these three studies, which all use a modified Ohlson (1995) model, report mixed results, suggesting the need for further investigation. KPMG (2008) reports that the majority of the top 100 companies in the 22 countries examined in their survey use the GRI reporting framework when preparing CSRD. Japan and the United Kingdom (UK) are identified as the leading countries where firms have implemented CSRD. Ninety-three percent of the top 100 Japanese companies and ninety-one percent of the top 100 UK companies provided CSRD in 2008. Reporting on environmental, social, and governance aspects is becoming an established practice for the companies in these countries. Thus, the UK and Japan offer an interesting context to study the value relevance of CSRD. The level of CSRD has been relatively high in these countries for some time (KPMG, 2008). Agency theory arguments suggest that these companies must derive some benefit from CSRD to justify the continued high level of voluntary reporting. Non-financial CSRD information can lessen the information asymmetries that exist between these firms and their investors.
With more information, investors' uncertainty about the future economic benefits and risks of the company can be reduced (Healy & Palepu, 2001). Investors can use the information to make better estimates of the company's value and the price they are willing to pay for the company's shares. Thus, in this investigation it is expected that there will be an association between the level of CSRD and the market values of the top companies in both the UK and Japan, where CSRD is an established practice. Prior studies have also considered the effect that a company's industry has on its reporting incentives. Companies operating in environmentally sensitive industries face greater public policy concern and pressure. This induces more extensive disclosure practices in order to appease the public's concern about the environmental and social impacts of the organisation's activities (Cho & Patten, 2007; Cormier & Magnan, 2007). Therefore, the association between market values and CSRD by companies operating in environmentally sensitive industries is also tested. Two samples are used in this study. The first consists of 91 of the UK's largest companies. The second consists of 85 of Japan's largest companies. The top 100 largest companies from each country (from the KPMG (2008) survey) provided the base for the two samples; however, some companies were eliminated because their corresponding financial information could not be identified. The two samples were tested separately, with the results of the UK sample discussed first (see Tables 2 to 4), followed by the results of the Japan sample (see Tables 5 to 7). Two measures of CSRD are used. The first is a composite score measuring several aspects of CSRD and the second is an indicator of whether or not the GRI reporting framework was used in preparing CSRD. Both measures are taken from the KPMG database for CSRD (KPMG, 2008). We use the price specification Ohlson (1995) model to test if CSRD increases financial information's explanatory power of share prices and to test whether CSRD is significantly related to share prices. P-values and adjusted R² values are used to assess the significance of the variables' coefficients and the explanatory power of the models, respectively. Some quite surprising results are obtained. It seems that only investors in the UK consider CSRD information in the total information set used for their investment decision-making, whereas investors in Japanese firms do not appear to find that CSRD provides incremental value to their valuations of the firms.
Corporate Responsibility Reporting
There are increasing calls for companies to take accountability for their environmental and societal impacts (De Villiers, 1998). Cormier and Magnan (2007) provide mixed evidence that environmental information is decision useful to investors. They investigate the impact of voluntary environmental reporting on the relationship between a firm's earnings and its market valuation. The authors assess country-specific factors that may affect the impact of environmental reporting. Canada, France, and Germany are considered, because of their differing reporting and governance regimes. Canadian firms represent the North American context, whereas French and German firms represent differing continental European contexts. Canada is seen as having more extensive financial reporting disclosure regulations. Also, the common-law legal origin of Canada tends to indicate that the reporting environment is more shareholder-orientated. The European countries are viewed as having less comprehensive reporting requirements and a reporting environment that is more stakeholder-orientated. Thus, the authors expected firms' environmental reporting to affect market valuation more in Europe than in Canada. The results for German firms suggest that environmental disclosures have a moderating impact on the market valuation of firms' earnings. However, investors in French and Canadian firms do not use environmental reporting to value earnings. In comparing the results from Canadian firms with the European firms, it is found that environmental reporting has a greater impact on the market value of German firms than it does on Canadian firms. Yet there was no difference found between French firms and Canadian firms with regard to the impact of environmental reporting on market value (Cormier & Magnan, 2007). Banghoj and Plenborg (2008) studied the value relevance of voluntary disclosures made in annual reports of Danish firms. They argued that investors and analysts may find additional information that is voluntarily disclosed by management useful in valuing firms' future earnings. The reasoning behind their argument is driven by economic theory, which suggests that additional disclosures provide information about the amount, timing and uncertainty of future earnings. Consequently, investors and analysts should be able to make more accurate estimates of firms' future earnings, thus enhancing the association between market valuations and future earnings. However, the results do not support this notion. The authors do not find an association between current returns and future earnings. The authors speculate that investors may not be capable of incorporating voluntary information in their firm value estimates, rather than the disclosures lacking value relevance.
A common form of analysis is to test the relationship between CSRD and the level of market value of equity. The Ohlson (1995) Equity Valuation Model has been the prevalent model to test such a relationship. In testing the value relevance of environmental performance information to investors in Swedish firms, Hassel et al. (2005) employ the Ohlson (1995) model. The authors consider the relationship between environmental performance disclosures and firms' market values in terms of the cost-concerned school of thought and the value creation school of thought. Under the cost-concerned perspective, environmental disclosures are expected to cause the market value to decline. It is perceived that investments in environmental projects only represent increased costs, which decrease the firm's earnings. Alternatively, the value creation school of thought suggests that environmental investments are a way to enhance a firm's competitive advantage, and thus improve the prospects of future earnings, which in turn improves market value. The results show that the market value of the firm decreases in relation to environmental performance disclosures. Thus, the results indicate that environmental disclosures are value relevant and that investors follow the cost-concerned school of thought. Moneva and Cuellar (2009) examined the value relevance of financial and non-financial environmental disclosures made in the annual reports of a sample of listed Spanish companies. Both compulsory and voluntary environmental disclosures were analysed. In order to assess the value relevance of such disclosures, the authors performed a regression based on Ohlson's (1995) model. This allowed the authors to investigate the impacts of environmental activities on Income Statement accounts and the valuation of future profitability and growth through environmental investment projects. The results support the importance of financial environmental information to investors when valuing companies. However, non-financial environmental disclosures were not found to have relevance to investors. The insignificant results may be explained by firms using non-financial environmental disclosures in self-promotion, whereby they overstate positive environmental contributions and understate negative impacts. Alternatively, the non-financial disclosures could be more associated with long-term strategic decisions while Spanish market investors focus more on the short-term strategies of firms (Moneva & Cuellar, 2009). Similarly, the link between firm value and CSRD for Finnish firms was tested by Schadewitz and Niskala (2010). The Ohlson (1995) model was employed using an indicator variable of whether or not a firm followed the GRI guidelines to represent CSRD. They found that CSRD which followed the GRI guidelines aided investors in making a more precise market valuation of the firm. This indicates that information from CSRD reduces information asymmetry and has incremental value to investors (Schadewitz & Niskala, 2010).
Theoretical Framework and Hypotheses Development
The objective of this study is to investigate whether investors consider CSRD to be decision-useful information and thus use it in their market valuations of firms. It is important to note again that CSRD is voluntary, so one would expect firms to derive some benefit from the practice; otherwise they would not choose to do it. Management must weigh the benefit of investors having more information about the environmental and social impacts of the firm, and therefore a better understanding of the firm, against the potential costs of other stakeholders reacting negatively to the disclosed information (e.g. pressure for environmental regulation) (Cormier & Magnan, 2007). Agency theory is drawn on to explain why firms would undertake voluntary CSRD.
The typical structure of a company is to have a management team (the agents) in charge of the operational activities and running the business on behalf of the external shareholders (the principals). This structure results in a separation of control and ownership.
As a consequence, information asymmetry arises as managers have a greater knowledge of the organisational activities and how the shareholders' funds are being used (Myers & Majluf, 1984). This information asymmetry generates uncertainty in investors' assessments of the potential future earnings and cash flows of the company. Investors face the risk of adverse selection as they may overvalue an investment and put their money in a company that does not generate their required rate of return. This risk generated by information asymmetry affects how much investors are willing to pay for companies' shares. Given the lack of information, investors are likely to assume the worst and, as a result, will discount the company's share price to compensate for the associated risk (Myers & Majluf, 1984).
Reporting is a key tool managers use to communicate firm performance and operational activities to external investors, hence reducing information asymmetry (Healy & Palepu, 2001). Communication between these parties is essential to the functioning of efficient markets. External investors require relevant corporate information when determining the current value of a firm (Healy & Palepu, 2001). Discretionary disclosures are made in an attempt to reduce the information asymmetry apparent between a firm's managers and its external investors (Brammer & Pavelin, 2006). Reports and disclosures make the actions of managers more transparent to investors. Transparency reduces investors' uncertainty, allowing them to make more accurate estimates of future earnings and cash flows. Enhanced transparency and more accurate estimates of future earnings mean investors can determine a more accurate share price for the company (Cormier & Magnan, 2007). Additionally, CSRD provides qualitative information regarding a firm's corporate responsibility. The benefit of non-financial information is that managers often disclose more information about their activities than is required by law (Cormier & Magnan, 2007). Thus, using agency theory one can argue that CSRD is carried out because it reduces information asymmetry, allowing investors to make more accurate market valuations. The information disclosed through CSRD will be value relevant if it fulfils this function and will provide incremental value to investors as they include the CSRD in the total set of information (i.e. financial reports and other company disclosures) they use in assessing a firm's value (Power, 1991). Drawing on the literature reviewed earlier and the information asymmetry arguments of agency theory, the following hypothesis is derived.
H1: Higher levels of CSRD are expected to be associated with higher market values of equity.
In addition, firms that operate in environmentally sensitive industries tend to have different CSRD practices than companies that do not. Environmentally sensitive industries include forestry; metal mining; coal mining and oil and gas exploration; paper and pulp mills; chemicals, pharmaceuticals and plastics manufacturing; iron and steel manufacturing; and electricity, gas and waste water. Given the sensitive nature of these industries, firms operating within them are exposed to higher levels of environmental publicity and public concern. This can induce public policy pressure, which acts as an incentive for these firms to provide greater levels of CSRD disclosures than firms which do not operate in environmentally sensitive industries (Cho & Patten, 2007; Cormier & Magnan, 2007). More extensive disclosures can further reduce information asymmetry and the risk of adverse selection for investors in companies operating in environmentally sensitive industries. Thus, it is expected that firms' market values will be incrementally higher when a higher level of CSRD is disclosed by firms that operate in environmentally sensitive industries. The following hypothesis is derived for testing in the context of this study.
H1a: Higher levels of CSRD by firms operating in environmentally sensitive industries are expected to be associated with higher market values of equity.
UK and Japanese firms are at the forefront of CSRD. These two countries have led the rest in making corporate responsibility disclosures over the last decade (KPMG, 2008). Of the top 100 Japanese firms, ninety-three percent released CSRD, and of the top 100 UK firms, ninety-one percent released CSRD (KPMG, 2008). Such reporting is now considered the norm for the top firms of these two countries. As such, the UK and Japan provide an interesting context to assess the value relevance of CSRD. In the UK, CSRD is a voluntary reporting practice, so one would assume that many of the country's largest companies have had a valid reason for undertaking CSRD over a long period of time, one that also explains why more companies have started to produce CSRD. The reporting practice has become well established in Japan too. CSRD is also considered a voluntary reporting practice in Japan, and it is reported that the majority of Japanese companies' CSRD is prepared using the GRI reporting framework (Nuzula & Kato, 2011). However, the ministries for the environment and for economy, trade and industry have issued environmental reporting and accounting guidelines to aid companies with their CSRD (Kolk, 2008; Krechowicz & Fernando, 2009). Despite CSRD being considered a voluntary practice in Japan, firms listed on the Japanese stock exchange must adhere to environmental performance and reporting regulations. Such regulations are also expanding to include related economic and social issues (KPMG, 2008). This is not the case in the UK. Regulation via the Companies Act seems imminent, but has not yet been enforced (KPMG, 2008). Agency theory arguments discussed earlier can be applied to provide reasoning for such practices by UK and Japanese firms. CSRD provides additional information to investors, beyond what is required to be disclosed in the annual report. This practice reduces information asymmetry as shareholders are made more aware of the firm's activities with regard to its societal and environmental behaviour (Cormier & Magnan, 2007). Investors demand this information and consider it alongside financial information when valuing companies because it helps them to better assess the future economic benefits of the company and the associated idiosyncratic risk. This works to reduce the risk of adverse selection and enhances firm value as investors consider the new information and impound it into the valuation of the share price (Healy & Palepu, 2001). However, when considering the CSRD practices and surrounding reporting environments of the two countries, it becomes evident that potentially different reasons drive their similar reporting practices. Consequently, comparing the value relevance of CSRD to investors in UK and Japanese companies becomes an interesting and important research question for academics, companies, equity market participants, standard setters, and regulators as they consider the growing concerns for the environment and society, the demand for corporate accountability, and the future of CSRD.
Data
Two separate samples are used in conducting this research. The first sample consists of 91 UK firms and the second sample consists of 85 Japanese firms. These samples are taken from the KPMG International Survey of Corporate Responsibility Reporting (KPMG, 2008). KPMG compiled data about the disclosure practices of the top 100 companies in 22 countries, based on revenue rankings. The survey reviewed information from publicly available corporate responsibility or sustainability reports, company websites, and annual financial reports. The information evaluated was issued by companies into the public domain between 2007 and 2008 (KPMG, 2008). (Footnote 1: KPMG does not disclose the time at which each firm released its corresponding corporate responsibility disclosures. It is believed that such information is released at a similar time to the publication of the annual report. The data employed in this study are therefore taken for companies' fiscal yearends falling in the period January 2008 to December 2008.) From this information KPMG constructed measures relating to the CSRD of each company. Two of the CSRD measures that KPMG construct are employed in this research. The survey provides a credible and independent source of information on firms' CSRD practices. The first CSRD measure is a composite measure which gives a numeric score of the disclosure trends. Ten categories are represented in this score: overall environmental strategy, stakeholder engagement, corporate management systems, reporting, governance, climate change, supply chain, responsible investment, assurance, whether or not the GRI guidelines are used when preparing reports, and the GRI Application level achieved. A number of criteria were examined to assess each company's disclosure of the above categories. A score of one was given when a criterion was achieved, with the final composite score having a possible range of 0 to 87. The second measure of CSRD is an indicator variable for whether or not a company followed the GRI reporting framework when preparing its CSRD.
The GRI's Sustainability Reporting Framework aims to provide guidance to any organisation on how to report its sustainability performance (GRI, 2011). KPMG report that the majority of the top 100 companies in the examined countries and the top 250 global companies use the GRI reporting framework when preparing corporate responsibility reports (KPMG, 2008). Thus, the GRI measures of CSRD provide a reasonable indication of the type of corporate responsibility disclosures provided by companies. However, the composite measure offers a deeper indication of the level of disclosures made. (Footnote 2: KPMG also supplied a measure of the level of the GRI reporting framework complied with by companies. The Global Reporting Initiative specifies three application levels of the GRI reporting framework, levels A, B, and C. Level A is deemed the most comprehensive, as companies must report on all 50 GRI core indicators. Level B is the next compliance level down, where companies must report on 20 of the core indicators. Level C is the least comprehensive, as companies only have to report on 10 of the indicators. Companies may also have these reports independently assured, which is indicated by a '+' sign. KPMG apply a numeric representation of the overall GRI Application level each company achieves. The GRI Application level measure ranges from zero to six, where zero indicates that the GRI reporting framework has not been used in preparing CSRD, 1 = C level compliance, 2 = C+ level compliance, 3 = B level compliance, 4 = B+ level compliance, 5 = A level compliance, and 6 = A+ level compliance. However, in reviewing the GRI Application level scores given to each company in the samples, we identified some inconsistencies with the GRI reporting framework indicator value (refer to Table 1 for the definition of this variable). As a result the GRI Application level measure is not used in this study.)
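To make the scoring mechanics concrete, the sketch below tallies a KPMG-style composite score as one point per criterion met and derives the GRI indicator; it is a minimal Python illustration, and the criterion names are invented placeholders rather than KPMG's actual checklist items.

```python
# Hypothetical criterion flags; the real KPMG (2008) checklist contains up to 87 criteria
# spread over the ten categories described above.
criteria_met = {
    "environmental_strategy_disclosed": True,
    "stakeholder_engagement_described": False,
    "report_independently_assured": True,
    "gri_framework_used": True,
}

comp = sum(criteria_met.values())               # composite score: one point per criterion achieved
gri = int(criteria_met["gri_framework_used"])   # indicator: 1 if the GRI framework was used, else 0
print(comp, gri)                                # e.g. 3 1
```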
The remaining data are taken from the Compustat Global database. All financial accounting information, share prices and outstanding shares are collected from this database. For the UK sample, nine firms are eliminated from the original 100 analysed by KPMG because corresponding financial information could not be identified. This results in the final sample of 91 UK firms. A total of fifteen firms are eliminated from the full sample of 100 Japanese firms due to an inability to identify corresponding financial data. This results in the final sample of 85 Japanese firms.
Empirical Model
Value relevance studies in the accounting literature examine the relationship between accounting information and equity market valuations. More specifically, these studies test whether accounting information explains cross-sectional variation in share prices. Information used by investors is said to be impounded into the stock price of a firm, thus reflecting the present value of a firm's future economic benefits. By assessing the market value or stock price of firms producing CSRD, an indirect test of the future benefits of such disclosure is performed (Ahmed & Falk, 2006). Ohlson (1995) derived a valuation model that evaluates firms' equity market values as a function of capitalised current earnings, current book value, and other value-relevant information.
The examination can be done with different measurements of equity market valuations. Models may be employed using either the level of firm value or share price, or share price returns (Barth, Beaver, & Landsman, 2001). The model adopted should be driven by the research question, hypotheses developed, and econometric considerations. The difference between studying the level of firm value and share price returns is that the former is concerned with determining what is reflected in firm value and the latter is interested in determining what is reflected in changes in value over a specific period of time (Barth et al., 2001). As the purpose of this research is to determine whether CSRD may be considered by investors when pricing a firm, the primary model employed examines the level of firm value in relation to financial and non-financial accounting information. Ohlson's (1995) model provides the basis for the development of the least squares regression models used in this paper, with CSRD representing other potentially value-relevant information. The Ohlson (1995) model, in its valuation function form, is as follows:

MV_t = BV_t + α_1 AE_t + α_2 v_t,   (1)

where MV_t is the market value of equity at time t, BV_t is the book value of equity at time t, AE_t is abnormal earnings for the period ending at time t, and v_t is other value-relevant information at time t. AE_t is calculated as the difference between net income for period t and opening book value of equity multiplied by the required rate of return, that is, AE_t = NI_t − r × BV_{t−1}.
Firms' required rates of return are needed to calculate abnormal earnings and implement the Ohlson (1995) model. However, this information is not observable in practice. An alternative would be to use analyst forecasts to calculate an implied required rate of return, but this information was also not available for the selected samples. As such, the current year's earnings are used in place of abnormal earnings (Ahmed & Falk, 2006). Following Barth and Clinch (2009), variables have been deflated by the number of the firm's outstanding shares. This is done to mitigate any scale effects present in the samples. There has been debate regarding the appropriate method of standardisation. Different studies have employed different deflator variables, so Barth and Clinch (2009) test six versions of the Ohlson (1995) model commonly used in accounting research to see which is most effective at mitigating scale effects. The six specifications for the dependent variable include market value of equity, price, equity market-to-book ratio, price-to-lagged price, returns, and equity market value-to-lagged market value ratio. Barth and Clinch (2009) find that standardising by the number of outstanding shares (i.e. the price specification) is, in general, the most effective at mitigating scale effects. They report that the price model more consistently resulted in correct inferences regarding whether the coefficients equal zero, and resulted in lower bias and mean absolute error in the coefficients and regression R², regardless of the type of scale effect (Barth & Clinch, 2009, p. 283). The market value of equity model was also generally effective at mitigating scale effects, but to a lesser extent than the price specification. Consequently, we re-estimate the regressions using this specification as a robustness test. The remaining four variations of the Ohlson (1995) model have not been used in robustness testing due to Barth and Clinch's (2009) conclusion that they are less effective at mitigating scale effects and may lead to incorrect inferences. The price specification model employed in the primary test is as follows:

P_{i,t+3} = β_0 + β_1 BV_{i,t} + β_2 E_{i,t} + ε_{i,t}   (2)

As the objective of this research paper is to investigate the incremental value of CSRD, the association between financial accounting information and firm value must be tested first. This is done by implementing the above regression model (2). Then a measure of CSRD can be incorporated to test the value relevance that such disclosures have for shareholders. This is done by implementing the following regression model:

P_{i,t+3} = β_0 + β_1 BV_{i,t} + β_2 E_{i,t} + β_3 CSRD_i + ε_{i,t}   (3)

An extension of model (3), incorporating an indicator for environmentally sensitive industries and its interaction with CSRD, is used to test hypothesis H1a:

P_{i,t+3} = β_0 + β_1 BV_{i,t} + β_2 E_{i,t} + β_3 CSRD_i + β_4 ES_i + β_5 (ES_i × CSRD_i) + ε_{i,t}   (4)

Two measures of CSRD are used when testing equations (3) and (4). The first is the composite score of each company's CSRD practices, as measured by KPMG in their 2008 survey (KPMG, 2008). The second is a measure indicating whether the GRI reporting framework was employed in each company's preparation of CSRD. Refer to Table 1 for detailed descriptions of all of the variables employed in the testing of equations (2) through (4).
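A minimal estimation sketch of equations (2) to (4) is given below, assuming the variables of Table 1 are already held in a pandas DataFrame; the file name and column names (price_t3, bv_ps, eps, comp, es) are hypothetical placeholders rather than the actual dataset fields.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: price_t3 = share price three months after fiscal year end,
# bv_ps = book value of equity per share, eps = earnings per share,
# comp = KPMG composite CSRD score, es = environmentally sensitive industry dummy.
df = pd.read_csv("uk_sample.csv")

eq2 = smf.ols("price_t3 ~ bv_ps + eps", data=df).fit()
eq3 = smf.ols("price_t3 ~ bv_ps + eps + comp", data=df).fit()
eq4 = smf.ols("price_t3 ~ bv_ps + eps + comp + es + comp:es", data=df).fit()

# H1: adjusted R-squared should rise from (2) to (3) and beta_3 (comp) should be positive;
# H1a: beta_5 (the comp:es interaction) in (4) should be positive.
print(eq2.rsquared_adj, eq3.rsquared_adj, eq4.rsquared_adj)
print(eq3.params["comp"], eq3.pvalues["comp"] / 2)        # halved p-value is one-tailed,
print(eq4.params["comp:es"], eq4.pvalues["comp:es"] / 2)  # valid only if the sign is as predicted
```

Replacing comp with gri in the same formulas reproduces the GRI variant of each equation.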
The market value specification Ohlson (1995) model is employed as a robustness test. Barth and Clinch (2009) find evidence that this version of the Ohlson (1995) model is relatively consistent in producing correct inferences when data have scale effects. It was not found to be as generally effective as the price specification Ohlson (1995) model, but was more effective than the other commonly employed variations of the Ohlson (1995) model.
Table 1. Variable definitions.
P_{i,t+3} is the dependent variable, a measurement of the market share price of company i. The closing share price on the last day of the month three months after the end of financial year t is used, to allow time for the issuance of corporate reports and subsequent examination by users of the reports.
BV_{i,t} is the closing book value of equity per share for company i. It is calculated as the difference between the company's total assets and total liabilities, scaled by the number of outstanding shares at the end of the company's financial year t.
E_{i,t} is a measure of the earnings per share for company i. It is calculated as income before extraordinary items deflated by the number of outstanding shares at the fiscal yearend t.
CSRD_i (Measure 1: COMP): COMP is a numerical measure of the disclosure trends of a company's corporate responsibility reporting (CSRD). This measure is not deflated because it is independent of the company's size. COMP is the composite measure derived from the KPMG (2008) survey. A comprehensive description of the measurement is given in section 3.1.
CSRD_i (Measure 2: GRI): GRI is an indicator variable for a company's corporate responsibility reporting (CSRD). This measure is also not deflated because it too is independent of the company's size. GRI indicates whether or not company i has used the GRI reporting framework in preparing its CSRD. If it has, GRI is equal to 1. If it has not, GRI is equal to 0.
ES_i represents companies in environmentally sensitive industries, based on the classification used in De Villiers, Naiker, and Van Staden (2011). These industries include: forestry; metal mining; coal mining and oil and gas exploration; paper and pulp mills; chemicals, pharmaceutical and plastics manufacturing; iron and steel manufacturing; and electricity, gas, and waste water. For company i, ES_i is equal to 1 if the company operates in an environmentally sensitive industry, and 0 otherwise.
ES_i × CSRD_i represents the interaction between environmentally sensitive industries (ES) and corporate responsibility reporting (CSRD). It is calculated as ES multiplied by the CSRD measure (for COMP and GRI).
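The per-share deflation defined in Table 1 can be computed directly from Compustat-style fields; the sketch below is illustrative, and the column names are hypothetical rather than the actual Compustat Global mnemonics.

```python
import pandas as pd

df = pd.read_csv("compustat_extract.csv")  # hypothetical extract of Compustat Global fields

# Per-share variables as defined in Table 1, deflated by shares outstanding at fiscal year end.
df["bv_ps"] = (df["total_assets"] - df["total_liabilities"]) / df["shares_outstanding"]
df["eps"] = df["income_before_extraordinary_items"] / df["shares_outstanding"]

# COMP, GRI and ES are firm-level scores or indicators and are deliberately left undeflated.
```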
Thus, we employ the market value of equity specification to test the robustness of the results derived using the price specification regression model. To further test the robustness of the results obtained, the regression models are estimated again using equity market values as at fiscal yearends. Investors may be timelier in incorporating financial and non-financial CSRD information into firms' market values than the three-month lag allows. Investors may anticipate the information before it is disclosed and impound it in the share price at the end of the company's financial year. Thus, both the primary test and the robustness test are re-estimated using closing stock prices (and the number of outstanding shares for the robustness test) as at the last day of company i's financial year.
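The two robustness variations described above amount to swapping the dependent variable; a sketch with the same hypothetical column names (mve_t3 = price times shares three months after year end, price_t0 = fiscal year-end price, bv_total and earnings_total = undeflated book value and earnings) is:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("uk_sample.csv")  # hypothetical file containing both deflated and undeflated variables

# Market value specification: undeflated totals, following the variation in Barth and Clinch (2009).
mve_eq3 = smf.ols("mve_t3 ~ bv_total + earnings_total + comp", data=df).fit()

# Timing robustness: dependent variable measured at fiscal year end instead of three months later.
t0_eq3 = smf.ols("price_t0 ~ bv_ps + eps + comp", data=df).fit()

print(mve_eq3.rsquared_adj, t0_eq3.rsquared_adj)
```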
RESULTS
This section provides an analysis of the results for the UK sample and for the Japanese sample, respectively. The value relevance of CSRD in each of the samples is assessed by employing equation (2) and equation (3) sequentially. As CSRD is a voluntary reporting practice, it is expected that firms will choose to make such disclosures on the belief that the benefits of doing so will outweigh the associated costs. The additional information that CSRD provides will aid in reducing uncertainty and risk faced by investors due to information asymmetry. It is therefore expected that CSRD will have incremental value to investors as they can include the CSRD disclosures in the full information set used to assess firm value. As such, the adjusted R² is expected to increase from equation (2) to equation (3) with the inclusion of the CSRD variable in the regression. Also, the coefficient of the CSRD variable (β_3) is expected to be positive and significant, indicating that there is a positive relationship between the level of CSRD and firms' market value, as hypothesised in H1. In a further analysis, the results from equation (4) are used to assess whether higher levels of CSRD provided by firms operating in environmentally sensitive industries are used differently by investors in determining the market value of a firm than the CSRD of firms that do not operate in environmentally sensitive industries. The coefficient for the interaction between the ES and CSRD variables (β_5) can be used to examine this issue. Equation (4) is run twice, firstly using COMP as the measure of CSRD and then again using GRI as the CSRD variable. As hypothesis H1a states, it is expected that firms' market value will be incrementally higher when a higher level of CSRD is disclosed by firms that operate in environmentally sensitive industries. Thus, the adjusted R² is expected to increase from equation (2) to equation (4) and the coefficient for the interaction term between ES and the CSRD variable (β_5) is expected to be positive and significant.
Results for the United Kingdom
The descriptive statistics for the UK sample derived from using the price specification Ohlson (1995) model are provided in Table 2. On average, the share price for the sample of UK companies is 23.454 (with a median of 5.465). The maximum share price is 340.56 and the minimum share price is 0.19. This indicates that the data may be positively skewed. The mean (and median) appear to be closer to the minimum value of the sample's price observations, suggesting that most of the sample is concentrated at the lower end of the distribution while a few observations have higher price values. The book value of equity per share and earnings per share also appear to be positively skewed. The book value of equity per share for the UK sample has an average of 6.825 and a median of 2.723. The maximum book value of equity per share is 111.898 and the minimum is -0.394. The average value of earnings per share for the UK sample is 0.741 (with a median of 0.421). Earnings per share has a maximum value of 10.927 and a minimum value of -5.426. COMP (the composite score) and GRI (an indicator for using the GRI Reporting Framework) are the two measurements capturing the sample's CSRD disclosures. COMP has a mean score of 30.33 and a median score of 31 for the UK sample. From a possible range of 0 to 87, the UK sample has a maximum score of 64 and a minimum score of 3. GRI has a mean of 0.374 and a median of 0. The GRI mean indicates that 37.4% of the sample uses the GRI reporting framework, translating into thirty-four out of the ninety-one companies in the sample employing the GRI reporting framework. COMP and GRI are the two measures used to represent CSRD. Refer to Table 1 for a detailed description of the variables used in the regression analyses.
The Pearson correlation coefficients are provided in Table 3. This offers an initial indication that share prices are positively associated with the two measures of CSRD disclosure, COMP and GRI. Also, most of the correlations between the independent variables are relatively low, below 0.7. The exception is the correlation coefficient between book value of equity per share (BV) and earnings per share (E), which is slightly above 0.7. However, these two variables are the major explanatory variables in the Ohlson (1995) model, so despite their correlation in explaining changes in the share price both remain included in the regression analyses. Table 4 tabulates the results for the UK sample from the regression models (2) through (4), with the two measures of companies' CSRD disclosures, COMP and GRI, tested separately.
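The correlation screen described above can be reproduced with a one-line Pearson correlation matrix; again the column names are the hypothetical ones used in the earlier sketches.

```python
import pandas as pd

df = pd.read_csv("uk_sample.csv")  # hypothetical file with the per-share and CSRD variables

# Pairwise Pearson correlations among the regressors; |r| above roughly 0.7 is the informal flag used here.
corr = df[["bv_ps", "eps", "comp", "gri", "es"]].corr(method="pearson")
print(corr.round(2))
```

As in the text, a high correlation between book value per share and earnings per share would not by itself justify dropping either variable, since both are core to the Ohlson specification.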
The coefficient for book value of equity per share is negative and significant in equation (2) and in equation (3) using the composite score of CSRD. In the other variations of the equations the book value of equity coefficient is negative but not significant. The negative relation between the market share price and the book value of equity per share can be attributed to standardising the variables by the number of outstanding shares to control for scale effects. When the variables are not standardised in the robustness tests, the association between the market value of equity and the book value of equity becomes positive (see Table 8: Panel A). The coefficient for the earnings per share measure is positive and significant across equations (2) to (4) and when either COMP or GRI is used to measure CSRD. The adjusted R² for equation (2) is 0.089. In equation (3), the adjusted R² measure improves with the addition of the variable which measures CSRD disclosures. The adjusted R² is 0.12 with the composite score as the CSRD variable and is 0.149 with the GRI reporting framework indicator as the CSRD variable. Also, the coefficients for both the COMP and GRI variables are positive and significant at the 5% and 1% levels, respectively. These results provide support for hypothesis H1. They suggest that CSRD disclosures provide incremental value to investors: when CSRD (both COMP and GRI) is added to the regression, the adjusted R² increases and the measure of CSRD (both COMP and GRI) is positively and significantly associated with the market share price. Equation (4) introduces a variable representing environmentally sensitive industries (ES) and an interaction term between this industry measure (ES) and the CSRD measure to capture the incremental effect on the share price. When equation (4) is run using the composite score of CSRD, the adjusted R² increases from 0.089 (in equation (2)) to 0.13. Likewise, the adjusted R² increases to 0.157 when equation (4) employs the GRI measure of CSRD. However, for both variations of equation (4) (using COMP and GRI) the CSRD variable loses the significance it had in equation (3), which did not account for environmentally sensitive industries and their interaction with CSRD.
Table 4 notes: The price specification is used for the regression models; thus the number of shares outstanding is used as the deflator. The price variable is taken three months after the fiscal year end of each company to allow a reasonable time lag between the fiscal year end and the publication of corporate disclosures. The model is also tested using closing market share prices at the fiscal year end as the dependent variable; the results are qualitatively unaffected. Refer to Table 1 for a detailed description of the variables. The p-values are reported in parentheses. Significance tests for variables with directional predictions are one-tailed; all others are two-tailed. Statistical significance at the 0.10, 0.05, and 0.01 level is denoted by *, **, ***, respectively.
The industry indicator variable, ES, is not significant, but the interaction term between ES and CSRD is positive and significant at the 10% level for both measures of CSRD (COMP and GRI). This result provides evidence consistent with hypothesis H1a, which states that higher levels of CSRD by firms operating in environmentally sensitive industries are expected to be associated with higher market values of equity. The model was also tested using closing market share price data as at the end of the financial year as a robustness test.
Investors may have been timelier in impounding financial and non-financial information into the share price than the three-month time lag used in the primary test. The results are not affected by the use of fiscal yearend share prices. COMP and GRI are the two measures used to represent CSRD. Refer to Table 1 for a detailed description of the variables used in the regression analyses.
Results for Japan
The Pearson correlation coefficients for the Japanese sample are presented in Table 6. The composite score and the GRI reporting framework indicator are negatively correlated with the market share price. This contrasts with the correlations between the share price and CSRD in the UK sample. Correlation coefficients between the independent variables are at an acceptable level, except for the correlation between book value of equity per share and earnings per share. These two variables appear to be highly correlated, with a correlation coefficient above 0.7. This is similar to the UK sample, and again no attempt has been made to exclude either of these variables from the primary regression because they are a vital part of the value relevance model derived by Ohlson (1995). COMP and GRI are the two measures used to represent CSRD. Refer to Table 1 for a detailed description of the variables used.
The results from the primary regression (equations (2) to (4)) for the Japanese sample are provided in Table 7. As with the UK sample, equations (3) and (4) were tested twice; once using the composite score measure of CSRD and again using the GRI reporting framework indicator as the CSRD measure. The results are consistent across the two measures of CSRD. The coefficient of the book value of equity per share is consistently negative and significant. The negative direction of the association between book value of equity per share and the market share price can be attributed to the scalar (number of shares outstanding), because when the market value of equity model is employed the association becomes positive (see Table 8: Panel B). The coefficient of the earnings per share variable is positive and significant for all of the equations. The adjusted R² value is constant across all the equations and their variations, in terms of the CSRD measure used, at 0.983. Furthermore, the coefficient of the COMP and GRI variables in equation (3) is insignificant. These results do not provide support for hypothesis H1, as they indicate that there is no association between the market share price and CSRD. Equation (4) includes a variable for environmentally sensitive industries and for the interaction between these industries and CSRD. The ES industry variable is insignificant across both variations of the equation (COMP and GRI). The results of this equation also provide further evidence against the value relevance of CSRD for the Japanese sample. The coefficient of the interaction term, for both COMP and GRI, is positive but insignificant, suggesting that higher levels of CSRD in companies operating in environmentally sensitive industries are not associated with higher market share prices. Thus, the results for the Japan sample do not provide support for hypothesis H1a. As with the UK sample, the model was also tested using closing market share price data as at the end of the financial year of company i, for robustness purposes. The use of the share price data as at fiscal yearend does not impact the results of the model using share price data as at the end of the month three months after the fiscal yearend.
Table 7. Value relevance of CSRD for the Japan sample: regression results for the price specification Ohlson (1995) model.
Table 7 notes: The price specification is used for the regression models; thus the number of shares outstanding is used as the deflator. The price variable is taken three months after the fiscal year end of each company to allow a reasonable time lag between the fiscal year end and the publication of corporate disclosures. The model is also tested using closing market share prices at the fiscal year end as the dependent variable; the results are qualitatively unaffected. Refer to Table 1 for a detailed description of the variables used. The p-values are reported in parentheses. Significance tests for variables with directional predictions are one-tailed; all others are two-tailed.
Barth and Clinch (2009) test six variations of the Ohlson (1995) model that are commonly used in accounting research to assess which models are the most effective at mitigating scale effects. They find that the price specification Ohlson (1995) model generally mitigates the scale effects of their simulated data; hence this is the model we employ as the primary test. Barth and Clinch (2009) also find that the market value of equity specification is more effective at mitigating scale effects than the four other variations of the Ohlson (1995) model, but is less effective than the price specification variation. Therefore, as a robustness test, we re-estimate equations (2) to (4) using the market value of equity specification as the dependent variable. Correspondingly, the independent variables are no longer stated in the per-share specification (i.e. they are not standardised); instead, total book value of equity and total earnings are used. The results for both samples are provided in Table 8. The UK sample's results are tabulated in Panel A of Table 8. Equation (2), based on financial information only, has an adjusted R² value of 0.368. Total book value of equity and total earnings are positively and significantly associated with the market value of equity, which is consistent with the results from the primary test. The adjusted R² improves for equation (3), to 0.372 when the composite score is used and to 0.390 when the GRI reporting framework indicator is used. Both these measures of CSRD are related to the market value of equity in the expected positive direction. However, only the coefficient of the GRI measure of CSRD is significant. The adjusted R² decreases for equation (4) using the composite measure of CSRD, relative to equation (2) (from 0.368 for equation (2) to 0.360 for equation (4)). Furthermore, the COMP variable and the interaction term are insignificant for this specification of equation (4). On the other hand, the use of the GRI indicator variable in equation (4) results in an increase in the adjusted R² to 0.383 (relative to equation (2)) and a positive and significant coefficient on the GRI variable. Yet, in relation to the GRI variation of equation (3), the adjusted R² decreases (from 0.390 for equation (3) to 0.383 for equation (4)) and the interaction term is not significant. The coefficients for environmentally sensitive industries are insignificant across the two variations of equation (4). Overall, for the UK sample, there is moderate evidence in support of hypothesis H1 but no evidence in support of hypothesis H1a.
Table 8. Value relevance of CSRD for the UK sample and the Japan sample: regression results for the market value specification Ohlson (1995) model.
The dependent variable represents the market value of equity of company i three months after its financial year end. It is calculated as the closing market share price multiplied by the number of outstanding shares on the last day of the third month after the financial year end of company i. The market value three months after the end of the financial year is used to allow time for the publication and analysis of corporate disclosures.
[Table 8 columns: Equation (2); Equation (3) with CSRD as COMP; Equation (3) with CSRD as GRI; Equation (4) with CSRD as COMP; Equation (4) with CSRD as GRI.]
Total book value of equity represents the book value of equity for company i as at the end of the financial year, calculated as Total Assets less Total Liabilities for company i.
Total earnings is the Income Before Extraordinary Items figure for company i's financial year. Refer to Table 1 for a description of the remainder of the variables used. The model is also tested using the market value of equity at the fiscal year end of company i as the dependent variable for both the UK sample and the Japan sample; the results are qualitatively unaffected.
The p-values are reported in parentheses. The significance tests for the following variables are one-tailed: , , , . All others are two-tailed. Statistical significance at the 0.10, 0.05, and 0.01 level is denoted by *, **, ***, respectively..
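For clarity, the three market value specification regressions compared above can be written out as follows. The symbols and coefficient names are illustrative rather than the authors' original notation, with MVE the market value of equity, BVE total book value of equity, NI total earnings, CSRD the COMP or GRI measure, and ES the environmentally sensitive industry indicator:

```latex
% Illustrative statement of the market value specification regressions (2)-(4).
\begin{align*}
\text{(2)}\quad MVE_i &= \alpha_0 + \alpha_1\, BVE_i + \alpha_2\, NI_i + \varepsilon_i \\
\text{(3)}\quad MVE_i &= \beta_0 + \beta_1\, BVE_i + \beta_2\, NI_i + \beta_3\, CSRD_i + \varepsilon_i \\
\text{(4)}\quad MVE_i &= \gamma_0 + \gamma_1\, BVE_i + \gamma_2\, NI_i + \gamma_3\, CSRD_i
                        + \gamma_4\, ES_i + \gamma_5\,(CSRD_i \times ES_i) + \varepsilon_i
\end{align*}
```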
Under the market value specification Ohlson (1995) model, the results are broadly consistent with those from the primary test. The adjusted R² value increases in equation (3) using both measures of CSRD, indicating that CSRD information, along with financial information, improves the explanatory power of market values of equity. There is also support for the expected positive association between market values of equity and levels of CSRD, but only when use of the GRI reporting framework is used to indicate the level of CSRD. However, in contrast to the results of the primary test, no evidence is found for a relation between higher levels of CSRD by firms operating in environmentally sensitive industries and higher market values of equity. The coefficients of the interaction term, with both COMP and GRI, were insignificant and the adjusted R² value decreased relative to equation (3).
The results from the robustness tests of the Japan sample are provided in Panel B of Table 8. The adjusted R² for equation (2) is 0.947, and the total book value of equity and total earnings are positively and significantly associated with the market value of equity. Similar to the results under the primary test, the adjusted R² remains relatively constant across the three equations (for both COMP and GRI as measures of CSRD); however, it does decrease to 0.946 for both CSRD measures in equation (4). Also, the coefficients for total book value of equity and total earnings are positive and significant in all the equations. Under the price specification model the coefficient for earnings per share was positive and significant for all the equations too, yet the coefficient for book value of equity per share was consistently negative and significant in the price specification model (see Table 7). This indicates that the negative association between the book value of equity per share and market share price is due to the use of the number of outstanding shares as a scalar. In equation (3), the introduction of a CSRD variable (either COMP or GRI) does not indicate that higher levels of CSRD are associated with higher levels of market value of equity, as the coefficients for COMP and GRI are insignificant. This result is consistent with the results from the primary test of the Japan sample. Moreover, when environmentally sensitive industries are introduced into the robustness test in equation (4), the coefficients on the CSRD variable (both COMP and GRI), the ES variable, and the interaction between CSRD and ES remain insignificant. These results are also consistent with the results obtained from the price specification Ohlson (1995) model.
Overall, the results for the Japan sample are generally robust to using the market value specification Ohlson (1995) model. No support is found for higher levels of CSRD being associated with higher market values of equity (hypothesis H1) and no support is found for higher levels of CSRD by firms operating in environmentally sensitive industries being associated with higher market values of equity (hypothesis H1a). The regression models for both samples were also estimated using the market value of equity (closing share price multiplied by the number of outstanding shares) of company i at the end of its 2008 financial year. This does not impact the results, as was the case for the primary tests.
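As a hedged illustration of how a specification of this kind could be estimated (this is not the authors' code; the data set and column names below are hypothetical), equation (4) with the CSRD-by-ES interaction can be fitted by ordinary least squares as follows:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level data set; column names are illustrative only.
# mve  : market value of equity three months after the financial year end
# bve  : total book value of equity
# ni   : income before extraordinary items
# csrd : CSRD measure (composite score COMP or GRI indicator)
# es   : 1 if the firm operates in an environmentally sensitive industry, else 0
firms = pd.read_csv("firms.csv")

# Equation (4)-style specification including the CSRD x ES interaction term.
model = smf.ols("mve ~ bve + ni + csrd + es + csrd:es", data=firms).fit()
print(model.summary())        # coefficients and p-values
print(model.rsquared_adj)     # the statistic compared across equations (2)-(4)
```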
DISCUSSION
Overall, the samples of some of the UK's and Japan's largest companies provide quite contrasting results. The price specification Ohlson (1995) model provides evidence that is consistent with hypotheses H1 and H1a for the UK sample but not for the Japan sample. The findings in this context are especially interesting because the UK's and Japan's largest companies have been world leaders in undertaking CSRD for some time (KPMG, 2008). With regard to the UK sample, the adjusted R² increases when the CSRD measure (both COMP and GRI) is added to the regression equation and both measures of CSRD are positively and significantly related to the market share price (see Table 4). However, in terms of the Japan sample, the adjusted R² value remains constant for equations (2) to (4), regardless of the CSRD measure used. Also, CSRD (both measures) is not significantly associated with the market share price.
The different results are potentially due to inherent differences between the UK sample and the Japan sample. CSRD is a voluntary practice for companies in the UK and in Japan (Kolk, 2008). However, publicly listed companies on the Japanese Stock Exchange have to adhere to certain environmental and social disclosure regulations (KPMG, 2008). Given the regression results, it seems that only investors in the UK companies include CSRD disclosures in the total information set they use when valuing a company. Investors in Japanese companies appear to include financial information in their total information set used when valuing a company, but the non-financial CSRD information does not seem to provide any incremental value relevant information to their investment decision-making process. From this, one may infer that UK companies consider their shareholders when making the decision to undertake CSRD. Management of top UK companies may perceive that CSRD will provide investors with the benefit of reducing information asymmetry, thus allowing them to make better assessments of the future economic benefits and risks of the company, from which they can more accurately value the company. This can be reflected in increases in the market share price because the reduction in information asymmetry means that investors do not have to assume the worst about the company's corporate responsibility practices when deciding how much they are willing to pay for its shares in the market.
In contrast, this inference cannot be made for top Japanese companies. The high adjusted R² value suggests that little value relevance is associated with variables other than book value of equity and earnings (Lo & Lys, 2000). As CSRD does not seem to provide incremental value relevance to investors, over and above that of financial information, we cannot state that CSRD reduces information asymmetry between management and external investors of Japanese companies. An alternative suggestion for the provision of CSRD by top Japanese companies is that these CSRD disclosures are not provided for the benefit of the companies' shareholders, but are instead produced for other stakeholder groups which are not considered within the scope of this study. Alternatively, non-financial CSRD information may be more associated with companies' long-term strategic operational decisions, while investors may be more focused on Japanese companies' short-term financial performance (Moneva & Cuellar, 2009).
The results remain dissimilar when the impact of higher levels of CSRD by companies operating in environmentally sensitive industries is taken into account. The UK sample demonstrates that higher levels of CSRD by companies operating in environmentally sensitive industries (as classified by De Villiers et al., 2011) are associated with higher market share prices, whereas no association is found between CSRD by firms in environmentally sensitive industries and their market share prices for the Japan sample. This provides further support for the inference that CSRD reduces information asymmetry for investors in UK companies, which reduces the risk of adverse selection and enhances investors' ability to value companies that operate in environmentally sensitive industries. Again, such an inference cannot be made for Japanese companies operating in environmentally sensitive industries. The evidence from the Japan sample does not corroborate the conclusion that higher levels of CSRD, even by companies that have an incentive to provide enhanced CSRD disclosures, provide incremental value relevant information to the total information set used by investors. Despite many of Japan's top companies being world leaders in CSRD practices, it does not seem that these reports provide value relevant information to investors as was anticipated (KPMG, 2008).
CONCLUSION
CSRD is becoming a more established reporting practice around the world. Studies have investigated the value relevance of CSRD in many different countries. We examine the value relevance of CSRD disclosed by companies from the UK and Japan, two countries that are leading the world in this reporting practice (KPMG, 2008). The price specification Ohlson (1995) model is employed as the primary test, based on Barth and Clinch's (2009) findings that this is, generally, the most effective model at mitigating scale effects. The regression models are tested using two measures of CSRD. The first is a composite score of a company's CSRD and the second is an indicator variable of whether or not the GRI reporting framework was followed.
The results of the UK sample support both hypotheses. Higher levels of CSRD are associated with higher market values of equity. Likewise, higher levels of CSRD by firms operating in environmentally sensitive industries are associated with higher market values of equity. These results suggest that CSRD provides incremental value relevant information to investors in UK companies as the non-financial information is said to be included in their total information set used to value a company. Agency theory provides depth and reasoning as to why investors may find this information value relevant and why companies choose to undertake CSRD. The additional information available to investors can reduce their uncertainties about companies' operational activities, future earnings, and associated risks. These uncertainties come about because the separation between ownership and control in publicly listed companies causes informational asymmetries between firms' managers and shareholders. Thus, with more disclosures, information asymmetries can be reduced and shareholders can make better informed investment decisions. As a result, they are less likely to assume the worst case scenario (the adverse selection problem) and so make more accurate valuations of a company's shares (Healy & Palepu, 2001). Companies continue to provide CSRD to investors as it has the benefit of enhancing the market valuations of their shares.
As Japan also has a well-established practice of CSRD, one may expect that CSRD would be positively associated with the market value of equity for Japanese companies too, given this theoretical perspective. However, no association was found between CSRD and market values, even for companies operating in environmentally sensitive industries. This suggests that investors in Japanese companies do not find CSRD information value relevant and do not include the disclosures in the total information set they use to value companies. Inherent differences in the reporting and investment environments of these two countries may explain why such different results were obtained. Future research could extend the findings of this study and add to the research regarding why companies undertake CSRD by considering other stakeholder groups which may benefit from Japanese companies providing CSRD. Future research could also assess the value relevance of CSRD in a longitudinal study, as firms' environmental and social decisions tend to be more strategic and long-term rather than about short-term performance.
The market value specification Ohlson (1995) model is used as a robustness test. The results from this model supported the primary model's results for the Japan sample, but only partially supported the main findings for the UK sample. The GRI variable as a measure of CSRD provided support for the positive association between market values of equity and CSRD, but did not support the expectation that higher levels of CSRD by firms operating in environmentally sensitive industries would be associated with higher market values of equity. The composite measure provided evidence in support of hypothesis H1, as the adjusted R² value increased when this CSRD measure was added into the regression equation. However, the composite score did not generate support for hypothesis H1a when the market value specification Ohlson (1995) model was used. This may be because this model does not mitigate scale effects as effectively as the price specification variation (Barth & Clinch, 2009). The F-values showed that all the models (in the primary tests and robustness tests) were significant and thus aided in understanding the relationship between firms' book values of equity, earnings, and CSRD disclosures and their market values. However, there is the possibility that the model used does not fully capture the relationship between the disclosures (both financial and non-financial) and market valuations. The measures of CSRD (the composite score and GRI indicator) may not be completely effective in representing the information that companies disclose through CSRD. However, the composite score is a comprehensive measure of CSRD as it incorporates several reporting aspects into its calculation, and the GRI reporting framework is a well-established guideline used around the world in the preparation of CSRD (KPMG, 2008). Thus, these measures provide a reasonable indication of the level of CSRD provided by a company. Also, the measures used are derived by a major, independent public accounting firm (KPMG), which adds a level of credibility to the data.
The findings of this study have implications for academics, companies, investors, and policy makers. The study adds to the existing debate on the value relevance of CSRD by providing some contrasting, yet interesting, results. This study can be extended and provides avenues for future research, perhaps by using longitudinal data or by assessing the research question in the context of different countries or stakeholder groups. The findings may be useful to companies in making decisions about whether or not to undertake CSRD, especially for UK or Japanese companies. Similarly, the study provides investors with useful information regarding how companies' CSRD practices can affect firm value. Regulators may consider the results of this study when assessing the future of CSRD and whether or not to mandate some, or all, of the disclosure practices. Likewise, standard setters may also find the results important to the potential preparation of CSRD standards in the future. It is important to make clear that the empirical results only show the correlation between the CSRD measures and share prices; they do not establish that higher levels of CSRD cause higher share prices for UK companies and not for Japanese companies. | 2018-12-09T22:43:46.319Z | 2016-01-25T00:00:00.000 | {
"year": 2016,
"sha1": "a3bda53c3a5fc807b66dd9d811e53868ec90cbb7",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.22495/cocv13i2c1p2",
"oa_status": "HYBRID",
"pdf_src": "ElsevierPush",
"pdf_hash": "d2f908d4a7f5d70972cad5fe4b703f97aa0965af",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
238237676 | pes2o/s2orc | v3-fos-license | The use of mice in diabetes research: The impact of experimental protocols
Mice are used extensively in preclinical diabetes research to model various aspects of blood glucose homeostasis. Careful experimental design is vital for maximising welfare and improving reproducibility of data. Alongside decisions regarding physiological characteristics of the animal cohort (e.g., sex, strain and age), experimental protocols must also be carefully considered. This includes choosing relevant end points of interest and understanding what information they can provide and what their limitations are. Details of experimental protocols must, therefore, be carefully planned during the experimental design stage, especially considering the impact of researcher interventions on preclinical end points. Indeed, in line with the 3Rs of animal research, experiments should be refined where possible to maximise welfare. The role of welfare may be particularly pertinent in preclinical diabetes research as blood glucose concentrations are directly altered by physiological stress responses. Despite the potential impact of variations in experimental protocols, there is a distinct lack of standardisation and consistency throughout the literature with regard to several experimental procedures, including fasting, cage changing and glucose tolerance test protocol. This review firstly highlights practical considerations with regard to the choice of end points in preclinical diabetes research and the potential for novel technologies such as continuous glucose monitoring and glucose clamping techniques to improve data resolution. The potential influence of differing experimental protocols and in vivo procedures on both welfare and experimental outcomes is then discussed, with focus on standardisation, consistency and full disclosure of methods.
| INTRODUCTION
Animal models play an essential role in preclinical diabetes research owing to the complexity of the disease and its impact on multiple organs and pathways. 1,2 Mice are the most used animals in preclinical diabetes studies largely because their glucose handling, which is very similar to that of humans, can be studied using relatively simple procedures. 3 However, many common preclinical end points in diabetes research can be impacted by the strain, sex and model of mouse chosen, and so this should be carefully considered when planning in vivo experiments. 4 The choice of end points and protocols is also vital in experimental design to ensure that data are reproducible and translatable. 5 Indeed, it has been shown that poor study planning and implementation has direct implications on clinical translatability. 5 In-depth study design can also improve animal welfare and reduce animal numbers, with the contemporary 3Rs acknowledging the importance of new technologies in maximising data resolution to increase the usefulness of each animal and/or reduce sample sizes. 6 In diabetes research, the role of welfare may be particularly pertinent as blood glucose concentration, a commonly used primary end point, is directly altered by physiological stress responses. 7-10
| END POINTS
Experimental end points should be considered at the early stages of experimental design to ensure that study outcomes can be appropriately interpreted. Common end points used in preclinical diabetes research usually relate to blood glucose homeostasis, which includes blood glucose and insulin concentrations, glucose tolerance and insulin resistance, all of which can be measured in various ways. 2 It is not uncommon that several end points are used within one study as well as secondary end points (e.g., weight) which can be used to monitor animal health, disease progression and the impact of off-target effects.
| Measuring blood glucose concentrations using a glucometer
Blood glucose concentration is a common end point in diabetes research and is classically measured using a glucose meter (glucometer) at standard single time-points. This method involves delivering a needle-prick or cut to the tip of the tail and gently massaging the tail from the base upwards to generate a blood droplet. 11 This droplet is captured on a glucose strip placed inside a handheld glucometer, which normally requires 0.5-2 µl of blood. 11 This technique is simple, requires only partial restraint and provides a blood glucose concentration reading within seconds. Glucometer readings are typically taken in either a fed (often described as 'random') or fasted state. 12 Glucometer blood glucose measurements provide a rapid and simple snapshot of glycaemic control. Random measurements are useful in indicating glycaemic control under normal physiological conditions (i.e., in the presence of food) and are preferable to show the robustness of a treatment in reducing blood glucose concentrations. 12 Additionally, as random measurements do not require fasting the animal, these can be taken daily if necessary. However, measuring random blood glucose concentrations can lead to data that are influenced by variations in timing and degree of food intake. 12 This may be particularly important when comparing mice fed different diets (e.g., high-fat vs. normal chow) as sugar content can vary and influence results. For example, sugar (i.e., fructose/sucrose) comprises ~7% of the 60% high-fat diet (HFD) (D12492; Research Diets) compared with ~3.5% of a standard chow diet (Rodent Diet 20; PicoLab). This may have immediate impacts on blood glucose concentrations that could alter random glucometer measurements. However, only minimal increases in random blood glucose concentrations (~1 mM) in HFD versus standard chow mice have been observed in our laboratory, despite an impairment in glucose tolerance and insulin sensitivity developing within 2 weeks of diet induction. [13][14][15]
What is already known
• Experimental design is vital in preclinical diabetes research for optimal mouse welfare and outcomes.
• Experimental protocols are poorly standardised throughout the literature.
What this study has found
• Small variations in experimental protocol can influence mouse welfare and scientific end points. • Refinement of procedures still allows for reproducible data and drug effects to be observed.
What are the implications of this study?
• Highlights the need for standardisation, consistency and full disclosure of methods with improved experimental design having the ability to improve drug discovery.
On the other hand, the fasted state is a more reliable indicator of overt diabetes and reduces variability caused by food intake (e.g., variation in time since last feed and quantity of consumption). 12 Therefore, an appropriate fast length with regards to both the scientific question and welfare should be chosen and limited in frequency. Although overnight fasts are commonly used clinically, this induces a state of starvation in mice and, therefore, should be avoided. [16][17][18] It is important to note, however, that induction of a catabolic state (e.g., hypoglycaemia) may be important when testing the effects of prolonged fasting on metabolic responses and counter-regulatory mechanisms. 16,[19][20][21] Therefore, longer fasts may be used in some studies but only if scientifically justified. Considerations when fasting mice are described in more detail below.
It should be noted that, irrespective of whether measurements are in the fed or fasted state, mice are nocturnal and, therefore, most active at night, with approximately two thirds of their calorific intake consumed during the night (dark cycle). 16 This results in circadian rhythms in activity and blood glucose concentrations, both of which are higher at night (Figure 1). Random blood glucose concentrations, which are normally obtained during the light cycle, can therefore overestimate general glycaemic control. 22 Consequently, it is often suggested that blood glucose concentration measurements across a study should be taken at similar times of the day and reported in methods sections to allow comparisons between studies.
Overall, both random and fasted blood glucose measurements can give important and complementary insights into blood glucose homeostasis. 12 These can be complemented with measurements of plasma insulin either under fed, fasted or glucose-stimulated conditions. Larger blood samples are typically required to measure insulin (~50 µl) and, therefore, it is usually not measured as frequently. 2 Irrespective of which end point is chosen it is vital that experimental protocols are consistent and fully disclosed with consideration of both welfare and scientific outcomes.
| Measuring glucose tolerance
Glucose intolerance is a key characteristic of impaired glycaemic control and can be quantified using glucose tolerance tests (GTTs), which are a fundamental tool used in diabetes research. 2,23 GTTs involve measuring baseline blood glucose concentrations before administration of a glucose bolus and subsequent repeated blood glucose measurements for ~2 h, typically at 15-, 30-, 60-, 90- and 120 min. 2,23 The glucose bolus can be given via several routes, although glucose response varies widely depending on the method chosen as the rate of glucose delivery to the system differs. 23 Glucose response is also altered by glucose dose, with common doses ranging from 1 to 3 g/kg. 17 Prior to GTTs, it is common practice to fast mice to eliminate the impact of variability in food intake on glucose homeostasis. Although previous attempts have been made, there remains no standardised fast length, with durations commonly ranging from 6 h in the daytime to 16 h overnight. 16,17,23 Furthermore, cages are sometimes changed at the start of fasting to eliminate the risk of food remnants being left in the cage. 24,25 However, whether and/or how this is done is not consistent or well documented, with the potential impacts of this discussed later in this review.
FIGURE 1 Ten-second averages of blood glucose concentrations and activity over 72 h in 11 normal diet (ND) male mice captured by HD-XG glucose telemetry devices (Data Sciences International). Grey bars = dark phase. Blue arrows = times of disturbance. Disturbance describes times at which mice were woken by animal unit staff during daily checks but not handled.
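Glucose tolerance from a GTT of this kind is commonly summarised as the area under the blood glucose curve (AUC); a minimal sketch of that calculation is shown below, with sampling times and readings that are purely hypothetical (AUC analysis is a common convention rather than something prescribed by this review):

```python
# Hypothetical GTT readings: baseline (0 min) followed by the standard sampling times.
time_min = [0, 15, 30, 60, 90, 120]
glucose_mM = [8.1, 21.2, 19.5, 15.0, 11.8, 9.6]

def trapezoid_auc(times, values):
    """Area under the curve by the trapezoidal rule (units here: mM*min)."""
    return sum((t2 - t1) * (v1 + v2) / 2.0
               for t1, t2, v1, v2 in zip(times, times[1:], values, values[1:]))

total_auc = trapezoid_auc(time_min, glucose_mM)
# Incremental AUC above the baseline reading is also widely reported.
incremental_auc = trapezoid_auc(time_min, [g - glucose_mM[0] for g in glucose_mM])

print(f"Total AUC: {total_auc:.0f} mM*min; incremental AUC: {incremental_auc:.0f} mM*min")
```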
| Measuring insulin resistance
Glucose intolerance can result from reduced insulin secretion and/or sensitivity, but GTTs do not provide this mechanistic information. Consequently, insulin tolerance tests (ITTs) are often undertaken to investigate insulin resistance. As with GTTs, mice are frequently fasted prior to ITTs, albeit often for shorter durations as longer fasting is associated with increased risk of hypoglycaemia. 26 However, as before, there is a lack of standardisation in protocol across the literature. 2,26 At the start of an ITT, baseline blood glucose concentration is measured before a bolus of insulin (normally 0.25-1 IU/kg) is injected i.p. or s.c., and subsequent blood glucose measurements are obtained over ~1-2 h. 26 Several factors should be considered when undertaking ITTs, the most important of which is the risk of hypoglycaemia, meaning that mice should be carefully monitored for signs of lethargy, inactivity, hunching and piloerection. If any of these are observed, and/or blood glucose concentrations fall <2.5 mM, a glucose bolus (2 g/kg) should be administered, food and warmth should be provided, mice should be separated if lethargic, and blood glucose concentration and welfare should be monitored until restoration of a normal state. 26 If this happens more than very rarely, the insulin dose should be adjusted accordingly in future studies.
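A minimal sketch of how ITT readings might be processed against the rescue threshold described above is given below; the readings are hypothetical, and expressing glucose as a percentage of the baseline value is a common (not prescribed) way of presenting ITT data:

```python
# Hypothetical ITT readings (mM) at baseline and after an insulin bolus.
time_min = [0, 15, 30, 45, 60, 90, 120]
glucose_mM = [9.0, 6.8, 5.1, 4.0, 3.4, 4.2, 5.5]

baseline = glucose_mM[0]
percent_of_baseline = [round(100 * g / baseline, 1) for g in glucose_mM]

# Flag readings below the 2.5 mM rescue threshold so that a 2 g/kg glucose bolus,
# food and warmth can be provided and monitoring stepped up.
rescue_points = [(t, g) for t, g in zip(time_min, glucose_mM) if g < 2.5]

print("Glucose as % of baseline:", percent_of_baseline)
print("Rescue required at:", rescue_points if rescue_points else "no time-point")
```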
The ITT is particularly susceptible to stress-related increases in blood glucose concentrations, which in the case of the GTT are often masked by the glucose bolus. Therefore, researcher experience, refinement, consistency and full disclosure of protocols are particularly pertinent.
| METHODS THAT INCREASE DATA RESOLUTION
The techniques described above can provide a snapshot of glycaemic control at any given time-point. However, for some studies, increased data resolution may be required, which is problematic with standard glucometer methods because the repeated blood sampling required affects welfare. Therefore, surgical techniques involving implantation of devices or catheters can be used for more frequent sampling.
| Continuous glucose monitoring
Continuous glucose monitoring (CGM) generates high data resolution with 10-s averages of blood glucose concentration, temperature and activity recorded over several weeks in unrestrained mice. 27,28 CGM consequently allows for blood glucose concentrations to be measured at time-points, which would normally be missed, such as at night, and may, therefore, enhance understanding of 24 h glycaemic control. Furthermore, glycaemic variability can be quantified, which has been shown to be a key driver of morbidity and mortality in human diabetic patients. [29][30][31][32] CGM also offers an opportunity to measure glucose concentrations without the need for handling the mice. Indeed, mouse cages are placed on top of receiver pads, which collect the probe data and transmit it to a data acquisition system on a nearby computer meaning that data can be acquired without even entering the room. 27 This may be important as researcher intervention, which is unavoidable when using a glucometer, can increase blood glucose concentrations ( Figure 1). Hence, CGM may provide more accurate representation of absolute blood glucose concentrations without the influence of stress.
Despite the potential benefits of CGM, this technique requires invasive surgery. The glucose sensor of the telemetry probe is placed in free-flowing blood in the aortic arch and a radio-transmitter is then placed either s.c. or i.p. 27 This requires significant researcher training and experience for both optimal welfare and outcomes. Mice must also be allowed to recover from surgery prior to any further experimentation. In our lab, we have found that immediate surgical recovery is surprisingly swift although mice can lose up to 10% body weight particularly in the first 48 h. In general, recovery is achieved by day 5 post-surgery, but experimentation often begins on day 7. Overall, the drawbacks of invasive surgery may be outweighed by the reduction in researcher intervention and subsequent stress throughout experimentation. Although only one telemetered mouse can be placed on each receiver mat, mice can be housed with non-surgical cage-mates to avoid isolation. 27 Further caveats of this technique are the requirement for researchers to be trained to use the advanced equipment required and the cost of probes and equipment, which vastly exceeds that of standard glucometer methods. Hence, setting up CGM in a laboratory can be extremely costly and time-consuming. The HD-XG probes (Data Sciences International) have a guaranteed lifespan of 28 days meaning that only 21 days of data may be obtained following surgical recovery. 28 However, in our lab, the average probe lifespan achieved in both normoglycaemic and glucose intolerant mice is 8 weeks.
From our experience, CGM provides an opportunity to comprehensively understand several physiological and experimental parameters that would normally be difficult or impossible to assess: (1) 24 h blood glucose concentrations with quantification of glycaemic variability; (2) the impact of researcher intervention on both welfare and blood glucose concentrations and (3) real-time quantification of drug and treatment effects, which is particularly beneficial when responses are subtle. If glucose tolerance and/or insulin resistance are the primary end points of interest, however, we have not found that CGM provides additional information.
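As an illustration of how the 10-s CGM traces described above can be reduced to summary measures of 24 h glycaemic control and variability, a minimal sketch is given below; the trace is simulated, and the metrics (SD, coefficient of variation, time above a threshold) are common choices rather than ones prescribed by the device manufacturer:

```python
import numpy as np

# Simulated 24 h CGM trace: 10-second averages give 8,640 samples per day.
rng = np.random.default_rng(0)
samples_per_day = 24 * 60 * 6
glucose_mM = (8.0
              + 1.5 * np.sin(np.linspace(0, 2 * np.pi, samples_per_day))  # circadian swing
              + rng.normal(0, 0.4, samples_per_day))                      # measurement noise

mean_glucose = glucose_mM.mean()
sd_glucose = glucose_mM.std(ddof=1)
cv_percent = 100 * sd_glucose / mean_glucose           # coefficient of variation
time_above_10mM = 100 * np.mean(glucose_mM > 10.0)     # % of readings above 10 mM

print(f"Mean {mean_glucose:.1f} mM, SD {sd_glucose:.2f} mM, "
      f"CV {cv_percent:.1f}%, time >10 mM {time_above_10mM:.1f}%")
```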
| Glucose clamping
A sophisticated technique used to quantify insulin resistance and secretion is glucose clamping. This method is used frequently in the clinic but has been translated into mice, allowing for detailed mechanistic investigation of the factors regulating blood glucose. 33 There are two main types of glucose clamp: the hyperinsulinaemic-euglycaemic clamp and the hyperglycaemic clamp. The hyperinsulinaemic-euglycaemic clamp is the gold standard to assess insulin sensitivity and is conducted by infusing insulin at a steady state to achieve hyperinsulinaemia, following which varying amounts of glucose are infused to reach a euglycaemic set-point. The amount of glucose required to reach euglycaemia is then translated into a 'glucose infusion rate' (GIR), a calculation that also takes into account the concentration of glucose used and the weight of the animal. The hyperglycaemic clamp is a similarly sensitive technique that allows for precise measurements of insulin secretion in parallel with GIR. This technique, therefore, produces additional metabolic parameters compared with the more basic GTTs and ITTs.
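As a hedged illustration of the glucose infusion rate calculation referred to above (the exact formula and units vary between laboratories, and the values below are hypothetical), GIR is often expressed in mg of glucose per kg body weight per minute:

```python
def glucose_infusion_rate(pump_rate_ul_min: float,
                          glucose_conc_percent: float,
                          body_weight_g: float) -> float:
    """Approximate GIR in mg glucose per kg body weight per minute.

    pump_rate_ul_min     : infusion pump rate in microlitres per minute
    glucose_conc_percent : glucose solution concentration, % w/v (g per 100 ml)
    body_weight_g        : mouse body weight in grams
    """
    mg_per_ul = glucose_conc_percent / 100.0   # e.g. a 50% w/v solution is 0.5 mg/ul
    mg_per_min = pump_rate_ul_min * mg_per_ul
    return mg_per_min / (body_weight_g / 1000.0)

# Example: 2 ul/min of a 50% glucose solution infused into a 25 g mouse.
print(glucose_infusion_rate(2.0, 50.0, 25.0))  # ~40 mg/kg/min
```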
The setup of this technique is technically demanding as it requires the catheterisation of both the carotid artery and jugular vein. 34 Catheters are then tunnelled s.c. and exteriorised between the shoulder blades of the mouse, where they are then attached to a pin port that is sutured into the skin. This allows for direct infusion and sampling access to both vessels. The surgery is highly invasive and can take a long time to master, but competency is essential for reducing time under anaesthetic (~60 min), which in turn improves survival rates and postoperative recovery. Similar to CGM, surgical recovery is achieved by day 5 post-surgery with experimentation beginning on day 7. It is also a similarly expensive technique with the cost of probes and equipment, alongside the surgical training required, greatly outweighing that of glucometer methods.
However, as explained above, the different types of clamps allow for quantification of metabolic parameters that outreach the simpler GTTs and ITTs. In addition to this, rapid blood sampling can be conducted (similar to that of a frequently sampled intravenous GTT-FSIVGTT), while infusing erythrocytes to help maintain total blood volume and, hence, animal welfare. It is also possible to measure tissue-specific glucose uptake and endogenous glucose production using this technique by infusion of radiolabelled glucose tracers followed by harvesting of tissues and plasma. This again gleans more information than can be obtained from basic tolerance tests. Once the surgery is mastered, animal welfare is also improved with this technique as animals are conscious and freely moving during studies thanks to the easily accessible pin port located on their back. This both reduces stress associated with intervention and increases precision of data, which in turn reduces sample sizes. Although animals are singly housed during the study, they can be group housed prior to experimentation. From our experience, glucose clamping is, therefore, an unrivalled technique when investigating specific metabolic mechanistic questions. However, it is not essential for simpler end points and should, therefore, only be used as an adjunct to GTTs and ITTs when necessary.
| EXPERIMENTAL PROTOCOLS
When designing experiments, the influence of extraneous variables should be understood and minimised where possible. As previously described, there are many different techniques undertaken in diabetes research to assess blood glucose homeostasis, but protocols for these are poorly standardised despite there being many changeable aspects.
| Fasting
As previously mentioned, it is common practice to fast mice both prior to fasted blood glucose measurements and GTTs to limit the influence of food intake on end points. 12 However, fasting protocols are poorly standardised with variability in both the length and time of day of the fast. [16][17][18] Overnight 16-h fasts were historically common, but these have been associated with hypoglycaemia, hypothermia, weight loss and cardiovascular changes, which all act as potential stressors. 16 Data from our own laboratory using CGM also indicate that 16-h fasts induce severe hypoglycaemia with blood glucose concentrations consistently falling <2.8 mM at ~13 h after the start of fasting. 24 More recently, it has become more common practice to fast for shorter periods (3-6 h) during the daytime and this has been shown to be sufficient for ensuring gastric emptying and hepatic control of glucose homeostasis. 18,[35][36][37] However, overnight fasting still features in many research papers. Interestingly, overnight fasts not only have welfare implications but can significantly alter glucose response during an i.p.GTT with 16-h fasts being associated with impaired glucose tolerance particularly in males. 24 Another poorly reported variable in the fasting protocol is whether the cage is changed at the start of the fast. Although no food remnants will remain if the cage is changed, this procedure is associated with stress responses in mice, including increased corticosterone, most likely due to removal of familiar odours. 38 Conversely, not changing the cage (i.e., removing food from the lid only) may result in small amounts of food remnants on the cage floor, which could impact the fast. A method to ensure complete food removal while reducing the stress of a full cage change is to retain the used bedding (after shaking out any food residue) and place it in the new cage. 38 Indeed, our data have shown that bedding retention reduces the initial glucose spike associated with researcher intervention at the start of the fast. 24 While removing food from the food hopper without changing the cage causes the least amount of stress at the start of the fast, the blood glucose concentration reductions are not as significant, most likely due to food remnants present in the bottom of the cage.
As with overnight fasting, cage change method not only impacts effectiveness of the fast but also GTT outcome with whole cage changes at the start of a 6-h fast significantly impairing glucose tolerance during an i.p.GTT when compared with when bedding is retained or the cage is not changed. 24 Overall, these results highlight the role of welfare in scientific outcomes and the importance of consistency in practices and full disclosure of methods.
| Moving animals from holding rooms to procedure rooms
In many animal units, mice are transported from holding rooms to procedure rooms prior to experimentation (e.g., GTTs). Conversely, because CGM requires mice to remain on their receiver mats for measurements to be taken, mice remain in their holding room for procedures. 28 This is likely to reduce stress due to minimising the degree of disturbance and novelty of environment. Indeed, we have observed that moving mice into a procedure room increases blood glucose concentrations by ~40% for up to 60 min. We have also observed worsened glucose tolerance during an i.p.GTT in mice transported to procedure rooms versus those which remained in holding rooms.
| Protocols undertaken during glucometer measurements, glucose tolerance tests and insulin tolerance tests
Protocols undertaken during experimentation (e.g., glucometer measurements, GTTs and ITTs) can also vary. These procedures all involve handling the mice and obtaining a blood sample using the tail-prick glucometer method. 11 For GTTs and ITTs, either a glucose or insulin bolus is then administered with repeated blood samples obtained over ~2 h. 2,23,26 Overall, differing procedures can significantly alter results due to the influence of both stress and direct physiological changes. Hence, the importance of consistency and full disclosure of methods should not be overlooked.
| Handling, blood sampling and intraperitoneal injections
Using CGM we have shown that even mild researcher intervention (e.g., entering the holding room) can increase blood glucose concentrations (Figure 1). It is, therefore, unsurprising that in vivo procedures (handling, tail-prick blood sampling and i.p. injections) can also cause glucose spikes. Indeed, we have shown that in vivo procedures cumulatively increase blood glucose concentrations, with a maximum increase of 0.8-3.1 mM in males and 0.8-2.3 mM in females 30 min after intervention (Figure 2). Blood glucose began increasing from 5 min post-disturbance onwards, with responses lasting between 30 and 45 min in females and between 45 and 75 min in males.
Although unavoidable, the potential impact of these procedures on outcomes should be considered. Overall, our data have shown that when glucose is administered i.p. at the start of a GTT, the ~13 mM glucose increase observed is a composite of the stress associated with the in vivo procedures required to administer glucose (~2.8 mM) and the glucose dose itself (~10.2 mM). Therefore, any refinement to improve welfare could alter scientific outcomes and this should be acknowledged.
For example, we have observed that tunnel handling tends to produce lower blood glucose responses than tail or cup handling with regards to both magnitude and duration of response.
It is also important to note that various glucometers can be used, each with differing properties. Indeed, glucometer technology has improved over the years, meaning that smaller blood samples are now required. For example, the Accu-Chek Performa meter (Roche) requires 0.6 µl of blood whereas StatStrip Xpress meters (Nova Biomedical) require 1.2 µl. 39,40 Although historically blood samples were often obtained by cutting the tip of the tail with scissors or a scalpel, these blood volumes can now be obtained via a simple needle prick at the end of the tail. 11 Choice of needle size is dependent on the blood volume required, but 27-30 G needles are sufficient for most glucometers. Interestingly, we have found that using larger 27 G needles causes higher glucose spikes, indicating increased stress. However, if the needle is too small for the sample required (e.g., 30 G for 1.2 µl), glucose concentrations also increase, most likely due to the additional pressure applied to the tail.
Other differences between glucometers include testing ranges of blood glucose concentrations. Many glucometers (e.g., Accu-Chek Performa) capture concentrations of up to 33.3 mM but other, more specialised meters (e.g., StatStrip Xpress) can measure up to 50 mM. 39,41 Accuracy of glucometers also differs with only Contour Next and StatStrip Xpress meters meeting the accuracy standards set out by the International Organisation for Standardisation (ISO 2013). This may be linked to haematocrit interference, which is not always corrected for (e.g., with the Accu-Chek Performa) but can significantly alter blood glucose concentrations. [42][43][44][45] Overall, as different glucometers can alter results by as much as 30%, consistency in glucometer use is vital and should be clearly described in methods. 45
| Route of administration
When carrying out GTTs and ITTs, choosing the route of glucose/insulin administration is extremely important as it can markedly impact test outcome. Glucose is most commonly administered via i.p. injection but can also be administered orally through gavage or voluntary ingestion of a glucose gel. 23 Insulin is usually administered either via i.p. or s.c. injection as it is a peptide and, therefore, cannot be administered orally. 2,26,46 Increases in blood glucose concentrations after glucose administration are highest after i.p. administration, followed by gavage and then voluntary gel ingestion. Attenuated glucose spikes during an oral GTT are primarily due to incretin release from the gastrointestinal tract, with both glucagon-like peptide 1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP) enhancing glucose-stimulated insulin secretion. 47 The improved glucose tolerance observed with voluntary gel ingestion versus gavage is likely due to reduced stress responses, as gavage requires restraint of the animals. However, it is important to note that there is also evidence for increased incretin responses with mastication. 48 Hence, voluntary gel ingestion may improve welfare while avoiding stress-induced artefacts in the GTT. Considerations when using voluntary ingestion of glucose gels include the need to separate mice to ensure correct dosing, the requirement for mouse training and potential issues of non-adherence. 23 Options to separate mice include single housing (either continually or during the GTT) or using physical barriers to separate mice (if in pairs) while ensuring that both mice have access to water. However, our data have shown that cage separation, but not single housing, immediately prior to an i.p.GTT significantly impairs glucose tolerance. A further concern with voluntary ingestion of gels is that dosing may be inaccurate due to partial and/or prolonged consumption not consistent with a bolus. These issues can largely be overcome with training, as mice are neophobic and need to become familiar with new processes. 49 In our laboratory, mice are initially given gels in a plastic pot for two nights in the presence of food and absence of researcher intervention. After this, the mice are habituated to single housing or cage separation for 2 h on two separate occasions. Finally, single housing/cage separation is combined with gel administration. Mice are considered 'trained' when they consume 90% of the gel within 1 min on two separate occasions. This is normally achieved in two training sessions, with all training taking ~1 week. To calculate percentage consumption, gels are weighed in their plastic pots before and after administration.
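A minimal sketch of the percentage-consumption calculation and the 'trained' criterion described above is shown below; the pot weights and session records are hypothetical:

```python
def percent_consumed(weight_before_g: float, weight_after_g: float,
                     gel_weight_g: float) -> float:
    """Percentage of the glucose gel consumed, from pot weights before and after."""
    eaten = weight_before_g - weight_after_g
    return 100.0 * eaten / gel_weight_g

# One record per training session: (% of gel consumed, consumption time in seconds).
sessions = [(78.0, 95), (93.5, 55), (96.0, 40)]

# A mouse is considered trained once it has consumed >=90% of the gel within
# 1 minute on two separate occasions.
qualifying = [s for s in sessions if s[0] >= 90.0 and s[1] <= 60]
print("Trained:", len(qualifying) >= 2)
```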
| Dose and volume of glucose/insulin
The chosen dose of glucose or insulin will have direct implications for tolerance tests. Although not standardised, glucose is typically administered at a dose of 1-3 g/kg. 17 Previous studies have found that ≥2 g/kg glucose is required to detect differences in oral glucose tolerance in HFD versus normal chow mice, although this was also true for a set dose of 50 mg (equivalent to 2 g/kg in a 25-g mouse). 17 For ITTs, doses ranging from 0.25 to 1 IU/kg have been used, although we have found that a 0.75 IU/kg dose is sufficient to discriminate between insulin sensitive and resistant animals with a low incidence of hypoglycaemia. 26 However, it is recommended that insulin dosage is determined using low starting doses and incremental increases to ensure ~50% blood glucose reductions with low risk of hypoglycaemic events. 26 Furthermore, genetic background, weight, phenotype and age can alter the insulin dose required, so further dosage studies should be undertaken if starting experiments in a new mouse model. 26 Volume of glucose and insulin may also have an impact on GTT and ITT outcome, respectively, but local guidelines should be followed regarding maximum volumes for a particular injection site. Adherence to voluntary consumption of glucose gels can be impaired in heavier mice (e.g., HFD or Lep Ob/Ob mice) due to increases in gel volume. Consequently, gels can be made with smaller volumes of more concentrated glucose solutions, which can reduce consumption time by ~30% (for 45% vs. 30% glucose gels). Importantly, these differences in rates of intake did not alter glucose response.
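The dose-to-volume arithmetic implied above, for example for a 2 g/kg dose delivered as gels of different glucose concentrations, can be sketched as follows (the body weight is illustrative, and the % values are treated as w/v):

```python
def gel_volume_ml(dose_g_per_kg: float, body_weight_g: float,
                  gel_conc_percent: float) -> float:
    """Volume of glucose gel (ml) delivering the requested dose.

    gel_conc_percent is the glucose content in % w/v (g per 100 ml).
    """
    dose_g = dose_g_per_kg * body_weight_g / 1000.0
    return dose_g / (gel_conc_percent / 100.0)

# A 2 g/kg dose for a hypothetical 40 g high-fat-diet mouse:
print(gel_volume_ml(2.0, 40.0, 30.0))  # ~0.27 ml with a 30% gel
print(gel_volume_ml(2.0, 40.0, 45.0))  # ~0.18 ml with a 45% gel
```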
| Timing of repeated blood samples
Blood glucose concentrations following either a glucose or insulin bolus are normally measured using a glucometer at 15-, 30-, 60-, 90- and 120 min. However, CGM data show a maximum glucose response following i.p. glucose injection at 19.0 ± 0.8 min in normoglycaemic mice, with 15-min glucometer readings underestimating the glucose concentration at this time-point (21.2 ± 0.9 vs. 20.3 ± 0.6 mM, p < 0.05, paired t-test, n = 16). Conversely, in glucose intolerant HFD mice, maximum glucose is achieved at 31.4 ± 2.2 min, which is accurately represented by the 30-min glucometer reading (25.7 ± 1.0 vs. 24.5 ± 1.1 mM, p > 0.05, paired t-test, n = 9). Although peak glucose concentrations may be missed using 15-min glucometer readings in some mice, overall glucose tolerance is accurately captured using these standard time-points, and consistency between studies and mice is most important.
FIGURE 3 The effect of repeated blood sampling on blood glucose concentrations over 6 weeks in normal diet (ND) (a) male and (b) female mice. * represents a significant difference compared with other repeats. # represents a significant difference compared with the −30 min pre-intervention concentration for each repeat (p < 0.05, two-way ANOVA with Holm-Sidak post hoc test, n = 5-11).
Overall, glucose tolerance may be affected by variations in both protocols undertaken prior to GTTs (i.e., fast length, cage changing and movement of animals) and during GTTs (i.e., in vivo procedures, route/dose of glucose and timing of repeated sampling). Hence, full disclosure of methods and consistency in protocols between mice and cohorts is key.
| The effect of refinement on drug efficacy studies
GTT protocols can be refined in several ways, including by using shorter fast lengths (6 h daytime vs. 16 h overnight), modifying the cage change method (retention of used bedding vs. whole cage changes) and reducing restraint for glucose administration (oral gels vs. gavage and i.p. injection). 24 However, it could be argued that higher blood glucose concentrations, whether due to stress, sex or exogenous glucose, may be required to detect the ability of drugs to reduce them. Importantly, our data have recently refuted this by showing that the effect of both i.p. exendin-4 and oral metformin on glucose tolerance can still be observed when all these refinements are practiced, even in females, who have lower blood glucose concentrations. 24 Hence, refinement of procedures in line with the 3Rs does not compromise the predictive validity of the model.
| The effect of refinement on GTT reproducibility
We have also considered the reproducibility of GTTs using different protocols with regards to differences observed between days, between mice and between cohorts. Our data have shown that more refined procedures (6 h fasted oral gel GTTs with bedding retention cage change) are actually more reproducible between repeats in the same mice whereas less refined procedures (16 h fasted i.p.GTTs) are less reproducible between mice of the same cohort and between cohorts. Hence, refinement of protocols maintains scientific integrity and reproducibility of data while improving welfare. In addition, we found that more refined in vivo procedures (i.e., tunnel vs. tail handling, 30 G vs. 27 G needles for tail-pricks and 0.6 vs. 1.2 µl blood samples) also reduced researcher-induced glucose responses and, hence, could potentially affect GTT outcome.
| Habituation and acclimatisation
It has previously been shown that mice can become acclimatised and habituated to stressors. 50 However, we have observed no evidence of reductions in researcher-induced glucose responses with repeated experimentation, despite reduced behavioural responses including defaecation and urination. For example, there was no significant difference in response to blood sampling for three repeats over 6 weeks (Figure 3). Despite this, a 'first GTT/ITT' phenomenon has been regularly observed in our laboratory whereby glucose tolerance and/or insulin sensitivity during the first i.p.GTT or ITT is consistently and significantly impaired compared with all future repeats. Although repeating experiments may remove this effect, mice will then undergo further blood sampling and stress which could impact welfare. Therefore, prior administration of i.p. saline, which we have found to eliminate this effect, may be preferable. 51 Although not tested, prior scruffing and touching the i.p. site with a needle or finger may also provide similar habituation while ensuring further refinement.
TABLE 1 Our recommendations for protocols undertaken in preclinical diabetes research, such as those both prior to and during glucose tolerance tests (GTT), to maximise welfare and scientific outcomes. Columns: Protocol; Recommendation. Example row — Habituation: mock injection prior to first GTT.
| The effect of different researchers
Our data have shown that different researchers can profoundly alter both the response to stressors and glucose tolerance, with blood glucose responses during an i.p.GTT decreasing cumulatively as researcher experience and animal familiarity increase. The impact of different researchers should, therefore, be considered, especially in laboratories where numerous researchers collaborate, as data may not be directly comparable.
| Accumulation of responses
Finally, we have found evidence of an accumulation of glucose responses following in vivo procedures, which may cumulatively impair glucose tolerance. For example, moving animals into new procedure rooms only impairs glucose tolerance when mice have undergone a whole cage change at the start of a 6 h fast or have been fasted overnight. Furthermore, sex and oestrous-related differences in glucose tolerance are exaggerated when a less experienced researcher undertakes the i.p.GTT. These results, therefore, suggest that using the most refined procedures can partially protect from unavoidable researcher-induced artefacts in preclinical diabetes studies.
| CONCLUSIONS
A summary of our recommendations for protocols undertaken in preclinical diabetes research is shown in Table 1.
In conclusion, appropriate experimental design is vital in preclinical diabetes research with various aspects that must be considered including appropriate end points, experimental protocols and the potential impact of variations/refinements in procedures. 5,52 This is paramount to maximise welfare in line with the 3Rs while ensuring the end point of interest is being tested, scientific reproducibility is maintained, and drug effects can still be observed. Most importantly, methods should be standardised, kept consistent and fully disclosed to ensure comparability of data as even minor variations in protocols can significantly impact results. | 2021-10-02T06:17:21.520Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "03a6d7f2d1753b1f95f46d1e95786cba027d8fbd",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/dme.14705",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "e5e13f56f9c17c916d61f1a19d16ad21c5e92fef",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16042373 | pes2o/s2orc | v3-fos-license | Variability Aware Network Utility Maximization
Network Utility Maximization (NUM) provides the key conceptual framework to study resource allocation amongst a collection of users/entities across disciplines as diverse as economics, law and engineering. In network engineering, this framework has been particularly insightful towards understanding how Internet protocols allocate bandwidth, and motivated diverse research on distributed mechanisms to maximize network utility while incorporating new relevant constraints, on energy/power, storage, stability, etc., for systems ranging from communication networks to the smart-grid. However when the available resources and/or users' utilities vary over time, a user's allocations will tend to vary, which in turn may have a detrimental impact on the users' utility or quality of experience. This paper introduces a generalized NUM framework which explicitly incorporates the detrimental impact of temporal variability in a user's allocated rewards. It explicitly incorporates tradeoffs amongst the mean and variability in users' allocations. We propose an online algorithm to realize variance-sensitive NUM, which, under stationary ergodic assumptions, is shown to be asymptotically optimal, i.e., achieves a time-average equal to that of an offline algorithm with knowledge of the future variability in the system. This substantially extends work on NUM to an interesting class of relevant problems where users/entities are sensitive to temporal variability in their service or allocated rewards.
I. INTRODUCTION
Network Utility Maximization (NUM) provides the key conceptual framework to study (fair) resource allocation among a collection of users/entities across disciplines as diverse as economics, law and engineering. In network engineering this framework has recently served as a particularly insightful setting in which to study (reverse engineer) how the Internet's congestion control protocols allocate bandwidth, how to devise schedulers for wireless systems with time-varying channel capacities, and has motivated the development of distributed mechanisms to maximize network utility in diverse settings, including communication networks and the smart grid, while incorporating new relevant constraints on energy, storage, power control, stability, etc. However, when the available resources and/or users' utilities vary over time, allocations amongst users will tend to vary, which in turn may have a detrimental impact on the users' utility or perceived service quality.
Indeed, temporal variability in utility, service, resources or associated prices is particularly problematic when humans are the eventual recipients of the allocations. Humans typically view temporal variability negatively, as a sign of an unreliable service, network or market instability, or as a service which, when viewed through humans' cognitive and behavioral responses, can, and will, translate to a degraded Quality of Experience (QoE). For example, temporal variability in video quality has been shown to lead to hysteresis effects in humans' quality judgments and can substantially degrade a user's QoE. This in turn can lead users to make decisions, e.g., change provider, act upon perceived market instabilities, etc., which can have serious implications for businesses and engineered systems, or economic markets.
This paper introduces a generalized NUM framework which explicitly incorporates the detrimental impact of temporal variability in a user's allocated rewards. We use the term rewards as a proxy representing the resulting utility of, or any other quantity associated with, allocations to users/entities in a system. Our goal is to explicitly tackle the task of incorporating tradeoffs amongst the mean and variability in users' rewards. Thus, for example, in a variance-sensitive NUM setting, it may make sense to reduce a user's mean reward so as to reduce its variability. As will be discussed in the sequel, there are many ways in which temporal variations can be accounted for, and which, in fact, present distinct technical challenges. In this paper we shall take a simple, elegant approach to the problem which serves to address systems where tradeoffs amongst the mean and variability over time need to be made, rather than systems where the mean (or target) is known, or where the issue at hand is the cumulative variance at the end of a given (e.g., investment) period.
(This research was supported in part by Intel and Cisco under the VAWN program, and by the NSF under Grant CNS-0917067.)
To better describe the characteristics of the problem we introduce some preliminary notation. We shall consider a network shared by a set N of users (or other entities), where |N| = N denotes the number of users in the system. Throughout the paper, we distinguish between random variables (and random functions) and their realizations by using upper case letters for the former and lower case for the latter. We use bold letters to denote vectors, e.g., a = (a_i : i ∈ N). We let (a)_{1:T} denote the finite length sequence (a(t) : 1 ≤ t ≤ T). For a function U on R, U′ denotes its derivative.
Thus if r_i(t) represents the reward allocated to user i at time t, then r(t) = (r_i(t) : i ∈ N) is the vector of rewards to users N at time t and (r)_{1:T} represents the rewards allocated over slots t = 1, ..., T to the same users. We assume that reward allocations are subject to time-varying network constraints, c_t(r(t)) ≤ 0 for t = 1, ..., T, where c_t : R^N → R is a convex function, thus implicitly defining a convex set of feasible reward allocations. To formally capture the impact of the time-varying resources on users' QoE, consider the following offline convex optimization problem OPT(T): maximize, over (r)_{1:T}, the objective Σ_{i∈N} [ (1/T) Σ_{t=1}^T U^R_i(r_i(t)) − U^V_i( Var_T((r_i)_{1:T}) ) ] subject to c_t(r(t)) ≤ 0 for t = 1, ..., T, where for each i ∈ N, Var_T((r_i)_{1:T}) denotes the empirical variance of user i's rewards over the T slots. We refer to this as an offline optimization because the time-varying constraints (c_t)_{1:T} are assumed to be known, and the functions (U^R_i, U^V_i)_{i∈N} are chosen so as to make the optimization problem convex. Note that the first term in user i's proxy QoE, (1/T) Σ_{t=1}^T U^R_i(r_i(t)), captures the degree to which QoE increases in his/her allocated rewards at any time, whereas the second term, typically increasing in Var_T(·), penalizes temporal variability in reward allocation. Hence, this general formulation allows us to trade off between the mean and the variability associated with the reward allocations by appropriately choosing the functions (U^R_i, U^V_i)_{i∈N}.
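To make the mean-variance tradeoff in OPT(T) concrete, here is a minimal numerical sketch (in Python) of the per-user proxy QoE, assuming the illustrative choices U^R(r) = log r and a linear variance penalty U^V(v) = v; the function and variable names are ours, not the paper's.

import numpy as np

def proxy_qoe(r, U_R=np.log, U_V=lambda v: v):
    # r is a T x N matrix of rewards; the objective is
    # sum_i [ (1/T) sum_t U_R(r[t, i]) - U_V(empirical variance of user i's rewards) ].
    mean_utility = U_R(r).mean(axis=0)   # (1/T) sum_t U_R(r_i(t)) for each user
    variance = r.var(axis=0)             # empirical variance Var_T for each user
    return float(np.sum(mean_utility - U_V(variance)))

# Two users over T = 4 slots: a flatter allocation with the same per-user totals scores higher.
r_variable = np.array([[1.0, 4.0], [3.0, 2.0], [1.0, 4.0], [3.0, 2.0]])
r_flat = np.tile([2.0, 3.0], (4, 1))
print(proxy_qoe(r_variable), proxy_qoe(r_flat))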
A. Main result and contributions
The main contribution of this paper is in devising an online algorithm for Adaptive Variability-Aware Resource (AVR) allocation, which realizes variance-sensitive NUM. Under stationary ergodic assumptions on the time-varying constraints, we show AVR is asymptotically optimal, i.e., it achieves a performance equal to that of the offline optimization OPT(T) introduced earlier. This is a strong optimality result, which at first sight may be surprising due to the dependency of Var_T(·) in the objective of OPT(T) on reward allocations over time, and the time-varying nature of the constraints (c_t)_t. The key idea exploits the characteristics of the problem by keeping online estimates of the relevant quantities associated with users' allocations, e.g., the mean, variance, and mean QoE, which over time are shown to converge, and which eventually enable the online policy to produce allocations corresponding to the optimal stationary policy. Proving this result is somewhat challenging as it requires showing that the estimates based on allocations produced by our online policy, AVR (which itself depends on the estimated quantities), converge to the desired values. To our knowledge this is the first attempt to generalize the NUM framework in this direction. We contrast our problem formulation and approach with past work in the literature addressing variance minimization, risk-sensitive control and other MDP-based frameworks in the related work below.
B. Related Work
Network Utility Maximization (NUM) provides the key conceptual framework to study how to allocate rewards fairly amongst a collection of users/entities. [?] provides an overview of NUM. However, work on NUM, including several major extensions (e.g., [?], [?], [?]), has ignored the impact of variability in reward allocation on the quality of experience of users.
Adding a variance term to the objective function would take things out of the general dynamic programming setting, see e.g. [?]. Indeed, including variance in the utility/cost to users means the overall cost is not decomposable, i.e., it cannot be written as a sum of costs each dependent only on the allocation at that time; this is what makes sensitivity to variability challenging. For instance, [?] discusses a minimum variance controller for linear systems (Section 5.3) where the objective is the minimization of the sum of second moments of the output variable. The sum of second moments is considered instead of the variance, which allows the cumulative cost to be represented as a sum of the costs incurred over time. Note, however, that minimization of second moments does not directly address variability unless the mean is zero. The variance of the cumulative cost is incorporated in the objective for problems in risk-sensitive optimal control (see [?]) to capture the risk associated with a policy. Note, however, that this is the variance of the cumulative cost rather than of the variability seen by a user over time. To summarize, to our knowledge there are no previously proposed works on NUM that address the negative impact of variability. The algorithm proposed here falls into the class of stochastic fixed point algorithms (see [?]). Our algorithm is also related to the algorithms proposed in [?] and [?], although these works also ignore variability.
C. Organization of the paper
In Section II, we discuss the system model and assumptions. We study the optimality conditions for OPT(T) in Section III. We introduce OPTSTAT in Section IV and study its optimality conditions. We start Section V by formally introducing our online algorithm AVR, then present a convergence analysis of AVR in Subsection V-A, and conclude the section by establishing the asymptotic optimality of AVR in Subsection V-B. We conclude the paper in Section VI. The proofs of some of the intermediate results used in the paper are given in the appendices at the end of the paper.
II. SYSTEM MODEL
We consider a slotted system where slots are indexed by t ∈ {0, 1, 2, ...}, and the system serves a fixed set of users N with N = |N|. We let ‖·‖ denote the Euclidean norm associated with the relevant space. For a function U on R, U′ denotes its derivative. We use I as the indicator function, i.e., for any set A, we let I_{a∈A} = 1 if a ∈ A, and zero otherwise. We assume that the reward allocation r(t) ∈ R^N_+ in slot t is constrained to satisfy c_t(r(t)) ≤ 0, where c_t is picked from an (arbitrarily large) finite set C of real-valued maps on R^N_+. We make the following assumptions on these constraints:
Assumptions C.1-C.5 (Time varying constraints)
C.1 There is a constant r_min ≥ 0 such that for any c ∈ C, c(r) ≤ 0 for r such that r_i = r_min for each i ∈ N.
C.2 There is a constant r_max > 0 such that for any c ∈ C and r ∈ R^N_+ satisfying c(r) ≤ 0, we have r_i ≤ r_max for each i ∈ N.
C.3 Each function c ∈ C is convex and differentiable on an open set containing [r_min, r_max].
C.4 For any c ∈ C and r such that r_i = r_min for each i ∈ N, c(r) < 0, or c(r) ≤ 0 if c is an affine function.
C.5 (C_t)_t is a stationary ergodic process, and (π(c) : c ∈ C) denotes the associated stationary distribution.
We let C π denote a random constraint with distribution (π(c) : c ∈ C).
We could allow the constants r_min and r_max to be user dependent, but we avoid that for notational simplicity. The condition C.4 is imposed to ensure that the constraint set is 'nice' when used as the feasible set of an optimization problem such as OPT(T) (see, e.g., Lemma 1).
Next we discuss the assumptions on the functions U R i , U V i i∈N . For each i ∈ N , we make the assumptions U.V and U.R discussed next.
U.V: U^V_i is defined and twice continuously differentiable, with a strictly positive derivative, on an open set containing the range of variances attainable when rewards lie in [r_min, r_max].
Further, we use the fact that for any two elements x_1 and x_2 in any Euclidean space R^d with x_1 ≠ x_2, and α ∈ (0, 1) with ᾱ = 1 − α, we have ‖α x_1 + ᾱ x_2‖² < α ‖x_1‖² + ᾱ ‖x_2‖²  (1), where ‖·‖ denotes the Euclidean norm associated with the space.
U.R: U^R_i is defined and differentiable on an open set containing [r_min, r_max]. Further, we assume that U^R_i is concave and strictly increasing on [r_min, r_max].
The requirements in U.V are satisfied, for example, by the identity function U^V(v) = v, and by the function U^V(v) = √(v + δ) for any (arbitrarily small) δ > 0. We satisfy U.R if we pick the functions (U^R_i)_{i∈N} from the class of strictly concave increasing functions U_α(r) = r^{1−α}/(1 − α) for α > 0, α ≠ 1, and U_1(r) = log(r). These functions are commonly used to enforce fairness and obtain allocations that are α-fair (see [?]); a larger α corresponds to a more fair allocation. Note that we have to ensure that 0 ∉ [r_min, r_max] for these functions to be well defined, and even if this is not the case, we could use U_α(· + δ) instead of U_α(·) for an arbitrarily small positive shift δ in the argument to avoid this requirement.
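As a small illustration of the α-fair family referenced above, the following sketch assumes the standard form U_α(r) = r^(1−α)/(1−α) with U_1(r) = log r; the rates and α values are arbitrary illustrative numbers.

import numpy as np

def alpha_fair(r, alpha):
    # Standard alpha-fair utility: log(r) when alpha == 1, else r**(1 - alpha) / (1 - alpha).
    r = np.asarray(r, dtype=float)
    if alpha == 1:
        return np.log(r)
    return r ** (1.0 - alpha) / (1.0 - alpha)

rates = np.array([0.5, 1.0, 2.0])
for a in (0.5, 1.0, 2.0):   # a larger alpha corresponds to a "more fair" allocation objective
    print(a, alpha_fair(rates, a))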
We will see later that AVR can be made more efficient if U^V_i is linear for some users i ∈ N. We define the following subset of N: N^n_V = {i ∈ N : U^V_i is not linear}. We focus on obtaining an algorithm for reward allocation that can be implemented at a centralized coordinator that has access to c_t at the beginning of slot t. For instance, in a cellular network setting (like case WN below), this could be a base station that estimates the channel strengths of the users in the network to find c_t.
A. Applications and scope of the model
The presence of time varying constraints c_t(r) ≤ 0 allows us to apply the model to several interesting and useful settings. In particular, here we focus on a wireless network setting by discussing three cases WN, WN-E and WN-T, and show that the model can handle problems involving time varying exogenous constraints and time varying utility functions. We start by discussing case WN where the reward in a slot is the rate allocated to the user in that slot. Let P denote a finite (but arbitrarily large) set of positive vectors where each vector corresponds to the peak transmission rate vector for a slot seen by users in a wireless network. Let C = { c_p : c_p(r) = Σ_{i∈N} r_i/p_i − 1, p ∈ P }. Here, for any allocation r, r_i/p_i is the fraction of time the wireless system needs to serve user i in a slot to deliver data at the rate r_i when the user has peak data transmission rate p_i. Thus, the constraint c_p(r) ≤ 0 can be seen as a scheduling constraint that corresponds to the requirement that the sum of the fractions of time that different users are served in a slot should be less than or equal to one.
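The scheduling constraint of case WN is easy to check numerically. A minimal sketch with made-up peak rates follows; feasible(r, p) simply tests c_p(r) ≤ 0, i.e., that the time fractions r_i/p_i sum to at most one.

import numpy as np

def feasible(r, p):
    # c_p(r) = sum_i r_i / p_i - 1 <= 0: the fractions of the slot needed to serve all users fit in one slot.
    return float(np.sum(np.asarray(r) / np.asarray(p))) <= 1.0

p = [10.0, 4.0]                  # illustrative peak rates for two users in this slot
print(feasible([5.0, 2.0], p))   # 0.5 + 0.5 = 1.0 -> feasible
print(feasible([6.0, 2.0], p))   # 0.6 + 0.5 = 1.1 -> infeasible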
Time varying exogenous constraints: We can also allow for time varying exogenous constraints on the wireless system by appropriately defining the set C. For instance, consider case WN-E where a base station in a cellular network allocates rates to users, some of whom are streaming videos. As pointed out above, the QoE of users viewing video content is sensitive to temporal variability in quality. But, while allocating rates to these users, we also need to account for the time-varying resource requirements of the voice and data traffic handled by the base station. We can deal with this by enlarging the set C so that each element also accounts for the fraction of time in a slot that is utilized by the voice and data traffic.
Time varying utility functions: For the users streaming video content discussed in case WN-E, it is more appropriate to view the perceived video quality of a user in a slot as the reward for that user in that slot. However, for users streaming video content, the dependence of perceived video quality (in a short-duration slot roughly a second long, which corresponds to a collection of 20-30 frames) on the compression rate is time varying. This is typically due to the possibly changing nature of the content, e.g., from an action scene to a slower scene. Hence, the 'utility' function that maps the allocated resource (i.e., the rate) to the reward it yields (i.e., perceived video quality) is time varying. This is the setting in case WN-T, and we can handle it as follows. Let q_{t,i}(w_i) denote the strictly increasing concave function that, in slot t, maps the rate w_i allocated to user i to the perceived video quality. For each user i, let Q_i be a finite set of such functions. Hence, we can view WN-T as a case with an associated set of constraints C_3, each element of which is a convex function. For WN and WN-E, we can verify that by choosing r_max = max_{p∈P} max_{i∈N} p_i and an r_min satisfying 0 ≤ r_min ≤ (1/N) min_{p∈P} min_{i∈N} p_i, we satisfy C.1-C.4. In WN-T, if we assume that each function q ∈ Q is differentiable and convex with q(0) = 0 (which are very reasonable assumptions on the dependence between quality and compression rate), then we can verify that by choosing r_min = 0 and r_max = max_{p∈P} max_{i∈N} max_{q∈Q} q(p_i), we satisfy C.1-C.4.
Variability aware rate adaptation for video: The above formulation is applicable to the problem of finding optimal (joint) video rate adaptation that maximizes the sum QoE of users streaming videos utilizing the resources of a shared network. Given the predictions for explosive growth of video traffic in the near future (see [?]), this is one of the most important networking problems today. For a user viewing a video stream, variations in video quality over time have a detrimental impact on the user's QoE, see e.g., [?], [?], [?]. Indeed, [?] even points out that variations in quality can result in a QoE that is worse than that of a constant quality video with lower average quality. Furthermore, [?] proposed and evaluated a metric for QoE which roughly corresponds to the choices U^R_i(r) = r and U^V_i(v) = √(v + δ) in the model described above, for a very small δ > 0.
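As a sanity check of that QoE metric, the short sketch below scores two per-slot quality sequences using the choices U^R(r) = r and U^V(v) = √(v + δ) mentioned above; the quality values and δ are illustrative, not taken from the cited metric.

import numpy as np

def video_qoe(qualities, delta=1e-6):
    # mean quality minus sqrt(variance + delta): penalizes temporal variability in quality.
    q = np.asarray(qualities, dtype=float)
    return float(q.mean() - np.sqrt(q.var() + delta))

print(video_qoe([3.0, 3.0, 3.0, 3.0]))   # constant quality, QoE ~ 3.0
print(video_qoe([4.0, 2.0, 4.0, 2.0]))   # same mean quality, but variability lowers QoE to ~ 2.0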
III. OPTIMAL VARIANCE-SENSITIVE OFFLINE POLICY
In this section, we study OPT(T), the offline formulation for optimal joint reward allocation introduced in Section I. In the offline setting, we assume that (c)_{1:T}, i.e., the realization of the process (C)_{1:T}, is known. We denote the objective function of OPT(T) by φ_T, i.e., φ_T((r)_{1:T}) = Σ_{i∈N} [ (1/T) Σ_{t=1}^T U^R_i(r_i(t)) − U^V_i( Var_T((r_i)_{1:T}) ) ], where (U^R_i)_{i∈N} and (U^V_i)_{i∈N} are functions satisfying U.R and U.V respectively, and Var_T((r_i)_{1:T}) = (1/T) Σ_{t=1}^T (r_i(t))² − ( (1/T) Σ_{t=1}^T r_i(t) )². Hence the optimization problem OPT(T) can be rewritten as: maximize φ_T((r)_{1:T}) subject to c_t(r(t)) ≤ 0 for t = 1, ..., T, where c_t ∈ C is a convex function for each t. The next result asserts that OPT(T) is a convex optimization problem satisfying Slater's condition (Section 5.2.3, [?]) and that it has a unique solution.
Lemma 1. OPT(T ) is a convex optimization problem satisfying Slater's condition with a unique solution.
Proof: Given the assumptions U.R and U.V, the concavity of the objective of OPT(T) is easy to establish once we prove the convexity of the function U^V_i(Var_T(·)) for each i ∈ N. Using (1) and the definition of Var_T(·), we can show that U^V_i(Var_T(·)) is a convex function for each i ∈ N: for two different reward vectors (r¹)_{1:T} and (r²)_{1:T}, any i ∈ N, α ∈ (0, 1) and ᾱ = 1 − α, the quantity Var_T(α(r¹_i)_{1:T} + ᾱ(r²_i)_{1:T}) can be bounded using (1), which yields the convexity of U^V_i(Var_T(·)). Using the above arguments and the concavity of U^R_i and of −U^V_i(Var_T(·)), we conclude that OPT(T) is a convex optimization problem.
Note that, from (1) (since the inequality there is strict), the inequality above is strict unless the two users' reward sequences agree; thus, for the inequality not to be strict, we require that Var_T((r¹_i)_{1:T}) = Var_T((r²_i)_{1:T}). Further, Slater's condition is satisfied; this mainly follows from assumption C.4. Now, for any i ∈ N, U^R_i and −U^V_i(Var_T(·)) are not necessarily strictly concave, but we can still show that the solution is unique, as follows. Let (r¹)_{1:T} and (r²)_{1:T} be two optimal solutions to OPT(T). Then, from the concavity of the objective, α(r¹)_{1:T} + ᾱ(r²)_{1:T} is also an optimal solution for any α ∈ (0, 1) and ᾱ = 1 − α. Due to the concavity of U^R_i(·), the convexity of U^V_i(Var_T(·)), and the optimality of (r¹)_{1:T} and (r²)_{1:T}, we have Var_T((r¹_i)_{1:T}) = Var_T((r²_i)_{1:T}) for each i ∈ N. Since U^R_i is a strictly increasing function for each i ∈ N, it then follows that (r¹)_{1:T} = (r²)_{1:T}. From the above discussion, we can conclude that OPT(T) has a unique solution.
We let (r^T)_{1:T} denote the optimal solution to OPT(T). Since OPT(T) is a convex optimization problem satisfying Slater's condition (Lemma 1), the Karush-Kuhn-Tucker (KKT) conditions ([?]) given next are necessary and sufficient for optimality.
KKT-OPT(T):
(r^T)_{1:T} is an optimal solution to OPT(T) if and only if it is feasible, and there exist non-negative constants (µ^T)_{1:T} and ((γ^T_i : i ∈ N))_{1:T} such that the associated stationarity and complementary slackness conditions hold for all i ∈ N and t ∈ {1, ..., T}. Here c′_{t,i} denotes ∂c_t/∂r_i. From (6), we see that the optimal reward allocation r^T(t) in any time slot t depends on the entire allocation (r^T)_{1:T} only through a small set of summary quantities associated with (r^T)_{1:T}: (i) the time-average rewards m^T, and (ii) the derivatives (U^V_i)′, for i ∈ N, evaluated at the variance seen by the respective users. So, if a genie revealed these quantities, the optimal allocation for each slot t could be determined by solving an optimization that only requires knowledge of c_t (associated with the current slot) and not of (c)_{1:T}. We exploit this key idea while formulating the online algorithm AVR (proposed in Section V).
IV. A RELATED PROBLEM: OPTSTAT
In this section, we introduce and study another optimization problem OPTSTAT closely related to OPT(T ). The formulation OPT(T) mainly involves time averages of various quantities associated with it. Instead, the formulation of OPTSTAT is based on the expected value of the corresponding quantities evaluated using the stationary distribution of (C t ) t .
Recall that (see C.5) (C_t)_t is a stationary ergodic process with stationary distribution (π(c) : c ∈ C), i.e., for c ∈ C, π(c) is the probability of the event C_t = c. Since C is finite, we assume that π(c) > 0 for each c ∈ C without any loss of generality. Let (r(c))_{c∈C} be a vector representing the reward allocation r(c) (∈ R^N) to the users for each c ∈ C. Although we are abusing the notation introduced earlier, where r(t) denoted the allocation to the users in slot t, one can differentiate between the two based on the context in which they are being discussed. Now, let Var_π((r_i(c))_{c∈C}) = Σ_{c∈C} π(c) (r_i(c))² − ( Σ_{c∈C} π(c) r_i(c) )². The optimization problem OPTSTAT is then to maximize Σ_{i∈N} [ Σ_{c∈C} π(c) U^R_i(r_i(c)) − U^V_i( Var_π((r_i(c))_{c∈C}) ) ] subject to c(r(c)) ≤ 0 for each c ∈ C. The next result gives a few useful properties of OPTSTAT.
Lemma 2. (a) OPTSTAT is a convex optimization problem satisfying Slater's condition. (b) OPTSTAT has a unique solution.
Proof: The proof is similar to that of Lemma 1 (and is easy to establish once we prove the convexity of the function Var π (.)).
Using Lemma 2 (a), we can conclude that KKT conditions are necessary and sufficient for optimality for OPTSTAT. Let (r π (c) : c ∈ C) denote the optimal solution.
KKT-OPTSTAT:
There exist non-negative constants (µ^π(c) : c ∈ C) and ((γ^π_i(c))_{i∈N} : c ∈ C) such that the associated stationarity and complementary slackness conditions hold, where c′_i denotes ∂c/∂r_i; the derivation uses the expression for the partial derivative of Var_π(·) with respect to r_i(c_0) for any c_0 ∈ C and i ∈ N.
V. ADAPTIVE VARIANCE AWARE REWARD ALLOCATION
In this section, we present our online algorithm AVR to solve OPT(T ), and establish its asymptotic optimality.
The reward allocations for AVR are obtained by solving the per-slot optimization problem OPTAVR(m, v, c) given below. Note that OPTAVR(m, v, c) is closely related to OPT-ONLINE (discussed in Subsection I-A). Also, note that the term h_0(e, v) in its objective does not depend on the allocation and thus can be ignored while solving the optimization problem; however, it modifies the objective function and (thus) the optimal value of the objective so as to ensure certain nice properties for the partial derivatives of the latter (see Lemma 3 (b)). Let r*(m, v, c) denote the optimal solution to OPTAVR(m, v, c). Also, let H denote the set of admissible estimates, built as a cross product of sets (where × denotes the cross product operator for sets). Next, we describe the algorithm AVR in detail. AVR consists of three steps, AVR.0-AVR.2, given next. AVR.0: initialize the estimates θ_0 ∈ H. Then, in each slot t + 1 for t ≥ 0, carry out the following steps. AVR.1: The reward allocation in slot t + 1 is given by r*(m(t), e(t), v(t), c_{t+1}) and will be denoted by r*(t + 1) (when the dependence on the variables is clear from context). AVR.2: Update m_i for all i ∈ N and v_i for all i ∈ N^n_V according to the running-average update equations (14)-(15). We see that the update equations (14)-(15) roughly ensure that the parameters m(t) and (v_i(t))_{i∈N^n_V} keep track of the mean reward and the variance in reward, respectively, associated with the reward allocation under AVR. Also, note that we do not have to keep track of the estimates of variance in reward seen by users i with linear U^V_i. We let θ_t = (m(t), v(t)) for each t. The update equations (14)-(15) ensure that θ_t stays in the set H.
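Since the exact OPTAVR objective and the update equations (14)-(15) are not reproduced here, the following Python sketch is only a simplified stand-in for AVR: each slot it solves a per-slot problem that trades off log-utility against a quadratic penalty for deviating from the current mean estimate (weighted by U^V′ evaluated at the current variance estimate, with U^V(v) = √(v + δ)), under a WN-style scheduling constraint, and then updates running mean/variance estimates with a 1/(t+1) step size. All of these modeling choices are our assumptions, not the paper's equations.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, T, r_min, r_max, delta = 2, 200, 0.01, 10.0, 1e-3

def per_slot_allocation(m, v, p):
    # Simplified surrogate of OPTAVR: maximize sum_i [ log(r_i) - w_i * (r_i - m_i)**2 ]
    # subject to sum_i r_i / p_i <= 1, where w_i = U_V'(v_i) with U_V(v) = sqrt(v + delta).
    w = 0.5 / np.sqrt(v + delta)
    obj = lambda r: -np.sum(np.log(r) - w * (r - m) ** 2)
    cons = [{"type": "ineq", "fun": lambda r: 1.0 - np.sum(r / p)}]
    res = minimize(obj, x0=np.full(N, r_min), bounds=[(r_min, r_max)] * N,
                   constraints=cons, method="SLSQP")
    return res.x

m, v = np.full(N, 1.0), np.zeros(N)        # online estimates of mean and variance (AVR.0)
for t in range(T):
    p = rng.uniform(2.0, 10.0, size=N)     # this slot's peak rates, i.e., the constraint for slot t + 1
    r = per_slot_allocation(m, v, p)       # AVR.1: allocate using the current estimates
    step = 1.0 / (t + 1)                   # AVR.2: decreasing-step running updates
    v = (1 - step) * v + step * (r - m) ** 2
    m = (1 - step) * m + step * r
print("mean estimates:", m, "variance estimates:", v)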
For any (m, v, c) ∈ H, we have U V i ′ (v i ) > 0 (see assumption U.V). Hence, OPTAVR(m, v, c) is a convex optimization problem with a unique solution. Further, using assumption C.4, we can show that it satisfies Slater's condition. Hence, the optimal solution for OPTAVR(m, v, c) satisfies KKT conditions given below.
KKT-OPTAVR(m, v, c):
There exist non-negative constants µ* and (γ*_i : i ∈ N) such that the associated stationarity and complementary slackness conditions hold for all i ∈ N. Let h(m, v, c) denote the optimal value of the objective function of OPTAVR(m, v, c), i.e., h is the function, defined on an open set (the obvious one that can be obtained from the domains of the functions (U^R_i, U^V_i)_{i∈N}) containing H, obtained by evaluating the objective of OPTAVR(m, v, c) at r*, where r* stands for r*(m, v, c).
In the next result, we establish continuity and differentiability properties of r * (m, v, c) (also denoted by r * in the result) and h (m, v, c) respectively, viewing them as functions of (m, v).
Proof Sketch: Proofs of parts (a) and (b) mainly rely on some fundamental results on perturbation analysis of optimization problems from [?] and [?]. Part (a) can be proved using Theorem 2.2 in [?]. The result in part (b) can be shown using Theorem 4.1 in [?]. This theorem tells us that if certain conditions are met, then we can evaluate the partial derivative of the optimal value of a parametric optimization problem (with respect to any parameter) by just evaluating the partial derivative of the objective of the optimization problem, and then substituting the optimal solution. For instance, by using the theorem, we can evaluate the partial derivative of the optimal value h(θ, c) with respect to m_i as follows: we first evaluate the partial derivative of the objective function of OPTAVR(θ, c) with respect to m_i, and then, on substituting r* in the resulting expression, we obtain the first result in part (b). The other results can be obtained similarly. Parts (c) and (d) can be shown using parts (a) and (b) respectively, and the Bounded Convergence Theorem (see [?]).
From part (b) of the above result, we see that the update equations (14)-(15) ensure that θ(t) moves in a direction that increases h(.). This is in part due to the careful choice of the function h 0 (which is independent of variables being optimized) appearing in the objective function of OPTAVR.
Next, we find relationships between the optimal solution (r^π(c) : c ∈ C) of OPTSTAT and OPTAVR. Towards that end, let m^π_i = Σ_{c∈C} π(c) r^π_i(c) and v^π_i = Var_π((r^π_i(c))_{c∈C}) for each i ∈ N. Next, let H* denote the set of estimates satisfying the conditions (19)-(20). Part (a) of the next result provides a fixed-point-like relationship for the optimal solution to OPTSTAT using the optimal solution function r*(·) of OPTAVR, and part (b) is a useful consequence of part (a). A proof for the result is given in Appendix A.
The next result tells us that we can obtain the optimal solution to OPTSTAT from any element in H * by using the optimal solution function r * (.). Further, it gives us very useful uniqueness results for the components of the elements in H * . A proof for the result is given in Appendix B.
So far, we have focused only on the optimization problem OPTAVR associated with AVR. In the next subsection, we study the evolution of (θ_t)_t under AVR.
A. Convergence Analysis
In this subsection, we focus on establishing some properties related to the convergence of the sequence (θ_t)_t that are key to the proof of the main optimality result (Theorem 1).
Towards that end, we study the differential equation (21), θ̇(t) = ḡ(θ(t)), where ḡ(θ) is a function taking values in R^3N defined for θ = (m, v) ∈ H. The motivation for studying this differential equation should be partly clear by comparing the RHS of (21) with the update equations (14)-(15) in AVR. Now we study (21) in light of the above result and obtain a convergence result for the differential equation, which tells us that for any initial condition, θ(t) evolving according to (21) converges to a set which, using (19)-(20), we can verify is contained in H*. A proof for the next result is discussed in Appendix C.
Lemma 6. Suppose θ(t) evolves according to (21). Then, θ(t) converges to H* as t tends to infinity for any θ(0) ∈ H. Thus, due to the above result, we have a key convergence result for the differential equation (21), which is closely related to the update equations (14)-(15) of AVR. Next, we use this result to obtain a convergence result for (θ_t)_t. We do so by viewing (14)-(15) as a stochastic approximation update equation, and using a result from [?] that helps us to relate it to the differential equation (21). We had pointed out that our main interest is in the convergence properties of (m_i(t))_{i∈N}. The next result uses Lemma 7 to establish the desired convergence property. A proof for the result is given in Appendix D.
Lemma 8. If θ_0 ∈ H, then the sequence (θ_t)_t generated by AVR satisfies: (a) for each i ∈ N, lim_{t→∞} m_i(t) = m^π_i; (b) for each c ∈ C, lim_{t→∞} r*(θ(t), c) = r*(θ^π, c); and analogous limits (parts (c) and (d)) hold for the remaining tracked quantities. Next, we use Lemma 8 and stationarity to establish certain properties associated with the time averages of the reward allocations under the online scheme AVR. For brevity, in the following result, we let r*(t) denote r*(m(t), v(t), c_t). A proof for the result is given in Appendix E.
Lemma 9. For almost all sample paths, the time averages of the reward allocations under AVR converge to the corresponding averages under the stationary allocation r*(θ^π, ·) (part (a)), with an analogous statement for the variability of the allocations (part (b)).
B. Asymptotic Optimality of AVR
The next result establishes the asymptotic optimality of AVR, i.e., if we run AVR for a long enough period, the difference in performance between AVR and the optimal finite horizon policy becomes negligible.
To prove part (b), consider any realization of (c)_{1:T}. Let (µ*)_{1:T} and ((γ*_i : i ∈ N))_{1:T} be the sequences of non-negative real numbers satisfying (16), (17) and (18) for the realization. Hence, from the non-negativity of these numbers, the feasibility of (r^T)_{1:T}, and the fact that φ_T is a differentiable concave function, we have (see [?]) that (r*(m_1, v_1, c))_{c∈C} and (r*(m_2, v_2, c))_{c∈C} would be two distinct solutions to OPTSTAT. However, this contradicts the fact that OPTSTAT has a unique solution (see Lemma 2(b)). Thus, (b) has to hold. Now suppose that (m_1, v_1), (m_2, v_2) ∈ H* and that (c) does not hold. Then, we can conclude that at least one of the conditions given in part (c) does not hold. For instance, suppose that v_{1j} ≠ v_{2j} for some j ∈ N^n_V. This, along with the fact that (m_1, v_1), (m_2, v_2) ∈ H* (and thus they satisfy (20)), implies that Var(r*_i(m_1, v_1, C_π)) ≠ Var(r*_i(m_2, v_2, C_π)) for some i. Thus, we can conclude that for some c_0 ∈ C and i ∈ N, r*_i(m_1, v_1, c_0) ≠ r*_i(m_2, v_2, c_0). We can reach the same conclusion if any of the other conditions given in (c) are violated. But this conclusion contradicts part (b). Thus, (c) has to hold.
Part (d) follows from part (c) and Lemma 4 part (b).
In the remaining part of the proof, we prove that H π ⊂ H * from which the main claim follows.
From the above conclusion and (24), we can conclude that for any θ ∈ H^π, we have θ ∈ H^*; hence H^π ⊂ H^*. Now, since θ(t) converges to H^π, we can conclude that θ(t) converges to H^* and the result follows.
APPENDIX D PROOF FOR LEMMA 8
For any (m, v) ∈ H * , m = m π and from Lemma 7, θ(t) converges to H * . Hence (a) holds.
Thus, part (b) holds. Parts (c) and (d) can be proved using a similar approach as above by using the following facts: (i) θ(t) converges to
APPENDIX E PROOF OF LEMMA 9
Consider any realization (c_t)_t of (C_t)_t. For any c ∈ C, using Lemma 8 (b) and the ergodicity of (C_t)_t, we have lim_{T→∞} (1/T) Σ_{t=1}^T I_{(c_t=c)} r*(θ_t, c) = π(c) r*(θ^π, c). Since r*(θ_t, c_t) = Σ_{c∈C} I_{(c_t=c)} r*(θ_t, c) and C is a finite set, we can use the above result to conclude the claimed limit for the time average of the allocations. This proves part (a). Using the ergodicity of (C_t)_t, part (b) can be proved using a similar approach by using part (c) of Lemma 8. | 2012-04-13T18:48:47.000Z | 2011-11-16T00:00:00.000 | {
"year": 2011,
"sha1": "ed7c0ca796877715424722c3833fa54b57f1f6d4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ed7c0ca796877715424722c3833fa54b57f1f6d4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
18309096 | pes2o/s2orc | v3-fos-license | Neutralization of Bacterial YoeBSpn Toxicity and Enhanced Plant Growth in Arabidopsis thaliana via Co-Expression of the Toxin-Antitoxin Genes
Bacterial toxin-antitoxin (TA) systems have various cellular functions, including as part of the general stress response. The genome of the Gram-positive human pathogen Streptococcus pneumoniae harbors several putative TA systems, including yefM-yoeBSpn, which is one of four systems that had been demonstrated to be biologically functional. Overexpression of the yoeBSpn toxin gene resulted in cell stasis and eventually cell death in its native host, as well as in Escherichia coli. Our previous work showed that induced expression of a yoeBSpn toxin-Green Fluorescent Protein (GFP) fusion gene apparently triggered apoptosis and was lethal in the model plant, Arabidopsis thaliana. In this study, we investigated the effects of co-expression of the yefMSpn antitoxin and yoeBSpn toxin-GFP fusion in transgenic A. thaliana. When co-expressed in Arabidopsis, the YefMSpn antitoxin was found to neutralize the toxicity of YoeBSpn-GFP. Interestingly, the inducible expression of both yefMSpn antitoxin and yoeBSpn toxin-GFP fusion in transgenic hybrid Arabidopsis resulted in larger rosette leaves and taller plants with a higher number of inflorescence stems and increased silique production. To our knowledge, this is the first demonstration of a prokaryotic antitoxin neutralizing its cognate toxin in plant cells.
Introduction
Toxin-antitoxin (TA) systems are extensively found in bacteria and archaea, where they play diverse roles in important cellular functions. Bacterial toxin-antitoxin (TA) systems usually consist of a pair of genes encoding a stable toxin and its cognate labile antitoxin and are located in the chromosome or in plasmids of most bacterial species [1,2]. Currently, TA systems have been classified into five types (types I-V) according to how the antitoxins counteract the effect of their cognate toxins [2][3][4][5][6]. However, the most studied TA systems are of type II, in which the protein antitoxin inhibits the toxin protein through tight binding, with the antitoxin blocking the toxin active site [1,7].
Most of the characterized TA system toxins function as endoribonucleases, while other toxins disrupt DNA replication and transcription. Some toxins may also interfere with the synthesis of the bacterial cell wall [7,8]. Among these bacterial toxins, some have been shown to be functional when expressed in eukaryotic cells. They have the potential for use in the manipulation of eukaryotic cell growth, such as in restricting the accidental escape of genetically-modified cells [9]. The bacterial RelE toxin from E. coli has been shown to be functional in the yeast Saccharomyces cerevisiae, where induction of the toxin gene in transgenic yeast cells inhibited their growth. When expressed in human osteosarcoma cells, RelE was also shown to trigger apoptosis [10].
The Gram-positive bacterium, Streptococcus pneumoniae (the pneumococcus), is a common cause of human respiratory tract infections and has been associated with outstanding morbidity and mortality [11]. Up to 10 putative type II pneumococcal TA systems have been identified [12]. Of these, four have been demonstrated to be functional, namely relBE2, yefM-yoeB Spn , pezAT and phd-doc [13,14]. The yefM-yoeB Spn TA system has been shown to be functional with overexpression of the yoeB Spn toxin, leading to cell stasis and eventually cell death in both S. pneumoniae and E. coli [15]. Our previous work [16] produced transgenic Arabidopsis thaliana carrying a yoeB Spn chromosomal toxin-Green Fluorescent Protein (GFP) fusion gene expressed using a 17-β-estradiol inducible two-component expression system. We showed that expression of the yoeB Spn toxin-GFP fusion gene apparently triggered apoptosis, which resulted in lethality in A. thaliana; therefore, we suggested that the conditional expression of yoeB Spn toxin could be used to ablate pollen formation for the development of male sterile plants for containment of transgenic plants or for hybrid seed production [16]. In the current study, we investigated the effects of co-expressing the yefM Spn antitoxin and yoeB Spn toxin in A. thaliana by cross-pollination between plants carrying either the yefM Spn antitoxin or yoeB Spn toxin-GFP fusion in inducible plant gene expression constructs.
Production of yefM Spn Antitoxin in Transgenic A. thaliana
In this study, a 17-β-estradiol-inducible two-component system [17] was used to produce transgenic A. thaliana for controlled expression of the yefM Spn antitoxin gene. The yefM Spn antitoxin gene was cloned in the responder vector pMDC160 (resulting in the recombinant designated pMDC160_yefM), while the cauliflower mosaic virus (CaMV) 35S promoter was cloned into the activator vector pMDC150 (as the recombinant pMDC150_35S [16]) to drive the constitutive expression of the 17-β-estradiol-responsive XVE transcriptional activator ( Figure 1). These two constructs were introduced into A. thaliana by the floral dip method, and five independent transgenic lines (T 0 ) were obtained by screening under kanamycin and Basta selection. After subsequent screening on the kanamycin-Basta mixture, three independent T 1 transgenic lines were chosen for further analysis and used to produce 68 Basta-and kanamycin-resistant transgenic T 2 lines. The introduction of the yefM Spn antitoxin into the plant genome was confirmed by PCR analysis of three randomly-selected yefM Spn -expressing plants from each T 2 line (Lines 1, 2 and 3) (Figure 2, Lanes 1.1-3.3). All of the T 2 progeny tested contained the expected 255-bp band corresponding to the yefM Spn DNA fragment, indicating that the transgene was successfully transferred into A. thaliana.
Transgenic A. thaliana Showed Normal Morphology after 17-β-Estradiol Induction for the Expression of yefM Spn Antitoxin
The growth of the transgenic A. thaliana (yefM Spn ) plants induced with 17-β-estradiol to express yefM Spn was not distinguishable from that of the control plants, i.e., 17-β-estradiol-induced A. thaliana wild-type, non-induced wild-type and non-induced A. thaliana (yefM Spn ) transgenic plants, with no morphological differences in rosette leaves and shape, even nine days after induction (Figure 3). Similar results were observed for all three independent lines.
Figure 1 (legend): The transgenes (yefM Spn in pMDC160_yefM and the yoeB Spn -GFP fusion in pMDC221_yoeB-GFP) were cloned between the AscI and PacI unique restriction sites via Gateway recombination. In pMDC150_35S, expression of the XVE activator is driven by the CaMV 35S promoter and is, thus, constitutively expressed. Expression of the transgenes yefM Spn (in pMDC160_yefM) and yoeB Spn -GFP (in pMDC221_yoeB-GFP) is driven by the XVE-responsive promoter (labeled as OlexA TATA) and is, thus, inducible by 17-β-estradiol [16,17]. The plant selection markers (in yellow boxes and labeled as: Bar, Basta resistance gene; Kan, kanamycin resistance gene; hpt, hygromycin resistance gene) are driven by the nos promoter (labeled as nosP). TE9, T3A and nosT are transcriptional terminators (indicated in purple boxes); pBSK, pBlueScript backbone (indicated as a grey box).
Crosses of T1 Transgenic yoeBSpn-GFP Plants with T1 Transgenic yefMSpn Plants Produced yefMSpn × yoeBSpn-GFP Hybrid Lines
All T0 transgenic yoeBSpn-GFP and yefMSpn plants were capable of self-pollination and produced normal seeds. The seeds were harvested and germinated under selection. The T1 yoeBSpn-GFP plants were crossed with T1 yefMSpn plants, and the seeds were harvested.
Hybrids of Transgenic Plants Expressed Both yefMSpn and yoeBSpn-GFP after Induction with 17-β-Estradiol
Before induction, the yefMSpn × yoeBSpn-GFP hybrid plants grown in selective media did not show any signs of abnormality, and we detected no expression of either transgene by RT-PCR. The RT-PCR analysis with total RNA extracted from rosette leaf tissues after induction with 100 μM 17-β-estradiol confirmed the transcription of both genes from Days 1-7 after induction (Figure 5a). The relative expression levels of yefMSpn and yoeBSpn-GFP were analyzed by qRT-PCR using the rosette leaves from the same plants for Days 1-7 after induction (Figure 5b). The transcript levels of yefMSpn and yoeBSpn-GFP each increased over the first three days, after which they decreased, with yefMSpn showing higher relative expression levels than yoeBSpn-GFP from Day 2 post-induction.
Induced Expression of yefMSpn and yoeBSpn-GFP Enhanced Growth in Hybrid A. thaliana
Before induction, the growth of the yefMSpn transgenic plants, yoeBSpn-GFP transgenic plants and yefMSpn × yoeBSpn-GFP hybrid plants was similar to that of the untransformed control plants (Figure 6a). By the seventh day post-induction, transgenic plants expressing the yoeBSpn-GFP fusion had died (Figures 6 and 7; as we had reported previously in [16]). However, hybrid transgenic plants co-expressing both the yoeBSpn-GFP fusion and the yefMSpn antitoxin gene remained healthy, indicating that co-expression of the yefMSpn antitoxin was able to neutralize the lethality of the yoeBSpn toxin (Figure 6a). Transgenic plants expressing yoeBSpn showed characteristic DNA fragmentation patterns indicative of apoptosis [16]. The DNA fragmentation assay showed that no fragmentation was observed in the 17-β-estradiol-induced transgenic hybrid plants (Figure S1). Interestingly, in all three independent hybrid lines, the plants induced to express both yefMSpn and yoeBSpn-GFP displayed increased growth in terms of height, number of branches and inflorescence stems (Figure 6c-e), as well as rosette leaf size at the full stage of maturity, five weeks after 17-β-estradiol induction (i.e., nine weeks post-planting). The growth of each rosette leaf in the hybrid plants exceeded that of the leaves from the non-induced and induced control plants (i.e., yefMSpn, yoeBSpn-GFP and wild-type plants); both the petiole length and width of the rosette leaves were greater and significantly increased in the induced hybrid plants (Figures 6b and S2). The increased growth of the hybrid A. thaliana plants was also reflected in the significant increase in dry weight (Figure 6f).
At nine weeks post-planting (i.e., five weeks after induction), the differences in the length of siliques were, however, not significant (Figure 7a,b). Nevertheless, the number of siliques per induced hybrid plants was significantly higher (up to 50%) than that of all control plants (Figure 7c), except for the yoeBSpn-GFP transgenic plants that had died after the first week of induction, and therefore, no measurement could be recorded.
Discussion
The yefM-yoeB Spn TA system from the bacterial pathogen S. pneumoniae has been characterized and shown to be functional in its native host, as well as in E. coli, where over-expression of the yoeB Spn toxin, an endoribonuclease, was found to be inhibitory to cellular growth [12,15,18]. In our earlier study, we showed that the yoeB Spn toxin-GFP fusion was functional and toxic in Arabidopsis thaliana plants, as its induced expression was associated with signs of apoptosis [16]. The current study aimed to determine whether co-expression of the S. pneumoniae yefM Spn antitoxin with the yoeB Spn toxin-GFP fusion in A. thaliana could neutralize the lethal effects of the toxin. While over-expression of yefM Spn in E. coli reportedly inhibited growth [15], we found that the induced expression of the yefM Spn antitoxin alone in A. thaliana did not adversely affect the plants, nor were there any morphological differences between the wild-type, transgenic induced and non-induced plants up to five weeks after induction (Figures 3 and 6). The lack of any change in transgenic A. thaliana expressing yefM Spn is an important observation, as when expressed together with the toxin, clear changes in phenotype were evident ( Figure 6).
In this study, we performed sexual crosses to obtain hybrid plants containing both yoeB Spn toxin-GFP and yefM Spn antitoxin constructs. Co-expression of the yefM Spn and yoeB Spn -GFP enabled hybrid plants to thrive (Figure 6), in contrast to expression of yoeB Spn toxin-GFP alone, which was lethal [16]. This indicated that the yefM Spn antitoxin was able to neutralize the yoeB Spn toxin-GFP fusion in A. thaliana. A study carried out by Nieto et al. [15] has shown that the lethal action of the pneumococcal YoeB Spn toxin was neutralized by tight binding with its cognate YefM Spn antitoxin in its native host cell, as well as E. coli; we suggest that YefM Spn and YoeB Spn -GFP in A. thaliana behaved similarly. Our previous work revealed that the yoeB Spn toxin-GFP fusion mRNA was expressed in transgenic A. thaliana, peaking at three days after induction, after which it decreased [16]. In this study, induction of either of the yoeB Spn toxin-GFP fusion or yefM Spn antitoxin constructs in separate hybrid plants showed similar stable expression with the yefM Spn transcript levels also peaking three days after induction. The yoeB Spn -GFP expression levels were also relatively lower than that of its cognate antitoxin, yefM Spn (Figure 5b), in the hybrid plants, the reason for which is currently unknown.
Interestingly, the induced expression of yefM Spn and yoeB Spn -GFP constructs when together in hybrid plants led to unexpected phenotypic effects in the growth and morphology of A. thaliana. The major alterations were seen in larger rosette leaves, taller plants with higher number of inflorescence stems and increased silique production, as compared to the wild-type, transgenic induced and non-induced yefM Spn , transgenic induced and non-induced yoeB Spn -GFP and to the non-induced hybrid control plants. It is likely that the larger rosette leaves provide more of the photosynthate needed for a higher number of inflorescence stems and seed development, thereby leading to an increase in silique production. The reasons and mechanisms are not known, but some of the possible pathways that could be affected are plant hormones, water-use efficiency, mineral uptake and photosynthetic efficiency. In S. pneumoniae, the YefM Spn antitoxin also functions as a transcriptional autorepressor by binding to a palindrome sequence that overlaps the promoter for the yefM-yoeB Spn operon. The YoeB Spn toxin functions as a co-repressor by enhancing the binding of YefM Spn to its operator site when it is in a YefM-YoeB Spn protein complex [18]. Thus, the YefM-YoeB Spn protein complex has DNA-binding capabilities, and it is thus possible that binding of the YefM-YoeB-GFP protein complex to certain sections of the Arabidopsis genome in the transgenic hybrids could have led to the enhanced growth phenotype, as indicated in Figures 6 and 7. To explore the possibility that the A. thaliana genome contained similar sequences to the native YefM-YoeB Spn binding sites, the 27-nucleotide binding sequence of the protein complex obtained through DNase I footprinting assays [18] was used as the query in a BLASTN search of the A. thaliana genome sequence [19]. We found 10 A. thaliana loci that had 18-21 nucleotide matches to the 27-nucleotide YefM Spn -binding motif (Table S1). None appeared to be within gene promoter or enhancer regions. The highest nucleotide identities (at 21 out of 27 nucleotides) were found within the ARPC5 gene (GI: 240256243), which codes for an actin-related protein 2/3 complex, subunit 5A, and that plays a role in cell morphogenesis, plant growth and development [20]. Two of the matching sequences (at identities of 19 out of 27 nucleotides) were found to belong to genes coding multidrug and toxic compound extrusion (MATE) efflux family proteins (GIs: 240256493 and 240254678) that have been reported to modulate genes involved in plant growth and development, as well as conferring defense mechanism against biotic stress [21]. As far as can be ascertained, neither the YefM Spn nor the YoeB Spn -GFP proteins (with estimated molecular weights of 9.7 and 37.6 kDa, respectively, as determined using ProtParam) contained any recognizable nuclear localization sequence (NLS), and the estimated size of the putative complex is within the limit of size to allow for nuclear transport (i.e., 90-110 kDa; [22]). Nevertheless, future studies using structural molecular models for protein-DNA binding of the putative complex to the Arabidopsis genome might shed more light on the mechanism, as using the DNA sequence alone has a limited ability for the confirmation of suitable binding sites.
The detailed mechanism by which co-expression of yoeB Spn -GFP and yefM Spn led to enhanced plant growth remains to be elucidated and is a subject for further research. This study has demonstrated that co-expressing the pneumococcal yoeB Spn toxin gene with its cognate yefM Spn antitoxin gene was able to neutralize the lethality of the YoeB Spn toxin in transgenic A. thaliana. To our knowledge, this is the first demonstration of a prokaryotic antitoxin neutralizing its cognate toxin in plant cells. In addition, the enhanced growth phenotype of the transgenic hybrid plants co-expressing the YefM Spn and YoeB Spn proteins is an attractive motivation to pursue research along this line for potential biotechnological applications.
Construction of Plasmids
This study used the plant inducible activator vector pMDC150_35S, which contains the CaMV 35S promoter [16], and the responder vector pMDC160_yefM Spn, which contains the yefM Spn antitoxin gene. To develop pMDC160_yefM Spn, the yefM Spn antitoxin coding sequence from S. pneumoniae was amplified by PCR from the construct pET28a_HisYefMYoeB [18] with primers yefM_F: 5′-CACCATGGAAGCAGTCCTT-3′ and yefM_R: 5′-TCACTCCTCAATCACATGGA-3′. The PCR-amplified yefM Spn was inserted into the Invitrogen Gateway® pENTR_D_TOPO cloning vector (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions and transformed into E. coli TOP10-competent cells. The presence of the insert was confirmed by colony PCR using the primers M13_F: 5′-GTAAAACGACGGCCAG-3′ and yefM_R, which resulted in an amplicon of approximately 455 bp. The plasmids were extracted from positive colonies and verified by conventional Sanger DNA sequencing prior to cloning the yefM Spn antitoxin coding sequence into the Gateway® pMDC160 destination vector [17] via LR Clonase using Gateway® technology. The two expression constructs, pMDC150_35S and pMDC160_yefM Spn, were separately transformed into Agrobacterium tumefaciens strain LBA4404 using a freeze and thaw method [23]. Antibiotic resistance was used to select the transformed colonies.
Plant Material and Growth Condition
Arabidopsis thaliana ecotype Columbia 0 was used in all experiments. Seeds were stratified for 3 days at 4 °C and sown on soil. Plants were grown in a growth room with a 16-h photoperiod under 70% relative humidity at 22 °C. The two control plants used in this study were wild-type and non-induced transgenic Arabidopsis (yoeB Spn -GFP, yefM Spn and yefM Spn × yoeB Spn -GFP hybrid).
Plant Transformation and Selection
Agrobacterium tumefaciens-mediated transformation of A. thaliana with both recombinant constructs pMDC150_35S and pMDC160_yefM Spn was performed using a double floral dip method as described by [24]. A total of 5 independent transformation events were conducted, from which 3 transgenic lines were used for further analysis. Seeds were harvested and grown under antibiotic and/or herbicide selection until the T 2 generation. For selection of transgenic Arabidopsis plants that are resistant to the antibiotic kanamycin (for pMDC150_35S) and the herbicide Basta (glufosinate) (for pMDC160_yefM Spn ), seeds were stratified for 3 days at 4 °C before sowing on soil. The germinated seeds were grown for 1 week before spraying with a mixture of 50 mg/L kanamycin and 0.25 mg/L Basta. The antibiotic-herbicide mixture was applied at 3-day intervals for 2 weeks. Surviving seedlings with resistance to kanamycin and Basta were transferred to new soil until maturity. For selection of transgenic Arabidopsis that were resistant to kanamycin (for pMDC150_35S) and hygromycin (for pMDC221_yoeBGFP), the concentration used was 50 mg/L for each antibiotic and applied using the same regimen as above.
PCR Analysis
Genomic DNA was isolated from the rosette leaves using a cetyl trimethylammonium bromide (CTAB) method, as previously described [25]. PCR analysis was performed to confirm the presence of the entire gene cassette in transgenic plants using primers Transg_yefM_F: 5′-ATGGAAGCAGTCCTTTACTCA-3′ with Transg_yefM_R: 5′-TCACTCCTCAATCACATGGA-3′ for the yefM Spn -expressing plants, and Transg_yoeB_F: 5′-CACCATGCTACTCAAGTTTA-3′ with Transg_GFP_R: 5′-TTATAATCCCAGCAGCTGTT-3′ to detect the yoeB Spn -GFP transgene for the yefM Spn × yoeB Spn -GFP hybrid plants. The same Transg_yefM primers were also used to detect the presence of the yefM Spn transgene in the hybrid plants. PCR reaction mixtures consisted of 50 ng genomic DNA, 0.5 µM of each primer, 1× GoTaq® Green Master Mix (Promega Corporation, Madison, WI, USA) and MilliQ water to make up the total volume of 25 µL. The amplification protocol was as follows: initial denaturation at 95 °C for 2 min, 32 cycles of incubation at 95 °C for 30 s, 48 °C for 30 s, 72 °C for 1 min, and one final extension at 72 °C for 5 min.
Morphology of Transgenic A. thaliana
Four-week old transgenic A. thaliana transformed with pMDC150_35S and pMDC160_yefM Spn were induced using 100 µM 17-β-estradiol as described by [16]. The induced, non-induced and wild-type plants were observed each day from Day 1 after induction until Day 9. A total of 68 T 2 transgenic plants from 3 different T 1 lines were used in this study. For the transgenic hybrid A. thaliana harboring yefM Spn and yoeB Spn -GFP, the same amount of 17-β-estradiol was applied, and the plants were allowed to grow until reaching full maturity (where all siliques were fully formed) before recording phenotypic measurements. Phenotypic measurements of induced, non-induced and wild-type plants (n = 20 for each group) included length of rosette leaves, height (measured as the length from the soil to the top of each plant), number of inflorescence stems formed in each plant, number of branches bearing siliques, measurement of dry weight, length of siliques and total number of siliques harvested per plant.
Statistical Analysis
Data were analyzed using ANOVA with SPSS for Windows, Version 16.0 (SPSS Inc., Chicago, IL, USA). A significant difference from the control value(s) was determined at the p < 0.05 level. All reported data represent the mean ± SD of at least three independent experiments.
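As an aside, the same style of comparison can be reproduced outside SPSS; the sketch below uses SciPy's one-way ANOVA on made-up replicate values. The group arrays and the measured quantity are placeholders, not data from this study:

```python
# One-way ANOVA across the three plant groups, mirroring the SPSS analysis above.
import numpy as np
from scipy import stats

wild_type   = np.array([4.1, 4.3, 3.9])   # hypothetical replicate measurements
non_induced = np.array([4.0, 4.2, 4.1])
induced     = np.array([2.1, 2.4, 2.0])

f_stat, p_value = stats.f_oneway(wild_type, non_induced, induced)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Report each group as mean ± SD, as in the text.
for name, values in [("wild-type", wild_type), ("non-induced", non_induced), ("induced", induced)]:
    print(f"{name}: {values.mean():.2f} ± {values.std(ddof=1):.2f}")
```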
Cross-Pollination and Selection of Hybrid T 2 Seeds
The flowers of transgenic A. thaliana plants from T 1 generations harboring pMDC150_35S/pMDC160_yefM Spn and pMDC150_35S/pMDC221_yoeB Spn -GFP were used as the ovule donor and pollen donor, respectively. Unopened flower buds were sliced open lengthwise and emasculated with sterilized forceps. Mature pollen from the donor plant was transferred onto the stigmas of emasculated plants with a brush. The cross-pollinated flowers were marked and wrapped with small plastic bags to prevent additional pollination from other pollen sources. The plants were grown under the same conditions as described above until the seeds were ready to be collected. The seeds were stratified and selected on hygromycin, kanamycin and Basta, as described above. The surviving seedlings were then transplanted and grown under non-selective conditions until they reached four weeks old, before further analysis. Eleven lines of hybrid plants were produced from the cross-pollination.
Southern Blot Analysis
Total genomic DNA (20 µg) from both wild-type and hybrid A. thaliana was digested with EcoRI for 48 h and separated on a 0.7% (w/v) agarose gel. Digested DNA was transferred to a small strip of a positively-charged nylon membrane (Roche Diagnostics, Indianapolis, IN, USA) and hybridized with probes derived from the yoeB Spn -GFP (732 bp) and yefM Spn (255 bp) PCR products. All blotting procedures and immunological detection were carried out according to the DIG DNA Labeling and Detection Kit application manual (Roche Diagnostics).
qRT-PCR Analyses of yefM Spn and yoeB Spn -GFP Expression in Transgenic Hybrid Plants
Total RNA was extracted from Arabidopsis rosette leaves each day from Days 1-7 after induction with 100 µM 17-β-estradiol (as described above) using the RNeasy Plant Mini Kit (Qiagen, Düsseldorf, Germany) according to the manufacturer's protocol. To remove traces of DNA contamination, 1 µg of the isolated RNA was treated with DNase I using the QuantiTect® Reverse Transcription Kit (Qiagen, Germany). cDNA was then synthesized from 1 µg of the treated RNA in two steps using the QuantiTect® Reverse Transcription Kit (Qiagen, Germany) under the following conditions: 42°C for 15 min followed by inactivation at 95°C for 3 min. Master Mix (20 µL) was prepared according to the manufacturer's protocol. qRT-PCR was performed in a final volume of 20 µL, which consisted of 0.5 µM each of the forward and reverse primers, 25 ng of cDNA as the template and 1× SYBR Green Master Mix (Applied Biosystems, Foster City, CA, USA), using a QuantStudio™ 12K Flex Real-Time PCR System (Qiagen, Düsseldorf, Germany). After PCR, the data were quantified using the comparative Ct method (2^−ΔΔCt) [26]. The expression of yefM Spn was determined using the primers q-yefM-F: 5′-AGCCTTTGACGGTGGTCAATAA-3′ and q-yefM-R: 5′-AGCACGGACTTGAGCCATTC-3′, whereas the expression of yoeB Spn -GFP was measured using the primers q-yoeB-F: 5′-GGACGACGGGAACTACAAGA-3′ and q-yoeB-R: 5′-CGGCCATGATGTATACGTTG-3′. The expression level from the Day 1 sample was used as the calibrator (value of 1.0). Each gene was assayed using three biological replicates. The A. thaliana actin gene was amplified using the q-Actin-F primer: 5′-CCAGTGGTCGTACAACCGGTAT-3′ and q-Actin-R primer: 5′-ACCCTCGTAGATTGGCACAGT-3′ and was used as the reference to normalize gene expression across the samples. | 2016-04-23T08:45:58.166Z | 2016-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "6e282ebe7d4cd2a21b00edd8f242eb5bbd078dff",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/17/4/321/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e282ebe7d4cd2a21b00edd8f242eb5bbd078dff",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
86694566 | pes2o/s2orc | v3-fos-license | Main and auxiliary control strategies of DFIG under asymmetrical grid voltage dips
When the grid voltage dips asymmetrically, the stator current of a Doubly Fed Induction Generator (DFIG) will contain not only transient DC components but also negative-sequence components; when the motor impedance is small, the negative-sequence component is larger, which further deteriorates the grid voltage. In this paper, the DFIG is taken as the research object. When the grid is unbalanced, a main and auxiliary control strategy is adopted for the rotor-side converter, which can control the rotor current quickly and effectively and improve the uninterrupted operation of the entire wind turbine. MATLAB simulation software was used to simulate a 1.5 MW wind turbine model, and tests were carried out on a 1.5 MW wind turbine test bench. The simulation and test results verify the feasibility and effectiveness of the control strategy.
Introduction
With the expansion of installed wind power capacity, the impact of grid-connected wind power on the grid is becoming more and more serious. It is therefore necessary to improve the ability of wind turbines to respond to grid faults. When the grid voltage dips asymmetrically, the transient process also imposes strong current and voltage shocks on the stator and rotor of the DFIG, which affect the normal operation of the generator through increased losses and heating, torque ripple, fatigue loading of the gearbox and mechanical drive shaft, and reactive power pulsation caused by the torque ripple [1][2][3][4]. If corresponding measures are not taken, the power grid will deteriorate further. According to the technical regulations for wind farms connected to the grid [5], wind turbines are required to ride through symmetrical grid faults while maintaining good waveform quality.
At present, there are many studies on the control of DFIGs under asymmetric grid voltage. Reference [6] proposes realizing control under grid voltage imbalance with positive- and negative-sequence double dq-axis current control schemes in the forward and reverse synchronous coordinate systems; however, the dynamic response of the scheme is delayed and the calculation process is complicated. Reference [7] optimizes the positive- and negative-sequence calculation process to accelerate the response. In [8], transient and negative-sequence currents are injected at the rotor terminals when the grid is in an asymmetric dip fault, but the rotor surge current also increases during fault recovery. Reference [9] proposed a transient flux-linkage tracking control scheme, which effectively reduces the rotor current impact but causes the wind turbine to absorb reactive power from the grid side. The rotor voltage compensation control strategy for the rotor-side converter proposed in [10] can reduce the rotor current surge under asymmetric grid faults and enhance the low-voltage ride-through capability of the DFIG, but this strategy has not been effectively applied in practice.
In this paper, the dynamic modeling and control strategy of the DFIG under asymmetrical grid voltage dips are described. The rotor-side converter adopts the main and auxiliary control strategy, which can control the rotor current quickly and effectively and improve the ability of the wind turbine to operate without interruption. Finally, the feasibility of the improved strategy is verified by simulation and experiment.
The mathematical model of DFIG under unbalanced conditions
A three-phase symmetric DFIG system with an isolated neutral point can be considered to have no zero-sequence component [11]. Therefore, under unbalanced grid conditions [12][13], only the positive- and negative-sequence components of the system's electromagnetic quantities need to be considered. The vector-form stator and rotor voltage and flux-linkage equations of the DFIG in the stationary coordinate system, together with their expression in terms of positive- and negative-sequence components in the positive and negative synchronous rotating coordinate systems following [2], form the basis of the analysis below.
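The source equations are not reproduced in this text. For reference, the standard stationary-frame DFIG voltage and flux-linkage equations (motor convention, rotor quantities referred to the stator) take the form below; this is a reconstruction from standard machine theory, not the paper's own expressions, and the symbols follow the usual textbook notation:

```latex
% Standard DFIG equations in the stationary (alpha-beta) frame; reconstructed
% from textbook machine theory, not taken from the source.
\[
\begin{aligned}
\mathbf{u}_s &= R_s \mathbf{i}_s + \frac{d\boldsymbol{\psi}_s}{dt}, \\
\mathbf{u}_r &= R_r \mathbf{i}_r + \frac{d\boldsymbol{\psi}_r}{dt} - j\omega_r \boldsymbol{\psi}_r, \\
\boldsymbol{\psi}_s &= L_s \mathbf{i}_s + L_m \mathbf{i}_r, \\
\boldsymbol{\psi}_r &= L_m \mathbf{i}_s + L_r \mathbf{i}_r .
\end{aligned}
\]
```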
Main and auxiliary control strategy
It can be seen from the above analysis that when the grid is unbalanced, double-frequency fluctuations appear on the rotor side of the DFIG, and a conventional PI-regulator-based DFIG controller cannot adequately regulate the AC disturbance caused by the negative-sequence voltage. This paper therefore proposes a new control strategy that adds negative-sequence regulation to the traditional PI regulator. The main and auxiliary control adds an auxiliary compensator for rejecting the negative-sequence current on top of the traditional vector control designed for normal operating conditions: the main controller in the forward (positive) synchronous coordinate system does not need to decompose and extract the positive- and negative-sequence currents, while the auxiliary controller in the reverse (negative) synchronous rotating coordinate system provides the compensation. The auxiliary controller extracts the negative-sequence current component and controls it separately, making up for the otherwise insufficient regulation of the system's negative-sequence current.
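To make the idea concrete, a minimal sketch of the main-plus-auxiliary current loop is given below. The gains, time step, function names and the sequence-extraction step are placeholders of ours and are not taken from the paper; the sketch only illustrates combining a conventional PI loop on the positive-sequence reference with an auxiliary PI loop that drives the extracted negative-sequence current to zero:

```python
# Conceptual sketch of main + auxiliary rotor-current control (scalar form,
# per axis); all numeric values and names are illustrative placeholders.
class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

main_ctrl = PI(kp=1.2, ki=150.0, dt=1e-4)   # main loop, positive synchronous frame
aux_ctrl = PI(kp=0.8, ki=100.0, dt=1e-4)    # auxiliary loop, negative synchronous frame

def rotor_voltage_ref(i_ref_pos, i_meas_pos, i_neg_extracted):
    """Combine the main and auxiliary controller outputs into one voltage reference."""
    v_main = main_ctrl.step(i_ref_pos - i_meas_pos)  # track the positive-sequence reference
    v_aux = aux_ctrl.step(0.0 - i_neg_extracted)     # drive the negative-sequence current to zero
    return v_main + v_aux
```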
In the positive synchronous rotating coordinate system and the negative synchronous rotating coordinate system, respectively, stator voltage orientation is adopted, i.e., the stator voltage space vector is aligned with the d-axis of each frame.
System experiment
Based on the control strategy described above, a simulation model was built for control of the system under unbalanced voltage dips; the system simulation control model is shown in Figure 3. Using the system parameters, Figure 4 shows the simulated rotor current output waveform under the conventional control strategy when the grid experiences an asymmetrical 20% dip at 0.5 s. The waveform contains a large negative-sequence current component, which causes overcurrent on the rotor side; as the grid voltage imbalance increases, the negative-sequence current grows faster. Figure 5 shows the rotor current output waveform under the main and auxiliary control strategy described in this paper; the negative-sequence current is eliminated by the auxiliary negative-sequence control. To further verify the effectiveness of the control strategy, a test was carried out on a 1.5 MW converter test bed. At a motor speed of 1700 rpm, the grid b-c phases were subjected to a 20% unbalanced dip. As shown in Figure 6, under the conventional control strategy the peak-to-peak value of the rotor current reaches 2200 A; as shown in Figure 7, under the main and auxiliary control strategy it is 1340 A. Compared with the conventional control strategy, the rotor current is thus reduced by 860 A, improving system performance.
Conclusions
In this paper, the main and auxiliary control adds an auxiliary compensator for rejecting the negative-sequence current on top of the traditional vector control designed for normal operating conditions; it compensates for the otherwise insufficient negative-sequence current control and realizes independent control of the sequence components. The double closed-loop PI current control of the model regulates the active and reactive currents separately, achieving decoupling of active and reactive currents and better dynamic and steady-state characteristics. | 2019-03-28T13:14:44.825Z | 2019-03-02T00:00:00.000 | {
"year": 2019,
"sha1": "17cf8cf0b7b4218d18c2291581af3be572726658",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/227/3/032040",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3adc4ccb1c53bebbd5e19bedb5c7bd8c3aaece0e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
333769 | pes2o/s2orc | v3-fos-license | WISP1/CCN4: A Potential Target for Inhibiting Prostate Cancer Growth and Spread to Bone
Prostate cancer (PC) is a leading cause of death in men; however, the factors that regulate its progression and eventual metastasis to bone remain unclear. Here we show that WISP1/CCN4 expression in prostate cancer tissues was up-regulated in early stages of the disease and, further, that it correlated with increased circulating levels of WISP1 in the sera of patients at early stages of the disease. WISP1 was also elevated in the TRAMP mouse prostate cancer model in the hypoplastic diseased tissue that develops prior to advanced carcinoma formation. When the ability of anti-WISP1 antibodies to reduce the spread of PC3-Luc cells to distant sites was tested, we found that twice-weekly injections of anti-WISP1 antibodies reduced the number and overall size of distant tumors that developed after intracardiac (IC) injection of PC3-Luc cells in mice. The ability of antibodies against WISP1 to inhibit growth of PC3-Luc cancer cells in mice was also evaluated; twice-weekly injections of anti-WISP1 antibodies reduced local tumor growth when examined in xenografts. To better understand the mechanism of action, we examined the migration of PC3-Luc cells through membranes with or without a Matrigel™ barrier and found that the cells were attracted to WISP1 and that this attraction was inhibited by treatment with anti-WISP1 antibodies. We also show that WISP1 is expressed at the bone-tumor interface and in the stroma of early-grade cancers, suggesting that WISP1 expression is well placed to play roles in both fostering growth of the cancer and its spread to bone. In summary, the up-regulation of WISP1 in the early stages of cancer development, coupled with the finding that blocking WISP1 inhibits the spread and growth of prostate cancer cells, makes it both a potential target and an accessible diagnostic marker for prostate cancer.
Introduction
Being the second leading cause of cancer death in men of all races, prostate cancer is a major health concern for men [1,2]. It has been proposed that most elderly men harbor traces of prostate cancer, and yet the molecular underpinnings of how and why the cancer progresses are still elusive [3]. Like many other dangerous cancers, prostate cancer cells have a very high incidence of migrating from the primary tumor to distant sites, where they are a more direct cause of morbidity and mortality [4]. A frequent site for the metastasis of prostate cancer is bone; however, when the cancer progresses to this stage, it is usually incurable [5,6,7]. Therefore, there is a critical need to 1) understand what factors contribute to disease progression in the prostate, 2) understand how and why prostate cancers "home" to bone and, further, 3) devise new ways to prevent this complex and devastating process.
The metastasis of prostate cancer can be an inefficient process and only a fraction of prostate cancer patients develop cancer that metastasizes to distant sites [3]. By analyzing the early events that take place during prostate cancer progression it is feasible that new diagnostic procedures could be developed that predict the progression and future severity of the cancer and optimize the timing and nature of therapeutic interventions. New information about candidate proteins involved in this process could, also, potentially be used to develop new therapies to reduce the spread and establishment of the disease at distant sites such as bone.
WISP1 (wnt induced secreted protein-1) is a member of the CCN family, which is named for its founding members Cyr61/CCN1, CTGF/CCN2 and Nov/CCN3. There are currently six members of the family, which also include WISP1/CCN4, WISP2/CCN5 and WISP3/CCN6 [8]. WISP1 was first identified as a gene expressed in the metastatic melanoma line K-1735, where it was called Elm1 (referring to its expression in lowly metastatic cells) [9]. Around the same time that Elm1 was discovered, WISP1 was identified in a separate laboratory, where it was found to be up-regulated in wnt1-transformed mammary epithelial cells and in various colon cancer lines, as well as being expressed in human colon cancer tissue [10]. Subsequently, WISP1 was shown to confer oncogenic features on rat kidney cells (NRK-49F), including accelerated growth, enhanced saturation density and increased ability to form tumors in mice [11]. Since its original identification, WISP1 has been found in a variety of cancers, including esophageal squamous cell carcinoma [12], chondrosarcoma [13], breast carcinoma [14,15], neurofibromatosis type I [16], colorectal carcinoma [17], Lewis lung carcinoma [18], invasive cholangiocarcinoma, scirrhous gastric carcinoma [19] and endometrial endometrioid adenocarcinoma [20]. Interestingly, WISP1 expression in many of these cancers is localized to the stromal tissues surrounding the cancerous cells [15], suggesting that it could play a role in the microenvironment that supports the growth and/or eventual spread of the primary tumor.
In non-pathological conditions, WISP1 is found in several embryonic and adult tissues, notably including new sites of bone formation [21], where it appears to control osteogenesis [22]. Considering the fact that prostate cancers have a high tropism to bone and, further, that this process can be enhanced by increasing bone turnover [23], we hypothesized that WISP1 may be involved in regulating prostate cancer metastasis. This study was undertaken to first determine if and where WISP1 is expressed in primary prostate tumors and then to determine the role of WISP1 in the growth and homing of prostate cancer cells to skeletal tissue. Our findings showed that WISP1 is found in early stages of prostate cancer, either in tissue biopsies or in sera from afflicted patients, as well as in the hypoplastic pre-carcinoma tissue from a mouse model of prostate cancer. Furthermore, we showed that inhibition of WISP1 function using neutralizing antibodies reduced the growth of a prostate cancer cell line's xenograft tumor as well as its homing to bone. Taken together, our study points to WISP1 as a novel participant in prostate cancer growth and bone metastasis.
Ethics Statement
Human Subjects: Core biopsies representing different grades of prostate cancer (I-IV), including control tissues, were purchased from US Biomax™ (Cat#PR801), and their use was approved by The National Institutes of Health Office of Human Subjects Research, Bethesda, MD (Exemption #11583). Approval for the use of human serum was obtained from the Johns Hopkins University Medicine (JHM) Institutional Review Board (IRB), Baltimore, MD; the approved IRB protocol number for the serum assays is JHM IRB-X No: 04-07-27-03e. The pathological/diagnostic specimens from the commercial suppliers are from publicly available sources, the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through identifiers linked to the subjects, and informed consent was obtained from all serum donors prior to use.
To clarify the nature of the samples used for human subjects: written consent was obtained from donors as part of the protocols for collecting serum samples (ProMedDx, Inc.) and tissue biopsy cores (US Biomax Inc.). These protocols and their consent forms were approved by the commercial sources' own Institutional Review Boards. Further information about the exact contents of the written approval forms can be found at www.promeddx.com for serum samples and at www.biomax.us for tissue arrays. Because the commercially available samples are de-identified and de-linked from their donors, their use does not fit the NIH's definition of Human Subjects Research; therefore the protocols are exempted (Exemption #1158 for the NIH and No: 04-07-27-03e for JHU).
Experiments Using Animals
All procedures using animals were carried out at the National Institutes of Health, and the institution that granted approval for the animal procedures described in the paper is the Animal Care and Use Committee (ACUC). The approval number for the experiments performed in the current study is NIH ACUC #12-645.
Polyclonal Antibody Production
A human WISP1 peptide with the sequence RDTGAFDAV-GEVEAWHRN (amino acids 198-216, accession number NP 003873, see Figure S1A) was synthesized, conjugated through the cysteine to activated keyhole limpet hemocyanin and injected into a rabbit (LF-185) to produce polyclonal antibodies in an AAALAC-approved facility (Covance Immunology Services, Denver, PA, USA). The titre of the resultant antiserum was tested using a direct ELISA with either the LF-185 peptide or purified human WISP1 as control (PeproTec, Rocky Hill, NJ, USA) bound to the microtiter plate. The specificity of LF-185 was tested by Western blot using three other available members of the CCN family, Cyr61/CCN1, CTGF/CCN2 and Nov/CCN3 (PeproTec, Rocky Hill, NJ, USA), and showed immunoreactivity only towards WISP1 (Figure S1B). A second rabbit antibody to WISP1 was generated against a sequence at the C-terminus, which is highly conserved (>95%) between mouse and human WISP1 (amino acids 346-367, NP 003873; see Figure S1A), using the human peptide sequence CRNPNDIFADLESYPDFSEIN conjugated through the cysteine to activated KLH (LF-187). This latter antibody also showed high reactivity to WISP1 and not to the other CCN family members tested (S1B).
Polyclonal Antibody Purification
Columns for affinity purification of LF-185 and LF-187 were prepared with their corresponding peptides using the SulfoLink Immobilization Kit (Thermo Scientific, Rockford, IL, USA), and the antibodies were affinity purified as follows. First, the peptide-linked columns were equilibrated according to the manufacturer's protocol; then 2.0 ml of each serum was passed through the column, which was washed with PBS, and antibodies were eluted with a 4 M guanidine solution in PBS. The eluted antibodies were immediately dialyzed against an excess volume of PBS overnight at 4°C, and the recovery of their immunoreactivity was verified by ELISA. Affinity-purified antibodies were stored at −80°C prior to use. For experimental controls, IgG was purified from a rabbit challenged only with adjuvant using the Protein A IgG Purification Kit (Thermo Scientific, Rockford, IL, USA) with a Protein A affinity column, using the same binding, wash, elution and dialysis conditions used for purifying LF-185 and LF-187.
Immunohistochemistry and TRAP Staining
Slides were stained using the manufacturer's recommendations in a manner identical to that used for the mouse sections described below. Briefly, tissue sections were deparaffinized, endogenous peroxidase activity was destroyed with methanolic H2O2, and sections were rehydrated and incubated with a rabbit polyclonal anti-WISP1 IgG (sc-25441, Santa Cruz Biotechnology, Santa Cruz, CA, USA) diluted to a concentration of 4 µg/mL (1:50) in PBS plus 10% goat serum overnight at 4°C. A normal rabbit IgG isotype was used as a non-immunoreactive control (AB-105-C, R&D Systems, Minneapolis, MN, USA). For some stainings (S5), primary antibodies to human WISP1 (LF-185) or preimmune serum were first diluted 1:500 and incubated at 4°C overnight before detection. Sections were counterstained with methyl green, and areas of positive immunostaining were evaluated by at least two independent investigators. For TRAP staining, the TRAP/ALP stain kit (#294-67001, Wako, Osaka, Japan) was used following the manufacturer's recommendations.
Serum Analysis
Serum from patients with defined grades of prostate cancer or from normal controls was resolved by SDS-PAGE, transferred to nitrocellulose and probed with a polyclonal antibody against the conserved carboxy-terminus of human WISP1 (LF-187) by standard Western blotting procedures. The primary antibody, LF-187, was added at a dilution of 1:2000, followed after incubation and washing by an HRP-labeled secondary antibody used at a dilution of 1:10,000. Following removal of the secondary antibody solution, the membrane was washed and exposed to the chemiluminescent enzyme substrate, and signals were captured, digitized and analyzed using a Kodak GEL Logic 2200 Imaging System (Carestream Health Inc., Rochester, NY, USA). For each blot, the net intensity of the band corresponding to full-length WISP1 was normalized to the average signal from duplicate lanes containing 20 ng of recombinant WISP1 (PeproTec, Rocky Hill, NJ, USA).
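For concreteness, the per-blot normalization described above amounts to dividing each sample's band intensity by the mean of the duplicate 20 ng recombinant WISP1 standard lanes run on the same blot. The sketch below illustrates this with made-up intensity values; the numbers and sample names are placeholders, not data from the study:

```python
# Normalize WISP1 band intensities to the mean of duplicate 20 ng rWISP1 standards.
standard_lanes = [10450.0, 10890.0]                      # duplicate 20 ng rWISP1 lanes
samples = {"subject_07": 15230.0, "subject_08": 6410.0}  # hypothetical net band intensities

standard_mean = sum(standard_lanes) / len(standard_lanes)
normalized = {name: intensity / standard_mean for name, intensity in samples.items()}
print(normalized)  # values > 1 indicate more signal than the 20 ng standard
```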
Animals
Immunocompromised mice (athymic nude-Foxn1nu) were purchased from Harlan and were 8 weeks of age at the time of the study. A mouse model of prostate cancer (transgenic adenocarcinoma mouse prostate, or TRAMP) that over-expresses the simian virus 40 large T antigen (SV-40/Tag) gene from the prostate-specific probasin (PB) promoter was previously generated [24] and was used to examine WISP1 expression in affected prostate tissue. The genetic background of the TRAMP mice was NOD, obtained by crossing TRAMP B6 females with NOD/LtJ males to generate an F1 strain hemizygous for the PB-Tag. The resulting TRAMP-NOD mice were euthanized by pentobarbital overdose at 8 and 16 weeks of age, and the prostate and seminal vesicles were harvested for histological analysis. Tissues were fixed in buffered formalin overnight and then transferred to 70% alcohol. Fixed tissues were embedded in paraffin and sectioned to a thickness of 4 µm. To determine the relative bone mineral density of the anti-WISP1-treated mice we used a Lunar PIXImus densitometer (GE Medical Systems) specifically designed for rodents.
Colonization of PC3-Luc Cells at Distant Sites by Intracardiac (IC) Injection
Eight-week-old male athymic nude-NIH/Bg-nu/nu-XID mice were anesthetized with isoflurane and shaved at the injection site, which was then swabbed with iodine and 70% alcohol. A TB syringe with a 27-gauge needle attached was loaded with a suspension of PC3-Luc cells (5 × 10^5 cells/150 µl of PBS), and the needle was inserted vertically in the fourth intercostal space. When a flash of blood was noticed in the hub of the needle, the cells were injected slowly until the contents of the syringe were empty, and the needle was pulled out slowly. To assure ourselves that the cells were correctly injected into the left ventricle and disseminated throughout the mouse and not, for example, trapped in the lungs (as would be the case for an injection into the right ventricle), immediately after IC injection the mice were injected intravenously with 100 µl of a 40 mg/ml solution of firefly D-luciferin (Biosynth) and imaged using a Lumina-XR (Caliper Life Sciences, Hopkinton, MA, USA). Mice that showed distribution of the PC3-Luc cells throughout the entire body, as judged by luminescence, were then used for further treatments. The mice were divided into three treatment groups (n = 6/group) and injected with cells on Day 0. Antibodies were administered at a dose of 100 µg via IP injection twice per week, starting on Day 0 of tumor inoculation and continuing throughout the experiment. The three groups consisted of: 1) mice injected with 100 µl of affinity-purified antibody, LF-185, 2) those given control IgG, and 3) those given 1X PBS alone. In vivo imaging was performed weekly to track the growth and establishment of tumor cells using a Lumina-XR (Caliper Life Sciences, Hopkinton, MA, USA). Prior to imaging, mice were injected intravenously with 100 µl of a 40 mg/ml solution of firefly D-luciferin (Biosynth) in PBS. Mice were treated for a total of 4 weeks, and the area and counts of light exposure were quantified as described above. In the fourth week, after the final luciferase analyses, the bones exhibiting colonization by the tumor cells were harvested and analyzed using an MX-20 Faxitron radiography system with exposure at 30 kV for 40 seconds using PPL film from Kodak, with subsequent embedding and processing for histology and immunochemistry.
Cell Culture and RT-PCR
PC3-Luc cells [25] are a human prostate cancer cell line constitutively expressing luciferase (a gift from Dr. Russell Taichman, University of Michigan School of Dentistry, USA). Cells were cultured in RPMI Medium 1640 (Invitrogen, Grand Island, NY, USA) containing 10% FBS (Atlanta Biologicals, Lawrenceville, GA, USA) and 1% penicillin/streptomycin solution (Gibco GlutaMAX-I, Grand Island, NY, USA) at 37°C in an atmosphere of 5% CO2/air. Cells were passaged every 2 days at 85% confluence. RT-PCR was performed on mRNA isolated from cultured PC3-Luc cells and amplified using oligonucleotide sets corresponding to human WISP1, with sequences and conditions described previously [22].
Xenograft
Human prostate cancer PC3-Luc cells were delivered to 8-week-old athymic nude-Foxn1nu mice by subcutaneous inoculation of a tumor cell suspension (5 × 10^6 cells/200 µl of PBS) on day 0. In this experiment cells were counted, resuspended in 200 µl of cold PBS and kept in sterile tubes on wet ice during transport to the animal facility. The optimal number of cells for this experiment was determined by first measuring overall PC3-Luc growth using 2 × 10^6, 5 × 10^6 or 1 × 10^7 cells/inoculation at 4 different sites/mouse. For the inoculation, cells were drawn into a TB syringe with a 25G needle attached and injected, bevel side up, into the dorsal side previously wiped with alcohol. The size of the tumor was then measured by caliper or by relative luciferase activity using the Lumina XR as described above. Our pilot study showed that the lowest dose of 2 × 10^6 PC3-Luc cells grew slowly, while the highest dose of 1 × 10^7 grew very rapidly, both of which we judged to be suboptimal for the 4-week time course planned for our antibody treatments. The middle dose of 5 × 10^6 was optimal and was used for subsequent experiments. Three treatment groups comprised of 6 challenged nude mice per group were injected intraperitoneally (IP) twice a week with either 1) 100 µg of affinity-purified polyclonal antibody directed against WISP1 (LF-185), 2) 100 µg of similarly purified control IgG, or 3) PBS alone, in a volume of 100 µl. Purified antibodies were freshly prepared prior to injection into the test mice. Through the course of the experiments tumors were measured with a caliper to estimate their relative growth rate, and in the fourth week the tumors were harvested and weighed. In one experiment mice were followed for 6 weeks (S4) with similar results. All experiments using mice were repeated at least twice. At the end of the experiment some tumors were further analyzed by histology and analyzed for WISP1 expression as described in the previous section, "Immunohistochemistry and TRAP Staining".
In vitro Migration
In vitro migration of the PC3-Luc cells was tested using BD Falcon FluoroBlok Cell Culture Inserts with 8-micron holes (BD Biosciences, Bedford, MA, USA). PC3-Luc cells in logarithmic growth phase were detached from 100 mm plates with trypsin-EDTA, and 2 × 10^4 cells were pretreated with 100 µg/ml of LF-187, IgG antibody or PBS for 1 hour at 37°C in serum-free RPMI 1640 and then added onto the FluoroBlok Cell Culture Inserts. RPMI 1640 containing 5% FBS or 200 ng/ml WISP1 (PeproTec, Rocky Hill, NJ, USA) was added to the lower chamber, and the entire system was incubated at 37°C for 24 hours in 5% CO2. After incubation and fixation, cells were stained with DAPI (cat# P36935, Molecular Probes, Life Technologies, Grand Island, NY, USA). Migrated cells were examined microscopically on the lower side of the membrane by detecting migrated DAPI-labeled nuclei. The number of cells in 3 random whole fields at 100× magnification was counted with Image-Pro Plus (MediaCybernetics, Rockville, MD, USA), and the average of three wells was determined.
Statistical Analysis
An unpaired Student's t-test was used to compare control vs. experimental samples using GraphPad Prism software (Prism 5, GraphPad Software, Inc., La Jolla, CA, USA) for cell migration, tumor growth and luciferase assays. P values <0.05 were considered statistically significant. The distribution of measured parameters for serum WISP1 values was assessed by the D'Agostino & Pearson omnibus normality test. Comparisons between two groups were performed using an unpaired Student's t-test, while comparisons across three or more groups utilized a one-way ANOVA test. The association of WISP1 protein levels with PSA was assessed using a Spearman correlation. P values <0.05 were considered statistically significant.
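As an illustration of the last analysis, the Spearman correlation between serum WISP1 and PSA can be computed with SciPy as sketched below; the paired values are invented placeholders chosen only to show an inverse association of the kind reported in the Results, and are not patient data:

```python
# Spearman correlation between PSA and normalized serum WISP1 (placeholder values).
from scipy import stats

psa = [0.8, 2.1, 4.5, 6.3, 9.0, 15.2]     # hypothetical PSA values (ng/mL)
wisp1 = [1.9, 1.6, 1.4, 1.1, 0.9, 0.6]    # hypothetical normalized WISP1 band intensities

rho, p_value = stats.spearmanr(psa, wisp1)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")  # negative rho -> inverse association
```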
WISP1 Protein Expression in Prostate Cancer Tissue and in Serum from Affected Patients
Antisera were raised in rabbits against a poorly conserved and a highly conserved domain of WISP1 (LF-185 and LF-187, respectively; see S1) and found to be highly reactive to WISP1 (S1) and unable to cross-react with three other members of the CCN family, including Cyr61/CCN1, CTGF/CCN2 or Nov/CCN3 (S1). Although not directly tested by Western blot, human WISP2/CCN5 and WISP3/CCN6 do not contain any primary protein sequences that could reasonably be considered homologous to the highly conserved human peptide used to produce LF-185. The location and relative level of expression of WISP1 in normal human prostate and prostate cancers of various degrees of severity were assessed with LF-185 using tissue core biopsies commercially obtained from Biomax™. In this experiment, core biopsies from low- and high-grade cancers (I was lowest, IV was highest) were stained for immunohistochemistry using the WISP1 antisera and compared with staining using IgG as a negative control. Evaluation of this staining by at least two independent observers revealed that WISP1 is expressed to a greater extent in prostate cancer compared to normal controls (Figure 1A) and that it was primarily located in the stromal tissue surrounding the tumor and, to some extent, in the epithelial tissue. In addition, WISP1 staining was higher in samples from patients with the lowest grade of cancer (I) compared to those with the higher grades (IV) of cancer.
Our immunohistochemistry data showing that WISP1 expression was strongest in lower grade biopsies led us to speculate that WISP1 in such cancers could escape into serum and be detected using immunoblotting techniques. To test this hypothesis, serum samples were examined by western blotting using antibodies to WISP1 (Figure 1B). Our analysis showed that when all the grades were grouped together, WISP1 was significantly higher in serum from patients with prostate cancer (PCA) compared to normal controls (NL) (Figure 1C). These samples were further subdivided using the tumor, node, metastasis (TNM) scoring system, which showed that the relative levels of WISP1 were significantly higher in patients with grades pT1, pT2 and pT3 compared to either NL or to patients that were lymph node and seminal vesicle positive (LN+/SV+) (Figure 1D). As predicted, when the samples were re-evaluated using the Gleason scoring system, the relative levels of WISP1 were significantly higher in grades 5-7 compared to the more advanced grades 8-9 (Figure 1E). Interestingly, when compared to PSA levels there was a significant inverse correlation, with WISP1 being highest in the samples that had the lowest PSA levels (Figure 1F).
WISP1 Expression in the TRAMP Model of Prostate Cancer
In order to confirm our notion that WISP1 was up-regulated in prostate cancer, we used a previously generated mouse model known as TRAMP (transgenic adenocarcinoma mouse prostate). This strain of transgenic mice begins to acquire hypoplastic prostate tissue by 8 weeks of age, and by 16 weeks the affected tissues progress to carcinomas with significant tumor burden. Affected prostates were isolated from TRAMP-NOD mice at early (8 weeks of age) and late (16 weeks of age) stages of tumor progression and analyzed for WISP1 expression by immunohistochemistry. In the early stages of the disease, WISP1 was highly expressed in the hypoplastic tissue that initially develops in the model at that age (Figure 2A, bottom panel). In tissues isolated from 16-week-old mice, WISP1 expression was increased even further with the development of hyperplasia (Figure 2B, middle panel) and, further, was expressed broadly in regions with advanced carcinoma (Figure 2B, bottom panel).
Anti-WISP1 Treatment Reduces the Spread and Establishment of PC3-Luc Distant from the Site of Injection
PC3-Luc cells were injected into the left ventricle of immunocompromised mice and allowed to spread and grow for 4 weeks, with twice-weekly injections of PBS, IgG or anti-WISP1. The number of tumors and total tumor burden were then determined by their ability to emit light (luminescence), which showed that anti-WISP1 significantly reduced both of these parameters compared to PBS or IgG treatments (Figure 3A and 3B, respectively). In our hands, the PC3-Luc cell line used in the study had a high tropism for bone, making its way after IC injection to numerous skeletal sites, including the jaw, snout, spine, femur, tibia, ribs, sternum, scapulae and ulnae. However, other sites of PC3-Luc cell homing were also found outside the skeleton and included soft tissues such as the heart, lung and testicle, comprising approximately 10% of all the metastases. A representative picture of the tumors formed after IC injection that illustrates these findings is shown in S2. When the numbers of tumors were counted, we found that the skeletal sites were preferentially decreased by the WISP1 antibody treatment. Specifically, in controls, 90% of the metastases were in hard tissue and 10% in soft tissue. With WISP1 antibody treatments, the number of skeletal sites affected was reduced to 66% of the total metastases, with the soft tissue sites making up 33% of the total metastases. In addition to total tumor number, the total tumor burden was reduced by WISP1 antibody treatments compared to IgG treatments (Figure 3B). This indicated that not only dissemination but also tumor growth was reduced by WISP1 neutralization. To test the possibility that WISP1 could affect tumor growth, we next evaluated growth of PC3-Luc cells using xenografts.
WISP1 Antibody Treatment Reduces the Size of PC3-Luc Xenografts in Immunocompromised Mice
Antibodies against WISP1 were used alongside IgG and PBS controls to test their relative ability to reduce the growth of prostate cancer cells grown under the skin of immunocompromised mice. Animals were injected twice a week with anti-WISP1 and tumor growth was monitored for 4 weeks. Considering that WISP1 is made in bone, we also examined the mice using a DEXA scanner and showed that anti-WISP1-treated mice had no significant changes in percent bone mineral density before and after treatment compared to either PBS or IgG controls (S3). The rate of tumor growth was measured during the course of the experiment using calipers and indicated that the tumors in the mice treated with anti-WISP1 were reduced in size compared to either PBS or IgG controls (S4). When the tumors were removed at the end of the treatments and photographed, it was clear that tumors were smaller in the anti-WISP1-treated mice (Figure 4A) and that they had statistically lower overall weight compared to tumors from mice treated with either PBS or IgG (Figure 4B).
[Figure 1 legend, panels B-F: B. Image of the quantitative western blot assay used to measure serum levels of WISP1 in serum from normal subjects (lanes 1-6) and from subjects with prostate cancer (lanes 7-12). C. Quantified bands from 20 normal subjects and 60 subjects with prostate cancer were compared by t-test. D. Samples were stratified by disease stage using the TNM scoring system and compared by an ANOVA test. NL, normal; pT1, pT2, pT3, lowest to highest severity; LN+/SV+, lymph node positive, seminal vesicle positive. E. The levels of WISP1 were stratified by Gleason scores and compared by ANOVA test; 5 is lowest severity, 9 is greatest severity. F. The association of serum WISP1 levels with PSA was analyzed by Spearman correlation (PSA, x-axis; WISP1, y-axis). Each small circle represents values from a single patient. doi:10.1371/journal.pone.0071709.g001]
Appearance and Expression of WISP1 in PC3-Luc Tumors Invading Bone
The nature of the PC3-Luc tumors formed 4 weeks after IC injection was first assessed by light emission in live mice using a Lumina-XR system, which was done to determine their precise location for further processing. As previously noted [25], this line of PC3-Luc cells has a high tropism to bone, and in particular appears to "home" to sites within the alveolar bone in the jaw just below the teeth (Figure 5A, arrow). To determine the location and expression of WISP1 in the invading tumor and its proximity to the osteoclasts that were actively resorbing the bone tissue in this lytic tumor model, we prepared sections through a tumor in the jaw and stained them for tartrate-resistant acid phosphatase (TRAP), an enzyme enriched in osteoclasts, and for WISP1 by immunohistochemistry. As predicted, the border of the tumor surrounded by bone was rife with osteoclasts, judged by the pattern of TRAP staining (Figure 5B, middle panel). Serial sections subjected to immunohistochemistry using WISP1 antibodies showed that WISP1 was also expressed (compared to IgG controls) at the interface between the alveolar bone and the invading PC3-Luc tumor (Figure 5B, arrow); however, it was notably located in a position near but not coincident with the TRAP-positive osteoclasts (compare middle and right panels, Figure 5B). WISP1 was also found within the invading tumor (Figure 5B). Sections through a control, non-tumor-bearing jaw showed a low level of both WISP1 and TRAP expression in normal bone and in the periodontal ligament surrounding the tooth (not shown).
To determine the source of the WISP1 that is being blocked by the WISP1 antibody treatments, we carried out immunostaining using antibodies to WISP1 and showed that xenografts composed of PC3 tumor cells contained some WISP1, judged by the relative intensity of staining with LF-185 vs the IgG control (S5A). When PC3 cells were grown in vitro and protein was extracted, we also found WISP1 present (not shown). Finally, to determine if WISP1 mRNA is expressed in the PC3 cells, we extracted mRNA and, using oligonucleotides specific for WISP1, detected an amplicon of the correct size (S5B), indicating to us that the WISP1 found in PC3 cells both in vitro and in vivo could come, at least in part, from the PC3 cells themselves. This was further verified using sections of the PC3-Luc cell tumor within the jaw; WISP1 staining was found both in the tumor and in regions outside the tumor at sites of bone remodeling and in the PDL (Figure 5B), near but not within the osteoclasts.
Migration of PC3-Luc Cells was Blocked by Treatment with WISP1 Antibodies
Considering the fact that WISP1 is concentrated at the bone-tumor interface and that the PC3-Luc cells "home" to bone, we wondered whether WISP1 could have chemotactic properties. To test the chemotactic capacity of the PC3-Luc cells, they were placed in a Boyden chamber with varying concentrations of FBS above and below the membrane they traverse (S6). Significant migration of the PC3-Luc cells towards 5% FBS in the bottom chamber from serum-free media in the top chamber was observed, indicating the cells indeed possessed chemotactic abilities. When the experiment was repeated using WISP1 protein as the chemoattractant, the migration of PC3-Luc cells across the membrane was significantly increased compared to a PBS control (Figure 6A). To test the ability of anti-WISP1 to block these chemotactic activities, the tumor cells were pre-incubated with either anti-WISP1, IgG or PBS prior to migration. Anti-WISP1 treatment significantly blocked the migration of PC3-Luc towards media containing 5% FBS (Figure 6B) or WISP1 (Figure 6C) compared to either PBS or IgG controls. Finally, the invasion of PC3-Luc cells across membranes coated with Matrigel™ was tested and showed that anti-WISP1 inhibited PC3-Luc invasion towards the bottom chamber containing WISP1 to a much greater extent than treatment with either IgG or PBS.
Discussion
The goal of our investigation was to determine whether WISP1 could play a role in prostate cancer growth and spread to bone, and then to provide evidence that it could be a novel target for detection and future therapeutics. Using antibodies against WISP1, we found they could block the growth of xenografts and the localization of PC3-Luc prostate cancer cells to bone. One way that anti-WISP1 treatment might inhibit cancer growth and spread could be by controlling cancer cell migration. In this paper we showed that PC3-Luc cell migration towards increased concentrations of fetal bovine serum (FBS) or towards purified WISP1 could be blocked by pre-treating the cancer cells with WISP1 antisera. Exactly how WISP1 controls cell movement is not known, but it is likely to involve interaction with one or more integrins. We recently reported that WISP1 regulates the binding of BMP-2 to bone marrow stromal cells (BMSCs) using a mechanism that depended on the integrin α5 [22,26]. Whether similar interactions are taking place in PC3-Luc cells as they migrate towards bone is not clear and will need to be addressed in future studies. Whatever the mechanism, it is logical to propose that one reason PC3-Luc cells are attracted to bone is the high level of WISP1 found there [26], which could, in turn, lead to what is now referred to as a "fatal attraction" [5].
It is also not yet known which of the several domains of WISP1 are important for PC3-Luc migration, invasion and homing. To date, at least 6 different transcripts of WISP1 have been identified in the NCBI Nucleotide database (http://www.ncbi.nlm.nih.gov/sites/entrez?db=pubmed) that are composed of various combinations of the structural domains referred to as IGF binding protein (IGFBP), von Willebrand factor type C (VWF), thrombospondin 1 (TSP1) and the cysteine-rich terminus (CT). One WISP1 variant, known as WISP1v, lacks the VWF domain and is highly up-regulated in scirrhous gastric carcinoma [19] and, when ectopically expressed in cultured cells, causes them to have a more invasive phenotype. WISP1v is also expressed by human bone marrow stromal cells (hBMSCs), where it responds differently to TGF-β-induced proliferation compared to full-length WISP1 [27]. It will be interesting to determine the precise location of WISP1v and the other WISP1 variants and, then, to elucidate whether they have unique, overlapping, or even competitive functions in controlling prostate cancer cell function.
Immunohistochemistry of the PC3-Luc xenografts, as well as RT-PCR of mRNA extracted from PC3 cells grown in vitro, shows the presence of WISP1 protein and mRNA, respectively. In spite of this finding, we cannot exclude the possibility that WISP1 also comes from the bone, the circulation, or both. Work from our lab and others shows that WISP1 can be extracted in substantial quantities from demineralized bone [26], can be produced by differentiating bone marrow stromal cells [22,27] and, further, is found at sites of new bone formation [21]. In this regard it is important to note that the PC3-Luc cells used in this experiment that migrated and established themselves in bone are lytic, causing the bone to dissolve (Figure 5). Thus, the WISP1 expressed at the bone-tumor interface could be coming from the PC3 cells or from the resorbing bone, made either by the osteoblasts or by the osteoclasts. To further address this question, histological sections of this interface were prepared, magnified and stained for both WISP1 and TRAP, a marker for osteoclasts. Our data showed that WISP1 localizes at the sites of resorption, near but not within the osteoclasts. In summary, since WISP1 is made by PC3 cells and by bone-forming cells, we conclude that the WISP1 could come from the osteoblasts that produce the WISP1 found in bone, from the PC3 tumor cells, which have both detectable mRNA and protein, or from both. Whatever the source, it is clear that blocking WISP1 function inhibits the journey and establishment of PC3 tumor cells in bone. Additional experiments using siRNA in PC3 cells will be needed to further resolve this point.
In addition to its expression at the bone-tumor interface, WISP1 is predominant in the stromal tissue surrounding the primary prostate tumor cells. Considering these distinct localization patterns, we suspect that WISP1 could be a "bi-directional" [6] link in the communication between the prostate cancer and the surrounding microenvironment. In this context, WISP1 could potentially serve to enrich the cancer cell milieu, subsequently facilitating cancer cell activities such as cell migration and cell growth, where the cancer cells themselves also contribute to changes in the microenvironment [28]. Such interaction might then accelerate the "vicious cycle" in cancer metastasis [29], caused by perturbations in the connections between the cancer cells (seed) and the surrounding stroma (soil) [30]. We show that WISP1 levels in the primary prostate cancer stroma and in the serum from patients afflicted with this disease decrease with increasing severity of the cancer. In this regard it can be noted that one hallmark of cancer progression is the induction of proteases that presumably cause the destruction of surrounding tissues, aiding the cancer cells to make their way out of the primary tumor site to distant locations. Such proteases could degrade WISP1 such that its abundance in both the primary tumor and in the serum from afflicted patients is reduced. Further experiments that examine the level and activity of proteases, to see if they preferentially target WISP1, are needed to fully understand how and why the induction of WISP1 in the prostate tumor eventually recedes as the prostate cancer reaches advanced stages.
Several other CCN family members besides WISP1 have also been implicated in cancer. Cyr61/CCN1 has been linked to both skeletal (osteosarcoma) [31] and prostate cancer [32,33]. In prostate cancer, the expression of Cyr61/CCN1 is associated with a lower risk of disease recurrence [34], and its expression is highest in prostate tumor cells that have low levels of the p53 tumor suppressor [32]. Cyr61/CCN1 is also expressed in pancreatic cancer [35] and in human chondrosarcoma cells, where it appears to up-regulate MMP13 expression and cell migration [36]. CTGF/CCN2 is linked to breast cancer metastasis, where it regulates angiogenesis [37] in a manner that could be further influenced by PTHrP. Nov/CCN3 is differentially expressed in human prostate cancer cell lines and tissues [38], where it is specifically localized to epithelial tissue. WISP3/CCN6 expression is linked to the severity of breast cancer and is implicated in regulating the epithelial to mesenchymal transition (EMT) [39]. Taken together, it is possible to imagine that CCN-specific antibody interference could also be used to both diagnose and treat the numerous cancer types where these proteins are expressed. Promising pre-clinical studies showing inhibition of metastasis of breast [37] and pancreatic cancer using antibodies to CTGF/CCN2 [40] further validate this concept.
Many theories abound about the potential role of the EMT in cancer progression. During this process, the ordered alignment and shape of epithelial glandular cells, as well as their characteristic gene expression patterns, "transition" to become more mesenchymal-like, being less adhesive and more dysmorphic. The role of WISP1 in regulating the EMT process accompanying idiopathic pulmonary fibrosis (IPF) has recently been investigated in a mouse model with IPF induced by bleomycin treatment [41]. When the diseased mice were treated with antibodies against WISP1, the EMT and subsequent fibrosis were reduced, causing the mice to live longer than their untreated counterparts [41]. In light of this new finding, it is tempting to speculate that anti-WISP1 treatment could, in a similar fashion, modulate the EMT in prostate cancer. In this case, it is likely that the supporting stroma where WISP1 is expressed will be one important factor in regulating the activities of the transforming prostate cells. In this context, many consider the stroma to be a key target for cancer therapy because of its important roles in regulating the cancer cell microenvironment and ultimately cancer cell fate [28,42].
The accessory molecules that could modulate WISP1's functions in prostate cancer cells are not entirely clear; however, considering what is known about WISP1 in normal tissues, it is possible that the BMP/TGF-β family members are somehow involved. In hBMSCs, WISP1 inhibits TGF-β1-induced Smad2 phosphorylation as well as TGF-β1-induced proliferation of BMSCs [26]. BMP-2-induced Smad1 phosphorylation and subsequent osteogenesis, on the other hand, are enhanced by WISP1 [22]. When BMP-2 action is down-regulated by treatment with the BMP antagonist noggin, PC3 cells show less migration and invasion in vitro and less tumor formation in vivo [43]. In this regard, it is possible that WISP1's positive influence on BMP-2 could be one way it increases migration, invasion and spread of prostate cancer to bone. The positive influence of WISP1 on BMP-2 could therefore be part of the molecular underpinnings of the increased PC3-Luc metastasis in mice that have increased bone turnover induced by intermittent application of PTH [23]. BMP-7 has also been implicated in controlling prostate cancer growth and spread; however, its potential relationship to WISP1 remains to be clarified [44,45]. One other factor that may be connected to WISP1 function is vitamin D3, an agent known to be beneficial to bone and in reducing cancer. The WISP1 promoter has numerous vitamin D3 responsive elements, and pilot work from our lab shows it is down-regulated in hBMSCs treated with 1,25-dihydroxyvitamin D3 (unpublished data). Our future challenges will be to confirm and identify new connections that tie WISP1 function to cancer and bone.
Interestingly, WISP1 expression is itself up-regulated by both TGF-β1 and BMP-2 [46], suggesting a "feed-forward" regulatory loop for the control of these growth factors, which are known to be important for both bone and cancer regulation. It is possible that the coupled interference with TGF-β1 and BMP-2 by anti-WISP1 treatment could in turn alter the composition of the extracellular matrix (ECM), now known to be affected by and influential to prostate cancer cell behavior [47].
PSA (prostate-specific antigen) is a protein produced by the prostate gland that has been widely used to detect prostate cancer; however, its routine and extensive use is now being questioned for several reasons. First, total PSA levels in blood can result in false negatives, where men who have prostate cancer do not have elevated PSA. In addition, there are some non-cancerous conditions that lead to increased PSA, such as prostatitis (inflammation of the prostate) or benign prostatic hyperplasia (BPH), leading to false positives. Compounding this problem is the fact that the actual "normal" levels of PSA are not clearly known and, furthermore, the normal PSA range can vary with age and race. The ultimate harm in current PSA testing, therefore, is that men will either go untreated (in the case of false negatives) or be unnecessarily treated (in the case of false positives), leading to complications with harmful side effects. Indeed, only 25% of men who have a prostate biopsy due to elevated PSA actually have prostate cancer [48]. There is a need, therefore, for improved biomarkers for prostate cancer detection that could, for example, predict different stages of prostate cancer or even discriminate the cancers predicted to progress from those that will remain benign. The use of WISP1, and even some of the other CCN family members, offers new candidates that could be further tested for this purpose.
Prostate cancer is the 2nd leading cause of cancer-related death among men [2]. It is generally thought to occur as an androgen-dependent tumor that can progress to a highly invasive androgen-independent tumor. When the disease is advanced, the tumor continues to proliferate, spreads locally, and metastasizes to lymph nodes and bone. At this point, the disease is incurable. Targeted antibody therapy has proven efficacious in clinical cancer treatment, making this a reasonable approach to consider for the development of new therapies for prostate cancer. During the course of our studies, we generated antibodies towards human WISP1 protein and found it to be highly expressed in the stromal tissue of early stages of prostate cancer in both humans and mice. Our hypothesis was that up-regulation of WISP1 in stroma creates a hospitable niche for tumor cells that supports their growth and invasion, and final establishment in bone. As we predicted, WISP1 antibody treatments reduced both PC3 xenograft growth and cancer spread to bone in mice. Taken together, our new findings provide a promising foundation for future development of diagnostics and therapeutics based on WISP1 detection and neutralization.
Supporting Information
File S1 Specificity of anti-WISP1/CCN4 when probed for Cyr61/CCN1, CTGF/CCN2 and Nov/CCN3. A. Alignment of the human and mouse sequence of WISP1 showing the position and sequence of the peptides used to generate antibodies LF-185 (grey box) and LF-187 (black box). Amino acids that are identical between mouse and human are shown on the line between the human and mouse sequences, + indicates sequences that are similar but not identical between the two species, and gaps are created for best alignment. | 2016-05-04T20:20:58.661Z | 2013-08-14T00:00:00.000 | {
"year": 2013,
"sha1": "8cec7aa8711ac84359b57a44ebbf1d78e7ac876f",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0071709&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8cec7aa8711ac84359b57a44ebbf1d78e7ac876f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
197542662 | pes2o/s2orc | v3-fos-license | Review of glucagon‐like peptide‐1 receptor agonists for the treatment of type 2 diabetes mellitus in patients with chronic kidney disease and their renal effects
Abstract Type 2 diabetes mellitus (T2DM) is the most common cause of chronic kidney disease (CKD), and when it causes CKD it is collectively referred to as diabetic kidney disease. One of the newer therapies for managing hyperglycemia is the glucagon‐like peptide‐1 receptor agonist (GLP‐1RA) drug class. This review summarizes the effects of GLP‐1RAs in patients with T2DM with CKD and evidence for renoprotection with GLP‐1RAs using data from observational studies, prospective clinical trials, post hoc analyses, and meta‐analyses. Evidence from some preclinical studies was also reviewed. Taken together, subgroup analyses of patients with varying degrees of renal function demonstrated that glycemic control with GLP‐1RAs was not markedly less effective in patients with mild or moderate renal impairment vs that in patients with normal function. GLP‐1RAs were associated with improvements in some cardiorenal risk factors, including systolic blood pressure and body weight. Furthermore, several large cardiovascular outcome studies showed reduced risks of composite renal outcomes, mostly driven by a reduction in macroalbuminuria, suggesting potential renoprotective effects of GLP‐1RAs. In conclusion, GLP‐1RAs effectively reduced hyperglycemia in patients with mild or moderately impaired kidney function in the limited number of studies to date. GLP‐1RAs may be considered in combination with other glucose‐lowering medications because of their ability to lower glucose in a glucose‐dependent manner, lowering their risk for hypoglycemia, while improving some cardiorenal risk factors. Potential renoprotective effects of GLP‐1RAs, and their renal mechanisms of action, warrant further investigation.
GLP-1RAs are generally well tolerated, with a low risk of hypoglycemia when not taken with concomitant insulin or a sulfonylurea,17-23 but are associated with an increased frequency of gastrointestinal adverse events (AEs) such as nausea, vomiting, and diarrhea. 24 Gastrointestinal symptoms associated with GLP-1RAs occur early in the course of therapy and generally lessen over time. 25-27 Hydration is important for all patients with diabetes, particularly those with DKD, given that severe vomiting without commensurate fluid replacement can lead to hypovolemia and acutely worsening renal function. 26,27 Postmarketing cases of acute kidney failure associated with GLP-1RA use have been reported, 28 resulting in warnings and precautions in the prescribing information for their use in patients with impaired renal function.
Because renal impairment and T2DM are often comorbid conditions, a need exists for effective glucose-lowering therapies in patients with renal impairment. GLP-1RAs vary in their primary mechanism of metabolism and elimination, and while some agents undergo renal clearance, GLP-1RAs are not nephrotoxic. However, impaired renal function would be expected to affect the pharmacokinetics of renally eliminated GLP-1RAs, potentially increasing drug exposure and the risk of AEs. GLP-1RAs currently available in the United States include exenatide twice daily (bid), exenatide once weekly (qw), lixisenatide once daily (qd), liraglutide qd, dulaglutide qw, and semaglutide qw; albiglutide qw has been withdrawn from the market. Exenatide undergoes renal elimination and generalized proteolysis; exenatide qw is not recommended for patients with an eGFR below 45 mL/min/1.73 m². 18,19 Lixisenatide is eliminated through glomerular filtration, followed by tubular reabsorption and subsequent metabolic degradation; patients treated with lixisenatide who have mild, moderate, or severe renal impairment should be monitored for changes in renal function and for gastrointestinal AEs. 21 The "glutides" (liraglutide, dulaglutide, and semaglutide) are human GLP-1 analogs eliminated by general proteolysis pathways rather than renal elimination, but they should be used with caution in patients with renal impairment, particularly during treatment initiation or dose escalation, as adverse gastrointestinal reactions associated with GLP-1RAs can increase the risk of volume depletion and worsen renal function. 20,22,23

Given the relationship between diabetes and kidney disease, the objective of this review is to summarize the efficacy of GLP-1RAs and their effects on renal outcomes in patients with T2DM and renal impairment. To identify studies reporting effects of GLP-1RAs in patients with renal impairment, the US National Library of Medicine PubMed database was searched for combinations of relevant terms including "exenatide," "lixisenatide," "liraglutide," "dulaglutide," "semaglutide," "glucagon-like peptide-1 receptor agonists," "kidney," "renal," "nephropathy," and "diabetic kidney disease." The search was limited to English-language publications. Articles were manually searched to identify studies reporting the efficacy of GLP-1RAs in patients with varying degrees of renal function and effects of GLP-1RAs on renal outcomes, with additional studies identified within the reference lists of resultant articles.
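A search of this kind can also be reproduced programmatically; the sketch below uses Biopython's Entrez interface to the NCBI E-utilities. The exact query string, contact e-mail, and record limit are illustrative assumptions rather than the authors' actual procedure.

```python
from Bio import Entrez  # Biopython wrapper for the NCBI E-utilities

Entrez.email = "reviewer@example.org"  # placeholder contact address required by NCBI

# Illustrative combination of the drug and kidney terms listed above
query = (
    '("glucagon-like peptide-1 receptor agonists" OR exenatide OR lixisenatide '
    'OR liraglutide OR dulaglutide OR semaglutide) '
    'AND (kidney OR renal OR nephropathy OR "diabetic kidney disease") '
    'AND english[lang]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} records found; first PMIDs: {record['IdList'][:5]}")
```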
RENAL EFFECTS OF GLP-1RAs
Experiments evaluating exenatide and liraglutide in animal model systems have demonstrated improvements in glomerular hyperfiltration, albuminuria, oxidative stress, and histologic features indicative of DKD, suggesting a role for exenatide and liraglutide in protecting renal function (Table 1). 29-34 These effects may extend across the GLP-1RA class, although to date no preclinical studies have directly examined renoprotective effects of other GLP-1RAs. Importantly, preclinical studies have generated hypotheses regarding potential renoprotective effects of GLP-1RAs. While clinical studies on renoprotection are limited, several suggest that GLP-1RAs may promote improved kidney function in humans (Table 2). In addition, clinical studies including subgroup analyses stratified by renal function have examined the effect of impaired renal function on the efficacy of GLP-1RAs in patients with T2DM.
Exenatide
Several studies have examined the effect of renal function on the efficacy of exenatide treatment. A post hoc analysis of a randomized controlled trial (RCT) compared the effects of exenatide qw formulated for autoinjection with exenatide bid by renal function status. 51 As renal function decreased, the glycemic effect of exenatide bid increased (glycated hemoglobin [HbA1c] reductions of −0.7%, −1.3%, and −1.4% [−7.5, −14.6, and −15.2 mmol/mol] for the eGFR subgroups ≥90, 60-89, and 30-59 mL/min/1.73 m², respectively), while there was no effect on body weight. In contrast, renal impairment had no effect on HbA1c reductions associated with exenatide qw for autoinjection.

In the exenatide qw cardiovascular outcome trial (EXSCEL), 44 a subanalysis of the primary outcome of major adverse cardiovascular events (MACE; first occurrence of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke) in prespecified renal function subgroups demonstrated no significant treatment interaction, suggesting no effect modification by renal function status. In the total population, exenatide qw showed improvement in terms of overall difference from placebo for some cardiorenal risk factors, including reductions in systolic BP (SBP; −1.57 mmHg; P < .001) and body weight (−1.27 kg; P < .001). Furthermore, a composite renal outcome consisting of 40% eGFR decline, renal replacement, renal death, or new macroalbuminuria (UACR >300 mg/g) was significantly reduced with exenatide qw vs that with placebo in an analysis adjusted for age, sex, ethnicity, race, region, duration of diabetes, history of cardiovascular event, insulin use, baseline HbA1c, eGFR, and body mass index (5.8% vs 6.5%; adjusted hazard ratio [HR], 0.85 [95% confidence interval (CI), 0.73-0.98]; P = .027) (Figure 1). 45

The effect of exenatide on renal fibrosing factors has also been examined in patients with T2DM and renal impairment. Transforming growth factor β1 (TGF-β1) and type IV collagen both contribute to extracellular matrix accumulation in DKD. In a small study (N = 31) of patients with T2DM and microalbuminuria (defined as urinary albumin 30-300 mg/24 hours), 16 weeks of exenatide bid significantly reduced 24-hour urinary albumin (−38.0%), urinary TGF-β1 (−37.3%), and type IV collagen (−25.3%; P < .01 for all), whereas the glimepiride-treated group had no significant reductions in these measurements. 35 Exenatide also resulted in a small, nonsignificant reduction in SBP vs glimepiride.

However, neutral effects of exenatide bid on renal function have also been observed. A post hoc analysis of 54 patients without overt nephropathy treated with exenatide bid or insulin glargine for 52 weeks found no significant change from baseline in creatinine clearance or albuminuria (urinary albumin excretion and UACR) among exenatide-treated patients. 52 An observational study examined renal outcomes with glucose-lowering treatments among 466 patients studied sequentially over 3 years, 275 of whom were treated with a GLP-1RA (exenatide or liraglutide). 40 GLP-1RA-treated patients had a mean decrease in albuminuria (−39.6 mg/g; P < .0001), compared with a mean increase in albuminuria (+5.6 mg/g) in patients treated with unspecified glucose-lowering drugs. Among those with macroalbuminuria at baseline, greater proportions of GLP-1RA-treated patients improved to microalbuminuria (UACR 30-300 mg/g; 23%) or normoalbuminuria (UACR <30 mg/g; 2.8%) compared with those receiving unspecified glucose-lowering therapies (microalbuminuria, 12.3%; normoalbuminuria, 0%; P = .0005).
SBP was also lower among patients receiving GLP-1RAs (by 3 mmHg).
Lixisenatide
A post hoc meta-analysis of nine RCTs that examined lixisenatide in patients with normal renal function or with mild or moderate renal impairment found no difference in efficacy on the basis of renal status (end-of-study placebo-adjusted differences in HbA1c of −0.52%, −0.50%, and −0.85% [−5.7, −5.5, and −9.3 mmol/mol] for creatinine clearance subgroups ≥90, 60-89, or 30-59 mL/min, respectively). 53 However, a higher incidence of gastrointestinal AEs occurred with mild renal impairment vs the incidence with normal renal function.
In the Evaluation of Lixisenatide in Acute Coronary Syndrome (ELIXA) study (N = 6068), which examined cardiovascular outcomes with lixisenatide treatment in patients with T2DM who had a recent acute coronary syndrome, 23% of patients had eGFR 30 to 60 mL/min/1.73 m² and 0.1% had eGFR <30 mL/min/1.73 m². 42 A subgroup analysis of the primary outcome (time to event for the composite of death from cardiovascular causes, nonfatal myocardial infarction, nonfatal stroke, or hospitalization for unstable angina) demonstrated no significant interactions in prespecified renal function subgroups. In the total population, lixisenatide showed improvement in terms of average difference from placebo across all visits for some cardiorenal risk factors, including modest reductions in SBP (−0.8 mmHg; P = .001) and body weight (−0.7 kg; P < .001). In addition, lixisenatide resulted in a smaller increase in the UACR vs placebo (+24% vs +34%; P = .004) after 108 weeks of treatment. Subgroup analyses demonstrated significant reductions in UACR with lixisenatide vs those with placebo among patients with macroalbuminuria (UACR >300 mg/g) at baseline (treatment difference for percent change from baseline: −39.18%; P = .0070). Further, lixisenatide showed a reduced risk of progression to macroalbuminuria compared with placebo.

FIGURE 1 Composite renal outcomes with GLP-1RA treatment in patients with T2DM in cardiovascular outcome trials. 44
Liraglutide

The effects of liraglutide on renal measurements have also been examined in patients with T2DM and impaired renal function. In a 12-month longitudinal study of liraglutide (N = 84), eGFR reached the normal range (≥90 mL/min using the Chronic Kidney Disease-Epidemiology Collaboration equation) in 7 of 41 patients with baseline eGFR <90 mL/min. 38 Furthermore, three of five patients with baseline microalbuminuria returned to normal albuminuria. Among 23 patients with DKD who had received renin-angiotensin system blockers, 12-month treatment with liraglutide significantly decreased proteinuria from 2.53 to 1.47 g/g creatinine and reduced the rate of eGFR decline from 6.6 to 0.3 mL/min/1.73 m² per year. 39 In a small randomized controlled crossover trial (N = 32), treatment with liraglutide for 12 weeks significantly reduced the urinary albumin excretion rate vs placebo (−32%; P = .017) in patients with persistent albuminuria (UACR ≥30 mg/g) and eGFR ≥30 mL/min/1.73 m² who were receiving stable renin-angiotensin system-blocking treatment, further suggesting a renoprotective role for liraglutide. 36

The Liraglutide Effect and Action in Diabetes: Evaluation of Cardiovascular Outcome Results (LEADER) trial (N = 9340), in which ~23% of patients had moderate or severe renal impairment, studied cardiovascular outcomes during treatment with liraglutide vs those with placebo. 46 A prespecified subgroup analysis comparing the primary outcome of MACE in patients with moderate or severe renal impairment (eGFR <60 mL/min/1.73 m²) vs patients with eGFR ≥60 mL/min/1.73 m² showed a greater benefit of liraglutide in the moderate or severe renal impairment group (P = .01). However, a sensitivity analysis showed no clinically meaningful treatment interaction based on renal function. The LEADER trial also showed a beneficial effect of GLP-1RAs on some renal outcomes. The incidence of nephropathy (defined as new-onset macroalbuminuria or a doubling of serum creatinine level and eGFR ≤45 mL/min/1.73 m², the need for continuous renal replacement therapy, or death from renal disease) was lower with liraglutide vs that with placebo (5.7% vs 7.2%; HR, 0.78 [95% CI, 0.67-0.92]; P = .003) (Figure 1). 46 This result was driven by a 26% reduction in new-onset persistent macroalbuminuria. 47 Placebo-subtracted reductions in cardiorenal risk factors, including SBP (−1.2 mmHg) and body weight (−2.3 kg) at 36 months, were also observed. 46
Semaglutide
In the Semaglutide Unabated Sustainability in Treatment of Type 2 Diabetes (SUSTAIN-6) study (N = 3297), a cardiovascular outcome trial, 25%, 3%, and 0.4% of patients had eGFR 30-59, 15-29, and <15 mL/min/1.73 m², respectively. 48 An analysis of the primary outcome of MACE by renal function subgroup showed no significant treatment interaction. The SUSTAIN-6 study also had a prespecified secondary outcome of new or worsening nephropathy (defined as new-onset persistent macroalbuminuria, persistent doubling of serum creatinine level and creatinine clearance <45 mL/min/1.73 m² [per Modification of Diet in Renal Disease criteria], the need for continuous renal replacement therapy, or death due to renal disease). A smaller proportion of semaglutide-treated patients experienced new or worsening nephropathy vs that with placebo (3.8% vs 6.1%; HR, 0.64 [95% CI, 0.46-0.88]; P = .005) (Figure 1). This result was driven by a 46% reduction in macroalbuminuria.
Effect of GLP-1RAs on renal outcomes across cardiovascular outcome trials

A recent meta-analysis of cardiovascular outcomes trials, including EXSCEL, ELIXA, LEADER, and SUSTAIN-6, examined the effect of GLP-1RAs on progression of kidney disease. 57 GLP-1RAs were associated with an 18% reduction in the risk of a broad composite renal outcome consisting of new-onset macroalbuminuria, worsening of eGFR, ESRD, or death due to renal causes compared with placebo (HR, 0.82 [95% CI, 0.75-0.89]; P < .001). The reduction in risk was driven primarily by a reduction in macroalbuminuria, as excluding this outcome from the analysis resulted in a nonsignificant risk reduction. These results suggest that GLP-1RAs reduce renal events mainly by reducing macroalbuminuria.
MECHANISMS OF ACTION
Renal benefits of GLP-1RAs may be attributable to favorable effects on cardiorenal risk factors, including improved glucose control, BP lowering, and weight loss. In addition, GLP-1RAs may have direct renal effects, as GLP-1 receptors are expressed in the kidney. 58 The mechanism of action of GLP-1 in the kidney is not completely understood, but may involve both neural and nonneural pathways. 59 A gut-renal axis is possible, with regulatory linkages through the gastrointestinal tract, central nervous system, and kidney (Figure 2). 2 The main physiologic effect of GLP-1 on the kidney may be to reduce prandial intraglomerular pressure and thereby limit macronutrient loss in the glomerular filtrate. This would allow increased time for macronutrient uptake by other tissues without the need to expend energy transporting macronutrients back into the body through the proximal tubule or overwhelming the proximal tubule reuptake system for macronutrients such as glucose, amino acids, and free fatty acids. It may do so by decreasing sympathetic activity at the glomerulus through the central nervous system or by direct effects on the mesangium and renal interstitium.
Several studies have reported GLP-1RA-induced natriuresis in healthy subjects and in patients with T2DM, 60-63 possibly resulting from decreased activity of the sodium-hydrogen exchanger 3 (NHE3). GLP-1 receptor activation has been shown to inhibit the activity of NHE3 in the proximal tubule, which would increase distal tubular sodium delivery to the macula densa, resulting in tubuloglomerular feedback with reductions in intraglomerular pressure, hyperfiltration, and renin-angiotensin system activity. 2,58,64 Reducing intraglomerular pressure would be expected to have an antiproteinuric effect in the diabetic kidney and help preserve kidney function.
CONCLUSIONS
DKD is a common comorbidity of T2DM; therefore, glucose-lowering treatments that are efficacious, do not increase hypoglycemia, and may have additional benefits for the kidney are of interest. In the limited number of studies to date investigating the effect of renal function on the efficacy of GLP-1RA treatment, GLP-1RAs improved glycemic control in patients with mild to moderately impaired kidney function, without significant differences compared with patients with normal renal function.
Hyperglycemia, obesity, and hypertension all contribute to the development of kidney and heart disease, 4 and the multiple effects of GLP-1RAs for improving glycemic control, body weight, and BP may be beneficial for delaying the onset or progression of DKD. However, GLP-1RAs may potentially have direct effects on the kidney as well.
In animal models, GLP-1RAs may have a renoprotective effect, as demonstrated by improvement in some renal function measures and histologic features. In addition, these agents were associated with a lower incidence of diabetic nephropathy and/or albuminuria compared with placebo in several large clinical studies. These observations should be the basis for continued research efforts into the long-term effects of GLP-1RAs on kidney function and mechanistic studies examining how GLP-1RAs affect the kidney, potentially through the gut-renal axis.
For now, GLP-1RAs should be considered in combination with other complementary glucose-lowering medications in patients with CKD, due to their safety and ability to lower glucose in a glucose-dependent manner.
ORCID
Lance A. Sloan https://orcid.org/0000-0001-7362-4410

FIGURE 2 The GLP-1 gut-renal axis. The role of GLP-1 is to facilitate macronutrient storage through multiple pathways. Two of the pathways shown (the CNS and direct pathways) may affect the kidneys by decreasing intraglomerular pressure. This potentially may result in decreased nutrient loss or energy expenditure needed to reabsorb nutrients such as glucose or amino acids. GLP-1 works through the pancreatic pathway to increase macronutrient storage in the liver, skeletal muscle, and fat by increasing insulin and decreasing glucagon levels in a glucose-dependent manner. The dotted lines represent proposed mechanisms whereby the brain, potentially through the autonomic nervous system, may reduce sympathetic activity, insulin resistance, and intraglomerular pressure. Green text indicates an increase; red text indicates a reduction. ATP, adenosine triphosphate; CNS, central nervous system; GI, gastrointestinal; GLP-1, glucagon-like peptide-1; NHE, sodium-hydrogen exchanger | 2019-07-19T13:21:25.270Z | 2019-08-14T00:00:00.000 | {
"year": 2019,
"sha1": "9281e429beaa5fcff89dd7e616e9277aa737e552",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1753-0407.12969",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2f4e47041744eabace076c0fdb9807fd622a74c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242024829 | pes2o/s2orc | v3-fos-license | Low-Carbohydrate Diet among Children with Type 1 Diabetes: A Multi-Center Study
Aims/hypothesis: The proportion of children with type 1 diabetes (T1D) who have experience with a low-carbohydrate diet (LCD) is unknown. Our goal was to map the frequency of the LCD among children with T1D and to describe their clinical and laboratory data. Methods: Caregivers of 1040 children with T1D from three centers were addressed with a structured questionnaire regarding the children's carbohydrate intake and experience with the LCD (daily energy intake from carbohydrates below 26% of age-recommended values). The subjects currently on the LCD were compared to a group of non-LCD respondents matched for age, T1D duration, sex, and type and center of treatment. Results: A total of 624/1040 (60%) of the subjects completed the survey. A total of 242/624 (39%) subjects reported experience with voluntary carbohydrate restriction, with 36/624 (5.8%) subjects currently following the LCD. The LCD group had similar HbA1c (45 vs. 49.5, p = 0.11), lower average glycemia (7.0 vs. 7.9, p = 0.02), higher time in range (74% vs. 67%, p = 0.02), lower time in hyperglycemia >10 mmol/L (17% vs. 20%, p = 0.04), a tendency toward more time in hypoglycemia <3.9 mmol/L (8% vs. 5%, p = 0.05) and a lower systolic blood pressure percentile (43 vs. 74, p = 0.03). The groups did not differ in their lipid profile nor in current body height, weight or BMI. The LCD was mostly initiated by the parents or the subjects themselves, and only 39% of the families consulted their diabetologist about the decision. Conclusions/interpretation: The low-carbohydrate diet is not rare among children with T1D and is associated with modestly better disease control. At the same time, caution should be applied, as it showed a tendency toward more frequent hypoglycemia.
Introduction
Type 1 diabetes (T1D) requires frequent insulin administration, intensive self-monitoring of blood glucose and daily control of nutrient intake to achieve tight metabolic control. Yet, the globally recommended targets are often not met, especially in children and adolescents [1]. Calls are therefore growing for some form of adjunctive therapy that would allow more patients to achieve the metabolic goals. The most effective strategies include technological solutions [2,3], pharmacotherapy [4] and dietary interventions, including the low-carbohydrate diet (LCD) [5,6].
According to the latest International Society for Pediatric and Adolescent Diabetes (ISPAD) guidelines, the dietary recommendations for children with T1D are similar to those for the general population, with 45-50% of daily energy intake from carbohydrates [7]. Despite this, an alternative approach is emerging, suggesting carbohydrate restriction in the form of low-carbohydrate diets (LCD). The definitions of the LCD differ and are especially challenging for childhood. A review by Seckold et al. suggested a classification based on mean estimated energy requirements for the age of the child [8]. According to the ADA, 130 g of carbohydrates per day is an average minimum requirement for adults [9]. A special subgroup, the very-low-carbohydrate diet (VLCD), is also often defined [8,10].
The supporters of these low-carbohydrate diets argue that lower dietary carbohydrate intake eliminates postprandial hyperglycemia, decreases glycemic variability and, through a lower insulin dose, decreases the risk of hypoglycemia [11]. In adults, several studies provide indirect evidence of the desirable effects of the LCD. Krebs et al. [12] performed a small randomized trial with long-established T1D patients in which the LCD led to an improvement of their HbA1c but did not significantly affect their body weight or continuous glucose monitoring (CGM) outcomes. In a Danish crossover study, 10 adults with T1D on sensor-augmented pump therapy spent more time in range, less time in hypoglycemia and had lower glycemic variability during a week on the LCD [13]. Less time in hypoglycemia and lower glycemic variability were also observed on the LCD by the same group of authors in a study with 14 adults with T1D and 12-week study periods [14].
The evidence in children with T1D is much scarcer. In a much-cited survey by Lennerz et al. [15], 131 parents of children with T1D on the LCD self-reported excellent metabolic control, high satisfaction with diabetes management and a remarkably low rate of acute complications. The design of the study nevertheless does not allow us to draw any strong conclusions, as noted in a comment by Mayer-Davis et al. [16]. On the other hand, serious concerns are often raised about the lipid profile and cardiovascular risk of patients on the LCD. An adverse lipid profile indicating increased cardiovascular risk, as well as hindered growth, has been observed in children following the LCD [17]. Excessive weight loss and growth retardation were described in patients with epilepsy treated with a ketogenic (i.e., very strict low-carbohydrate, high-fat) diet [18,19]. On top of that, patients on the LCD may be at increased risk of severe hypoglycemia, as their glucagon response is blunted [20]. It is also necessary to mention that dietary restrictions, often imposed by the parents, might lead to diabetes distress and diabetes-related family conflict, which worsen quality of life [21,22]. Dietary restrictions also increase the risk of developing eating behavior disorders [23], which have been shown to increase morbidity and mortality in patients with T1D [24].
As of today, there are only limited data on the prevalence of LCD use among children with T1D, let alone on its safety and efficacy. The aim of this study is to map the frequency of the LCD use among pediatric patients with T1D and retrospectively compare their metabolic and clinical features to their matched T1D controls without such dietary restrictions.
Study Population Characteristic
The setting of this study was the three largest tertiary referral centers for pediatric diabetes in the Czech Republic: two in Prague (University Hospital Motol and University Hospital Kralovske Vinohrady) and one in Brno (University Hospital Brno). As of May 2020, during the course of the study, these centers collectively provided care for 1258 children and adolescents with T1D.
The desired contact information (e-mail) was available for a total of 1040/1258 (83%) parents/caregivers of patients with T1D diagnosed according to ADA criteria [25] in the study centers. The study flowchart is detailed in Figure 1. All available parents/caregivers were offered participation in the form of a structured electronic survey regarding their children's dietary habits. All of the respondents had previously signed written consent to data collection and analysis for the ČENDA Registry, including anonymized secondary research. Individual electronic consent was obtained from the survey respondents for data analysis and identified matching to the registry data.
Assessment of Dietary Habits
Parents/caregivers were asked to report a three-day record of the carbohydrate intake of their child/children with T1D and to report their experience with the LCD. Subjects were asked whether they voluntarily and significantly reduced their dietary carbohydrate intake (i.e., followed the LCD) on a regular basis, either in the present or in the past (for at least 3 months). LCD subjects were considered those who claimed to be following the LCD and whose reported three-day average carbohydrate intake was <26% of the daily age-specific recommended energy intake from carbohydrates [8]. As this criterion approximately equals 130 g per day at 11 years, for children above 11 years 130 g of carbohydrates per day was considered the upper limit of the LCD [26]. The respondents who admitted voluntary carbohydrate restriction but did not fulfill the criteria for total LCD were considered to be keeping a partial LCD and were excluded from further analysis. We further defined a sub-group of subjects following the very-low-carbohydrate diet (VLCD), i.e., below 50 g per day, or <10% of daily energy intake from carbohydrate for children below 11 years [8,10]. The second part of the questionnaire consisted of nine multiple-choice questions in which the respondents provided information on the initiation of the LCD, sources of information and subjective changes observed after the start of the LCD. Another two questions concerning the reasons for LCD termination were asked of the patients who had followed the LCD in the past.
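A minimal sketch of this classification rule is shown below. It assumes the age-specific recommended daily energy intake is supplied by the caller, converts carbohydrate at the standard 4 kcal/g, and interprets the 50 g/day VLCD threshold as applying to children above 11 years (analogous to the 130 g LCD limit); the function name and example values are illustrative only.

```python
# Sketch of the LCD / VLCD classification described above (illustrative, not study code).
KCAL_PER_G_CARB = 4.0

def classify_diet(carb_g_per_day: float, age_years: float,
                  recommended_kcal_per_day: float) -> str:
    """Classify reported carbohydrate intake as VLCD, LCD, or not low-carbohydrate."""
    carb_energy_fraction = carb_g_per_day * KCAL_PER_G_CARB / recommended_kcal_per_day

    if age_years < 11:
        # Criteria expressed as a share of the age-recommended energy intake
        if carb_energy_fraction < 0.10:
            return "VLCD"
        if carb_energy_fraction < 0.26:
            return "LCD"
    else:
        # Above 11 years the study used absolute thresholds in g/day
        if carb_g_per_day < 50:
            return "VLCD"
        if carb_g_per_day < 130:
            return "LCD"
    return "not low-carbohydrate"

# Example: a 13-year-old reporting 100 g/day (close to the LCD group's median intake)
print(classify_diet(100, 13, 2000))  # -> "LCD"
```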
Clinical and Laboratory Data
The clinical and laboratory data were obtained from children currently on the LCD and their matched controls (see below). Clinical data on the subjects' age, sex, age at T1D onset, type of therapy (multiple daily insulin injections, MDI, or continuous subcutaneous insulin infusion, CSII) and the center of therapy, as well as HbA1c (last available and yearly average), three-day averages of bolus and basal insulin doses, body height and weight and blood pressure, were obtained from the ČENDA Registry [3,27] for the last visit preceding the collection of the survey. Standard deviation scores for body height, weight and BMI were calculated from population-based data [28], as were blood pressure percentiles [29]. Lipid profile (total cholesterol, triglycerides, LDL cholesterol and HDL cholesterol) and data from continuous glucose monitors (CGM) were obtained from the subjects' medical records assessed by the investigators. CGM data from the last 14 days before the last visit before the survey distribution were downloaded using specialized software (Diasend, LibreView, CareLink) and the values for standard CGM metrics [30] were used for the analysis: time in the target range (TIR) (between 3.9-10.0 mmol/L), time in level 1 (between 3.0-3.8 mmol/L) and level 2 (below 3.0 mmol/L) hypoglycemia, time in level 1 (between 10.1-13.9 mmol/L) and level 2 (above 13.9 mmol/L) hyperglycemia, average glycemia (AG), coefficient of variation (CV) and standard deviation of glycemia (SD).
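The sketch below shows how these standard CGM metrics can be computed from a list of sensor readings in mmol/L. It assumes equally spaced readings, so that the fraction of readings in a band equals the fraction of time in that band; the band limits follow the cut-offs stated above.

```python
import statistics

def cgm_metrics(glucose_mmol_l: list[float]) -> dict:
    """Standard CGM metrics from equally spaced sensor readings (mmol/L)."""
    n = len(glucose_mmol_l)
    in_band = lambda lo, hi: sum(lo <= g <= hi for g in glucose_mmol_l) / n

    mean_g = statistics.mean(glucose_mmol_l)
    sd_g = statistics.stdev(glucose_mmol_l)
    return {
        "time_in_range_%":   100 * in_band(3.9, 10.0),                       # TIR
        "level1_hypo_%":     100 * in_band(3.0, 3.8),
        "level2_hypo_%":     100 * sum(g < 3.0 for g in glucose_mmol_l) / n,
        "level1_hyper_%":    100 * in_band(10.1, 13.9),
        "level2_hyper_%":    100 * sum(g > 13.9 for g in glucose_mmol_l) / n,
        "average_glycemia":  mean_g,                                          # AG
        "SD":                sd_g,
        "CV_%":              100 * sd_g / mean_g,
    }
```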
Statistical Analysis
Continuous data are presented as medians with interquartile range (IQR). Categorical data are summarized using absolute and relative frequencies. Comparisons of the responder set to the non-responders and of the VLCD group to the LCD group were carried out using the Wilcoxon two-sample test for continuous variables and Fisher's exact test for categorical variables.
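A minimal illustration of these unpaired tests using SciPy is given below; the data arrays and the contingency table are hypothetical placeholders. The Wilcoxon two-sample (rank-sum) test is equivalent to the Mann-Whitney U test used here.

```python
from scipy import stats

# Hypothetical continuous variable (e.g., HbA1c) in responders vs. non-responders
respondents = [52, 48, 61, 55, 47, 58]
non_respondents = [54, 60, 49, 63, 57, 66]
u_stat, p_continuous = stats.mannwhitneyu(respondents, non_respondents,
                                          alternative="two-sided")

# Hypothetical 2x2 table for a categorical variable (e.g., sex by group)
table = [[20, 30], [25, 25]]
odds_ratio, p_categorical = stats.fisher_exact(table)

print(p_continuous, p_categorical)
```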
For more detailed comparison, the subjects currently on LCD were randomly matched with respondents who had no experience with LCD. The matching variables were center, sex, treatment type (MDI or CSII), age at the time of survey collection and age of T1D onset. For age the allowed difference was 1 year for patients younger than 12 years, 2 years for those aged 12-14.99 and 2.5 years for those aged 15 and more. For the age at T1D onset the difference was a maximum of 2 years for patients younger than 10 years and 4 years for 10 years and older. Patients' characteristics and measurements were then compared using paired Wilcoxon signed rank test for continuous variables and McNemar test for categorical variables.
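The matching rule can be expressed compactly as below. Each subject is assumed to be a dictionary with illustrative field names (not taken from the registry), and the age tolerances are applied relative to the case's age, which is one reading of the description above.

```python
def age_tolerance(age_years: float) -> float:
    """Maximum allowed difference in current age between an LCD case and a control."""
    if age_years < 12:
        return 1.0
    if age_years < 15:
        return 2.0
    return 2.5

def onset_tolerance(onset_age_years: float) -> float:
    """Maximum allowed difference in age at T1D onset."""
    return 2.0 if onset_age_years < 10 else 4.0

def is_eligible_control(case: dict, candidate: dict) -> bool:
    """Check whether a non-LCD respondent satisfies the matching criteria for a case."""
    return (
        case["center"] == candidate["center"]
        and case["sex"] == candidate["sex"]
        and case["treatment"] == candidate["treatment"]        # MDI or CSII
        and abs(case["age"] - candidate["age"]) <= age_tolerance(case["age"])
        and abs(case["onset_age"] - candidate["onset_age"]) <= onset_tolerance(case["onset_age"])
    )
```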
Frequency of the LCD among the Responders
A total of 242/624 (38.7%) of the subjects claimed they had experience with voluntary reduction of their dietary carbohydrate intake. These include the subjects who reported experience with a partial LCD (e.g., a low-carbohydrate breakfast) in the present or the past (129/624, 20.7%) and those who claimed to be following a total LCD either in the present or the past (113/624, 18.1%). At the time of the survey, 36/624 (5.8%) subjects were on a total LCD. A further 31/624 (5.0%) subjects reported following a total LCD in the past (with reported carbohydrate intake fulfilling the criteria) but decided to end it before the time of the survey. The remaining 46/624 (7.4%) subjects claimed to be following a total LCD but their reported carbohydrate intake did not fulfill the ADA criteria [26] for the LCD. Five of the 36 subjects on the LCD (13.8%) followed the VLCD.
The frequency of LCD was not distributed equally among the centers with centers from Prague having 6.0% and 6.1% frequency of LCD subjects while the center in Brno showed only 2.4% frequency (Fisher Exact test p = 0.04). Total LCD was more frequent in females with 25 subjects vs. 11 male subjects (Fisher Exact test p = 0.02).
Comparison of LCD Subjects to Their Non-LCD Matched Controls
The results are shown in Table 1. The subjects on the LCD started the diet at a median age of 11.2 years (95% CI 8.2-12.4 years) and kept the LCD for a median of 1.1 years (95% CI 0.6-1.9 years). The subjects on the LCD had lower doses of bolus insulin (10.0 vs. 21.5 U/day, p < 0.001) but similar basal doses (12.0 vs. 13.5 U/day, p = 0.76). The total daily insulin dose per kilogram of body weight was consequently lower in the LCD group (0.6 vs. 0.8 U/kg/day, p < 0.001).
Table 1. Comparison of subjects on the low-carbohydrate diet with matched non-LCD controls (medians with p-values).
Questions Specific for the LCD Group
The results are shown in Table 2. The subjects' motivation for LCD initiation was mostly the desire for better diabetes control and a healthy lifestyle, with weight reduction also given as one of the more frequent reasons. In the vast majority of the subjects, the parents/caregivers or the subjects themselves decided to start the LCD. More than a third of the subjects did not consult their diabetologist prior to this decision. Subjects generally sought the Internet, books and other parents of children with T1D for information on the LCD; only a minority asked their diabetologist. The subjects reported better T1D control, lower insulin dose and weight reduction as the positive changes brought about by the LCD. On the other hand, they reported more time and money spent on meal preparation, more frequent hypoglycemia and fatigue. The decision to terminate the LCD came equally often from the parents/caregivers as from the subjects themselves. The main reasons for LCD termination were non-compliance of the child and observed side effects (fatigue, frequent hypoglycemia); less frequent reasons listed in Table 2 were school-related conflict (2/31, 6.5%), family conflict (1/31, 3.2%) and no answer (2/31, 6.5%). (Table 2 notes: subjects were allowed to give multiple answers; only subjects who had terminated the LCD answered the termination questions, N = 31.)
Comparison of the VLCD Sub-Group to LCD Subjects
The subjects who followed the very-low-carbohydrate diet (N = 5) did not significantly differ from the LCD subjects in age, duration of T1D, sex or type of treatment. They kept the diet longer than the subjects on the LCD (3.0 vs. 0.9 years, p = 0.004). They had significantly lower daily carbohydrate intake (35 vs. 100 g, p < 0.001) but did not differ in insulin doses or in their disease control as assessed by HbA1c or CGM values. The subjects on the VLCD had a tendency toward higher body weight SDS (1.8 vs. 0.4, p = 0.22) and BMI SDS (1.4 vs. 0.4, p = 0.21), but the differences did not reach statistical significance. The VLCD subjects also showed a disturbed lipid spectrum, with marginally higher total cholesterol (5.3 vs. 4.7 mmol/L, p = 0.05) and lower HDL cholesterol (1.3 vs. 1.5 mmol/L, p = 0.08). All analyses can be found in Supplementary Table S2.
Conclusions
The results of the study on a representative cohort indicate that use of the LCD is quite commonplace among children/adolescents with T1D, with 38.7% having experience with carbohydrate reduction and 5.8% currently keeping the LCD. Their main motivation for initiating the diet was to improve their glycemic curves, and we have shown that children with T1D on the LCD tend to have excellent disease control at the cost of slightly more time spent in hypoglycemia. The subjects or their parents/caregivers mostly seek non-professional sources for advice on this nutritional intervention, a fact that should not be ignored since restrictive diets tend to lead to imbalanced nutritional intake with possibly harmful consequences [17].
The comparison between the LCD subjects and a well-matched control group revealed that the LCD group had excellent disease control, with medians for TIR and time in hyperglycemia falling well within the recommended zones. A lower standard deviation of glycemia also suggests more stable glycemia in the LCD group, possibly due to lower postprandial excursions, which were mentioned as one of the possible effects earlier [14,31]. Lower blood pressure in LCD subjects was also reported in an adult cohort in a study by Ahola [31]. Our finding of lower systolic pressure among our LCD subjects might be linked to lower insulin doses: higher insulinemia was found to be connected to a greater increase in blood pressure in children and adolescents over the course of 6 years [32], and hyperinsulinemia was also found to be an independent risk factor for the development of hypertension [33].
Among the drawbacks might be the already described tendency toward increased time in hypoglycemia [34], yet the observed differences bordered on significance only for level 1 hypoglycemia, and there was no indication of an increased frequency of the more severe level 2 hypoglycemia. On the other hand, hypoglycemia was one of the more commonly reported reasons for LCD discontinuation, and two of the subjects who terminated the LCD had an episode of severe hypoglycemia, which recalls the findings of Ranjan [20], who described a blunted glucagon response to hypoglycemia on the LCD. We hypothesize that more frequent and severe hypoglycemia can occur shortly after LCD initiation as a result of inadequate lowering of insulin doses, since those who mentioned hypoglycemia as the reason for LCD termination were the ones who ended the LCD early. No disturbances were noted in the lipid profile of the subjects, which contrasts with a case series by de Bock et al., which describes a disturbed lipid spectrum in four of six cases [17]. The probable explanation for this difference is that the patients described had a daily carbohydrate intake below 50 g, whereas the median in our cohort was 96.5 g. In support of this, our analysis of the very-low-carbohydrate subset within our cohort also identified a tendency toward higher total cholesterol and lower HDL compared with the regular LCD.
The vast majority of the subjects following the LCD were followed at the centers in the capital, Prague, rather than in the regional city of Brno. This might be linked to the higher per capita income in Prague [35] as well as to greater knowledge of special diets and nutrition in larger urban areas [36]. More than two thirds of subjects currently on the LCD were female, possibly due to the higher observed tendency for dietary interventions in females with T1D [37]. Another possible explanation is that the LCD group includes teenage girls whose main reason for the LCD is to decrease the insulin dose and consequently reduce body weight.
Despite generally not being discouraged from the LCD by diabetologists when consulted, the patients often did not discuss their decision to start the LCD with them. Instead, they relied on unofficial sources such as Facebook groups or pages that promote the LCD in adults without T1D. This might have dire consequences, as the effects of carbohydrate restriction in children, let alone children with T1D, are not fully explored. We would therefore promote a cautionary position, with emphasis on the possible risks as well as the benefits of the LCD. Our study did not focus on the added psycho-social burden of restrictive diets [38], yet some subjects reported increased family and school-related conflict, creating a possibility for future research.
Among the strengths of this study are its considerable response rate and relatively large study population of children/adolescents with T1D. The data for the ČENDA Registry are collected quarterly and thus provide very accurate and recent anthropometric and metabolic information on the patients. Furthermore, the compared groups were tightly matched to minimize potential bias.
Among the weaknesses of this study is that it omitted the smaller centers for diabetes care in the Czech Republic, where the prevalence of the LCD would presumably be lower. Due to a lack of contact information, we reached only 83% of the children followed in the study centers. Furthermore, the carbohydrate intake in our study was self-reported by the subjects or their parents and lacked information on daily protein and fat intake. In addition, the design does not allow us to infer any causality; an intervention study addressing this is underway. Anthropometric features such as body weight and height, as well as BMI, would also need to be followed longitudinally to assess possible effects on growth.
Our study underlines the fact that carbohydrate reduction is considerably popular among children/adolescents with T1D. Patients often seek non-professional sources of information and do not consult their diabetologists about their decisions. The observed positive effects should not be overestimated, and until proper prospective trials are conducted, we should inform our patients of the potential risks, especially the increased risk of hypoglycemia and the possible disturbance of the lipid spectrum in the case of the VLCD. We should therefore individualize treatment and seek safer ways to optimize glycemia together with our patients.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/nu13113903/s1: Table S1, Comparison of questionnaire respondents (N = 624) to non-respondents (N = 634), data shown as median (IQR); Table S2, Comparison of the VLCD sub-group to the LCD subjects.
Data Availability Statement: All data used for the analysis in this article are available on request from the authors.
Abbreviations: CGM, continuous glucose monitoring; CV, glucose coefficient of variation; LCD, low-carbohydrate diet; SD, standard deviation of glycemia; SDS, standard deviation score; VLCD, very low-carbohydrate diet | 2021-11-04T15:15:30.826Z | 2021-10-30T00:00:00.000 | {
"year": 2021,
"sha1": "adc1eef2aa00a66e12357214aa606b58ac753d4c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/13/11/3903/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c713514e49108ec9909afe345c0c3b82b9723682",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
135709581 | pes2o/s2orc | v3-fos-license | Particle Shape Modification in Comminution
The evolution of particle shape during the course of comminution processes has been investigated. Shape is characterized using a variety of quantitative shape descriptors determined from particle profiles obtained by image analysis. Descriptors related to particle elongation, roundness and angularity are emphasized. Distributions of the descriptors have been determined for a range of particle sizes, for different extents of grinding for various equipment types. For a given descriptor, the distributions of measured values generally follow a consistent pattern (often roughly log normal). Typically, the means and standard deviations show progressive changes as grinding time increases. For the most part, prolonged exposure to the grinding environment leads to rounding of the particles.
Introduction
The importance of particle shape has gained increasing recognition in recent years. It is clear that shape can play a significant role in the use of particles as abrasives and in applications involving packing in powder compacts, slurry rheology, etc. For particles produced by comminution, shape may be determined by material characteristics such as crystal cleavage and by the nature of the breakage process involved. While it is generally recognized that comminution can lead to changes in particle shape, relatively few attempts to quantify these effects have been reported. Furthermore, largely due to the lack of widely accepted measures of shape, there is some disagreement on the evolution of particle shape in grinding processes. For example, Bond [1] considered that the character of the material being broken has more influence on the shape of the product than the type of size reduction machine used. Similarly, Heywood [2] stated that the shape of particles produced on initial fracture is dependent upon the characteristics of the material. On the other hand, Rose [3] suggested that the type of the mill has the major effect on the particle shape although the properties of the material are also a factor. Similarly, Charles [4] observed that the shape of glass particles produced by a single fracture depended on the rate of application of stress.
The particular mode of breakage is likely to affect the shape of product particles. Massive fracture can be expected to produce highly irregular particles with sharp edges formed by the intersection of propagating cracks. Attrition of particles, by surface erosion or chipping at edges or corners, is more likely to cause rounding of particles although the small fragments removed may be quite irregular in shape. It follows that grinding conditions that favor one breakage mode over another may be critical in determining product particle shape. For any system, the prevailing conditions will generally depend on the type of machine and the properties and size of the particles being broken. Since there are normally distributions of particle sizes and applied stresses, it is reasonable to expect a distribution of product shapes. Gaudin [5] observed that large particles were often subject primarily to attrition, tending to become more rounded in shape, while finer material typically underwent massive fracture leading to more angular product particles. Based on examination of crushed glass particles, Tsubaki and Jimbo [6] concluded that particle shape varies with size.
Holt [7] reviewed the effect of comminution devices on particle shape and concluded that single-pass devices such as roll crushers generally produce angular particles while retention systems such as ball mills produce more rounded particles. Dumm and Hogg [8] showed that the rounding effect in ball mills becomes more pronounced with increased grinding time. Durney and Meloy [9] investigated the shape of particles produced by jaw crushers using Fourier analysis to provide a quantitative description of the shape characteristics. In particular, they compared the results of crushing under single-particle and choke-feeding conditions. They observed that when the particles were fed into the mill one at a time the products were highly angular, while choke feeding produced more "blocky" or rounded particles. In each case, the finer particles had more angular and irregular shapes.
In the present paper, populations of crushed particles from different comminution devices were evaluated using a computerized image analysis technique to quantify particle shape in terms of physically recognizable shape descriptors.
Shape Analysis
Several different approaches to the analysis of particle shape have been described in the literature. Fourier analysis of the vector of polar coordinates that represent the particle profile yields a set of Fourier coefficients that, in principle, define the shape of the particle [10,11]. Meloy [12] observed simple correlations among the coefficients for a variety of different shapes and defined a two-parameter particle "signature" as an overall characteristic of the particle shape. Durney and Meloy [9] used statistical procedures to detect significant differences in the Fourier coefficients obtained from different populations. Fractal analysis [13,14] has been widely applied to particle shape characterization and is especially attractive for highly complex shapes such as those of agglomerates. The approach adopted in the present work is based on an attempt to define shape in terms of physically recognizable features such as elongation and angularity. An image processing system was used to provide digitized particle profiles from optical microscopy (or from scanning electron micrographs) by means of a procedure developed by Dumm and Hogg [8] and modified by Kumar [15] and Kaya [16]. A series of straight-line segments was fitted to the set of N points representing the complete profile as illustrated in Figure 1. The intersections of these linear segments define a reduced set of n perimeter points (n ≪ N) and describe an n-sided polygon that represents the essential features of the original profile.
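The specific segment-fitting procedure of Dumm and Hogg [8] is not reproduced here; the sketch below uses the well-known Ramer-Douglas-Peucker polyline simplification as a stand-in to show how N digitized perimeter points can be reduced to n polygon vertices. The tolerance epsilon and the choice of split points for the closed contour are assumptions, not the authors' method.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of an open polyline of (x, y) points."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    seg_len = math.hypot(x2 - x1, y2 - y1) or 1e-12

    def dist(p):
        # Perpendicular distance of point p from the chord joining the end points
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / seg_len

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], start=1)),
                    key=lambda t: t[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = rdp(points[:idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right

def simplify_profile(profile, epsilon):
    """Reduce a closed digitized contour to a polygon by splitting it at two
    extreme points and simplifying each arc."""
    i0 = min(range(len(profile)), key=lambda i: profile[i][0])   # leftmost point
    i1 = max(range(len(profile)), key=lambda i: profile[i][0])   # rightmost point
    lo, hi = sorted((i0, i1))
    arc1 = profile[lo:hi + 1]
    arc2 = profile[hi:] + profile[:lo + 1]
    return rdp(arc1, epsilon)[:-1] + rdp(arc2, epsilon)[:-1]
```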
In order to minimize the errors associated with the use of the fitted polygon, the projected area and centroid of the image were first obtained from the original digitized profile. Designating the x-y coordinates of the original perimeter points as (x_i, y_i), the cross-sectional area of the particle was determined from:

$$A = \frac{1}{2}\left|\sum_{i=1}^{N}\left(x_i y_{i+1} - x_{i+1} y_i\right)\right| \quad (1)$$

where the point at (x_{N+1}, y_{N+1}) represents the return to the initial starting point (i = 1). The coordinates $\bar{x}$, $\bar{y}$ of the centroid of the image can be obtained from:

$$\bar{x} = \frac{1}{6A}\sum_{i=1}^{N}\left(x_i + x_{i+1}\right)\left(x_i y_{i+1} - x_{i+1} y_i\right) \quad (2)$$

$$\bar{y} = \frac{1}{6A}\sum_{i=1}^{N}\left(y_i + y_{i+1}\right)\left(x_i y_{i+1} - x_{i+1} y_i\right) \quad (3)$$

Fig. 1 Representation of a particle profile consisting of N perimeter points (N = 40 in this example) by a fitted polygon defined by a set of n reduced perimeter points (n = 7 in this case).
The equivalent-circle mean radius of the particle can be calculated using:

$$R_0 = \sqrt{A/\pi} \quad (4)$$

The radial vectors R_i from the centroid to each of the n perimeter points on the reduced profile are given by:

$$R_i = \sqrt{\left(x_i - \bar{x}\right)^2 + \left(y_i - \bar{y}\right)^2} \quad (5)$$

The angles φ_i between adjacent edges of the fitted polygon were evaluated by applying the cosine rule to the triangles defined by the edge and the adjacent radial vectors, as shown in Figure 1. Thus, for the edges intersecting at the perimeter point (x_i, y_i), the angle α_i is given by:

$$\alpha_i = \cos^{-1}\!\left(\frac{R_i^2 + L_i^2 - R_{i+1}^2}{2 R_i L_i}\right) \quad (6)$$

where L_i is the length of the edge connecting points (x_i, y_i) and (x_{i+1}, y_{i+1}) (see Figure 1). Similarly, β_i can be obtained from:

$$\beta_i = \cos^{-1}\!\left(\frac{R_i^2 + L_{i-1}^2 - R_{i-1}^2}{2 R_i L_{i-1}}\right) \quad (7)$$

The angle φ_i is simply the sum α_i + β_i. Feret's diameters are defined as the distance between two parallel lines tangent to opposite sides of a particle, in some particular orientation. For any perimeter point, the corresponding Feret's diameter, (d_F)_i, can be obtained by projecting the other perimeter points onto the vector R_i and determining the maximum distance from the original point (see Figure 1).
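A compact sketch of these calculations is given below. The vertex ordering is assumed to be consistent (clockwise or counter-clockwise), and the angle decomposition assumes the centroid lies inside the polygon, as in Figure 1; the helper names are illustrative.

```python
import math

def polygon_geometry(vertices):
    """Area, centroid, equivalent-circle radius, radial vectors and interior angles
    for an ordered list of (x, y) polygon vertices (relations given above)."""
    n = len(vertices)
    nxt = lambda i: (i + 1) % n                      # point n+1 wraps back to point 1

    # Shoelace terms (signed), then absolute value for the area
    cross = [vertices[i][0] * vertices[nxt(i)][1] - vertices[nxt(i)][0] * vertices[i][1]
             for i in range(n)]
    signed_area = 0.5 * sum(cross)
    area = abs(signed_area)

    # Centroid of the polygon (signed area keeps the orientation consistent)
    cx = sum((vertices[i][0] + vertices[nxt(i)][0]) * cross[i] for i in range(n)) / (6 * signed_area)
    cy = sum((vertices[i][1] + vertices[nxt(i)][1]) * cross[i] for i in range(n)) / (6 * signed_area)

    # Equivalent-circle mean radius and radial vectors from the centroid
    r0 = math.sqrt(area / math.pi)
    radii = [math.hypot(x - cx, y - cy) for x, y in vertices]

    # Interior angle at each vertex via the cosine rule, split by the radial vector
    edges = [math.dist(vertices[i], vertices[nxt(i)]) for i in range(n)]
    clamp = lambda v: max(-1.0, min(1.0, v))         # guard against rounding error in acos
    angles = []
    for i in range(n):
        prv = (i - 1) % n
        alpha = math.acos(clamp((radii[i]**2 + edges[i]**2 - radii[nxt(i)]**2)
                                / (2 * radii[i] * edges[i])))
        beta = math.acos(clamp((radii[i]**2 + edges[prv]**2 - radii[prv]**2)
                               / (2 * radii[i] * edges[prv])))
        angles.append(alpha + beta)

    return {"area": area, "centroid": (cx, cy), "r0": r0, "radii": radii, "angles": angles}

def feret_diameter(vertices, i, centroid):
    """Feret diameter associated with vertex i: project all vertices onto the direction
    of the radial vector R_i and take the maximum distance from vertex i."""
    cx, cy = centroid
    ux, uy = vertices[i][0] - cx, vertices[i][1] - cy
    norm = math.hypot(ux, uy)
    xi, yi = vertices[i]
    return max(abs((x - xi) * ux + (y - yi) * uy) / norm for x, y in vertices)
```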
The following shape descriptors were defined to represent specific geometric features of the profile.

1) The elongation E was defined from the ratio of the minimum Feret's diameter to the Feret's diameter at right angles to it, where (d_F)_min is the minimum of the set of measured Feret's diameters and (d_F)_{π/2} is the Feret's diameter measured perpendicular to (d_F)_min. The descriptor is scaled so that, as defined, the elongation is zero for a circular profile.

2) The angular variability V_φ was defined to represent the variation in the angles φ_i between adjacent edges on the reduced profile; specifically, for φ_i expressed in radians, it is computed from the cubed deviations of the angles from π. The third power was used in order to emphasize the role of the smaller angles, which are considered to contribute the most to the "angularity" of a particle [15]. A many-sided polygon fitted to a circle gives a set of angles close to π with a corresponding angular variability close to zero.

3) The radial variability V_R was used to describe the departure of the profile from a circle. The definition is based on the deviations of the radial vectors R_i (Equation 5) from the equivalent-circle mean radius R_0 (Equation 4); since each R_i would be equal to R_0 for a circle, the radial variability of a circle is zero.

Calculated values of the various parameters are given in Table 1 for the schematic profile shown in Figure 1.
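Because the printed expressions defining E, V_φ and V_R did not survive extraction, the sketch below uses plausible reconstructions that satisfy the stated properties (each descriptor is zero for a circle, and V_φ cubes the angular deviations); they should be read as assumed forms, not the authors' verified definitions.

```python
import math

def elongation(feret_min: float, feret_perp: float) -> float:
    """E from the minimum Feret diameter and the Feret diameter at right angles to it."""
    return 1.0 - feret_min / feret_perp              # assumed form; 0 for a circle

def angular_variability(angles: list[float]) -> float:
    """V_phi from the interior angles (radians) of the reduced polygon."""
    n = len(angles)
    # assumed normalization: mean cubed relative deviation of each angle from pi
    return sum(((math.pi - phi) / math.pi) ** 3 for phi in angles) / n

def radial_variability(radii: list[float], r0: float) -> float:
    """V_R from the radial vectors R_i and the equivalent-circle radius R_0."""
    # assumed form: summed relative deviations; 0 when all R_i equal R_0
    return sum(abs(r - r0) / r0 for r in radii)
```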
Experimental Systems
Shape descriptors were measured for a range of particle sizes produced by crushing and grinding under a variety of conditions. Approximately 600 particles were analyzed from each population and the distribution of each descriptor was evaluated.

Table 1 Relative dimensions and calculated shape descriptors for the schematic particle profile shown in Figure 1.
Shape parameters: Elongation, E: 0.41; Radial Variability, V_R: 1.37; Angular Variability, V_φ: 0.47

The materials used were a high-volatile bituminous coal from a continuous mining operation in the Pittsburgh Coal Seam (Greene County, PA) and quartz from North Carolina. The following crushing and grinding devices were used to produce particles in different size ranges:
• Jaw Crusher – used to reduce feed materials about 1 cm in size to less than about 5 mm
• Hammer Mill (Holmes pulverizer) – used to crush coal particles in the 5 mm to 1 cm size range to less than 1 mm
• Disk Mill (Quaker City) – used to pulverize 1 mm feed particles
• Ring and Puck Mill (Bleuler Pulverizer) – used for fine grinding of disk mill product to micron sizes
• Planetary Ball Mill (Retsch) – also used for fine grinding of disk mill product
The jaw crusher, hammer mill and disk mill are essentially single-pass devices, although some limited retention of broken material may occur. The ring-and-puck and planetary mills are retention devices that can subject particles to repeated breakage. In the case of the latter two devices, different grinding times were used to vary the exposure of particles to the grinding environment.
Analysis of Shape for Different Materials
In order to evaluate particle shape effects for different materials, samples of the coal and quartz were prepared as 9.5 × 12.7 mm size fractions and fed to a jaw crusher with a gap setting of 6.4 mm. Products in different size classes (30×40, 50×70, 70×100, 140×200 and 200×270 US mesh) were analyzed using digitized images obtained by optical microscopy.
Examples of the distributions of angular variability for coal and quartz particles of different sizes are presented in Figures 2 and 3. In each case, the distributions are shown graphically as cumulative plots, and a visual representation is included to illustrate the significance of the variations. The particles in each column are typical of that range of angular variability, while the number of particles in each column represents the number fraction in that range. The results indicate that, for coal particles, the distributions generally become somewhat narrower and shift to lower values as size decreases, implying a size dependency of the shape. On the other hand, the distributions of angular variability for the quartz particles show no clear systematic variation with size. Very similar trends were observed for the other shape descriptors: elongation and radial variability. It is interesting to note that the distributions appear to conform quite closely to the log-normal distribution.
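As an illustration of how log-normality of a descriptor distribution can be checked, the sketch below fits a two-parameter log-normal to a synthetic sample of 600 values; the data are simulated placeholders, not measurements from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical angular-variability values for one size fraction (illustrative only)
v_phi = np.random.default_rng(0).lognormal(mean=-1.0, sigma=0.5, size=600)

# Fit a two-parameter log-normal (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(v_phi, floc=0)
median, gsd = scale, np.exp(shape)        # geometric median and geometric standard deviation
print(f"geometric median = {median:.3f}, geometric SD = {gsd:.2f}")

# Normality of the log-transformed values is one simple check of log-normality
stat, p = stats.shapiro(np.log(v_phi))
print(f"Shapiro-Wilk on log-values: p = {p:.2f} (a high p is consistent with log-normality)")
```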
Effect of Grinding Procedure
The particle shape is expected to be affected by the type of machine, by the specific breakage mechanism in a grinding device and by the time spent in the grinding environment. The effect of grinding device on shape was analyzed for coal particles using the Bleuler ring-and-puck pulverizer and the planetary ball mill. The grinding times were set so as to give a similar extent of grinding for each mill. The quantity of feed (−10 mesh) was 20 g for the Bleuler and 9.5 g for the planetary mill. The distributions of elongation are presented in Figure 4. The graph indicates that different grinding devices affect shape differently. Shape does not change substantially with grinding time in the high-energy (Bleuler) mill. On the other hand, the shape distributions of the particles produced in the planetary mill shifted towards lower values, indicating that particles become more rounded with increased grinding time. Similar trends have been observed for particle sizes in the range from 3 to 5 µm.
Examples of the distributions of angular variability for 70×100 US mesh coal obtained from a single pass through different crushing and grinding devices are shown in Figure 5. It appears that the jaw crusher, for which massive fracture is the dominant breakage mechanism, produces the most irregular particles. The Quaker City mill, for which most of the breakage is probably by massive fracture of coarse feed particles, also produces quite irregular particles. The Holmes pulverizer, which allows some retention of material and may include contributions from attrition-type mechanisms, and the Bleuler mill (30 second grinding time) generally produce particles of more regular shape. Similar trends have been observed for the 200×270 US mesh fractions. Changes in feed size to the devices did not lead to significant differences in product particle shape.
The effects of mill type on the average particle shape for different coal product sizes are illustrated in Figure 6. While the variations are generally small, the trends are consistent and similar for each of the three descriptors. Large particles produced by single breakage events tend to be quite irregular in shape, while particles subjected to repeated breakage or long exposure to the grinding environment are usually more rounded.
Conclusions
The results of this investigation indicate that, for the materials studied (quartz and coal), the shape of particles produced by size reduction is controlled by (a) the nature of the material being reduced, (b) the type of comminution device used and the predominant breakage mechanisms involved, and (c) the time spent in the grinding environment. In particular, it is concluded that the products of individual breakage events are typically angular and irregular in shape. Continued exposure to the grinding environment leads to rounding of the particles. For a given product size distribution, devices that employ high energy input yield products containing a high proportion of newly created particles. Such devices therefore favor the production of irregular particles. Grinding machines for which the energy input is relatively low, on the other hand, rely on repeated breakage for size reduction and tend to produce more rounded product particles. The differences in the size dependence of product-particle shape for coal and quartz suggest that the existence of structural features such as cleats in coal or cleavage planes in crystals may lead to more angular and irregular products of comminution. Furthermore, since the effect appears to be more pronounced for low-energy devices, it may be possible to achieve some degree of control over shape through appropriate equipment selection. However, considerable additional work would be needed to establish the basis for such control.
Acknowledgements
The work described in this paper was supported in part under the Mineral Institutes Program by Grant Nos. G1105142, G1115142 and G1125142 from the Bureau of Mines, US Department of the Interior, as part of the Generic Mineral Technology Center for Respirable Dust.

Nomenclature
α_i: Angle defined in Figure 1 [radians]
β_i: Angle defined in Figure 1 [radians]
φ_i: Angle between adjacent edges on a particle profile [radians]
| 2019-04-28T13:12:44.781Z | 2002-01-01T00:00:00.000 | {
"year": 2002,
"sha1": "26781d7a139f422edad6efeea5c242b95d0bf58d",
"oa_license": "CCBY",
"oa_url": "https://www.jstage.jst.go.jp/article/kona/20/0/20_2002021/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "fca1b682e60cb321aab8c51fb39c8e680427c747",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
118490184 | pes2o/s2orc | v3-fos-license | Non-classical correlations in a class of spin chains with long-range interactions and exactly solvable ground states
We introduce a class of spin models with long-range interactions---in the sense that they extend significantly beyond nearest neighbors---whose ground states can be constructed analytically and have a simple matrix product state representation. This enables the detailed study of ground state properties, such as correlation functions and entanglement, in the thermodynamic limit. The spin models presented here are closely related to lattice gases of strongly interacting polar molecules or Rydberg atoms which feature an excluded volume or blockade interaction. While entanglement is only present between spins that are separated by no more than a blockade length, we show that non-classical correlations can extend much further and analyze them through quantum discord. We furthermore identify a set of seemingly critical points where the ground state approaches a crystalline state with a filling fraction that is given by the inverse of the blockade length. We analyze the scaling properties in the vicinity of this parameter region and show that the correlation length possesses a non-trivial dependence on the blockade length.
Introduction. - Finding exact ground states of quantum many-body Hamiltonians with interactions that extend far beyond nearest neighbors is typically a very challenging task in condensed matter physics. This is because exactly solvable cases, e.g. the Haldane-Shastry model [1], are extremely rare, and the numerical treatment of long-range interactions is computationally demanding even with modern numerical tools such as the Density Matrix Renormalization Group [2,3]. From the experimental side, ensembles of cold atoms, ions and molecules offer a very promising route towards the controlled study of long-range interactions in quantum many-body systems. Very recent experimental approaches employ atoms in highly excited states, so-called Rydberg atoms, which exhibit strong dipolar interactions. A characteristic feature of these systems is the presence of the dipole blockade, which prevents the excitation of an atom in the vicinity of an already excited one [4,5]. In typical experimental setups one can achieve situations in which a single excitation blocks tens or hundreds of atoms and can in this sense be regarded as long-ranged. In the extreme case the blockade can extend over an entire cloud, which leads to the formation of so-called "super atoms", an entangled state of a single delocalized excitation [6]. Of particular interest is the more involved case in which the system size is larger than a blockade region. Clearly, the blockade leads to a strong anti-correlation of excitations at small distances. However, the nature of the correlations at longer distances is at present not fully understood. A number of recent investigations suggest that the emerging states behave essentially classically, in the sense that their properties can be understood by drawing an analogy to arrangements of classical hard objects [7-13]. To what extent there exist quantum correlations that go beyond the aforementioned "super atom" states is so far unclear.
FIG. 1: (a) The model of Eq. (1) is closely related to ensembles of interacting Rydberg atoms or polar molecules. They can be modelled by two-level systems whose excited state |e⟩ and ground state |g⟩ are coupled by a laser or microwave field of strength Ω and detuning ∆. (b) Spins in the up-state interact with the interaction potential V(r) [Eq. (2)], which can be regarded as an approximation of a power-law potential of the form V_α(r) = C_α/(a r)^α, where a is the lattice spacing. For further explanation see text.

One motivation of this paper is to conduct a largely analytical study to shed some light on these questions. A
second one is to introduce a class of long-range interacting one-dimensional spin models whose ground state in some regime can be solved exactly. The models, which are all of Ising type, manifestly display the blockade effect due to an excluded volume interaction that encompasses R spins. Moreover, they possess a potential tail which extends further than R and therefore mimic to a good extent the typical features present in strongly interacting Rydberg gases. In the exactly solvable regime their ground state has the form of a matrix product state, which permits the convenient calculation of the correlation properties in the thermodynamic limit. This unique property allows us to perform a scaling study of the correlation length, which is shown to exhibit a non-trivial power-law dependence on R. We find that the expectation values of classical observables can indeed be understood from analogous classical arrangements of hard objects, and that entanglement between two spins is only present when they are separated by at most one blockade length R. When separated further, despite the absence of entanglement, non-classical correlations remain in the form of quantum discord [14,15], which is regarded as a key resource for conducting quantum operations in the presence of noise, such as quantum illumination [16-18] and metrology with noisy probes [19,20]. The fact that in the systems studied here quantum correlations extend over distances larger than R also hints at the possibility of implementing non-classical operations between distant particles, mediated by Rydberg interactions, in experimental ensembles which are not fully blockaded.
Hamiltonian. - The class of Hamiltonians we are considering is that of one-dimensional lattice spin-1/2 models with transverse and longitudinal magnetic fields and an Ising-type interaction potential [Eq. (1)]. Here σ^x and σ^z are Pauli matrices and n = (I + σ^z)/2. The interaction energy V_{km} between spins positioned at sites k and m is given by the potential V(|k − m|) [Eq. (2)]. It features a hard-core interaction between up-spins up to a distance R. Beyond that the potential decays linearly until it reaches the distance 2R, from where onwards it is zero. With this potential it is energetically forbidden to dynamically access configurations in which the separation between any two up-spins is smaller than or equal to R sites.
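Equation (2) itself is not reproduced above, so the following is only a minimal sketch of the piecewise potential as described in the text: an effectively infinite hard core for r ≤ R, a linear decay to zero between R and 2R, and zero beyond 2R. The tail amplitude `tail_strength` and the use of a large finite number for the hard core are illustrative assumptions, not the paper's parametrization.

```python
import numpy as np

def blockade_potential(r, R, tail_strength):
    """Piecewise interaction of the type described in the text (lattice units).

    Hard core (a very large number standing in for infinity) for r <= R,
    a linearly decaying tail for R < r < 2R, and zero from 2R onwards.
    """
    r = np.asarray(r, dtype=float)
    hard_core = 1e12                         # stand-in for the excluded-volume "infinity"
    tail = tail_strength * (2 * R - r) / R   # linear decay, reaching zero at r = 2R
    return np.where(r <= R, hard_core, np.where(r < 2 * R, tail, 0.0))

def power_law(r, C_alpha, alpha):
    """Power-law potential V_alpha(r) = C_alpha / r**alpha with a = 1."""
    return C_alpha / np.asarray(r, dtype=float) ** alpha

# Example: compare the model potential with a van der Waals tail (alpha = 6)
r = np.arange(1, 25)
print(blockade_potential(r, R=8, tail_strength=1.0))
print(power_law(r, C_alpha=8.0**6, alpha=6))
```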
Such a potential can be linked to current studies of strongly interacting lattice gases of cold Rydberg atoms or polar molecules [8]. These systems can be described in terms of ensembles of interacting two-level systems with the ground state |g⟩ ≡ |↓⟩ and the excited state |e⟩ ≡ |↑⟩. The transition between the two levels is driven by a coherent laser or microwave field with detuning ∆ and Rabi frequency Ω, as shown in Fig. 1a. The single-spin terms in Eq. (1) correspond to the Hamiltonian of non-interacting driven two-level systems when setting h_x = Ω and h_z = ∆/2. The interaction between excited atomic or molecular states typically decays as a power law V_α(r) = C_α/(a r)^α, with the power α being 3 or 6 and a being the lattice spacing. Due to this interaction certain spin configurations become dynamically inaccessible. In particular, the simultaneous excitation of two particles is strongly suppressed if their interaction energy is larger than the value of the Rabi frequency. This defines a blockade length R_b ∼ (C_α/Ω)^{1/α} [21], which can be identified with the parameter R in V(r) [Eq. (2)]. Due to the power-law decay the interaction potentials extend beyond R_b; these tails can be thought of as being mimicked by the linearly decaying part of V(r), which thereby approximately connects V(r) to the power-law potentials. A comparison of these potentials is shown in Fig. 1b.
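As a quick numerical illustration of the blockade-length estimate R_b ∼ (C_α/Ω)^{1/α} quoted above, the values below are arbitrary illustrative numbers in lattice units (a = 1), not parameters taken from the paper.

```python
# Illustrative only: a van der Waals tail (alpha = 6) that is 1000 times the
# Rabi frequency at a separation of one lattice site.
Omega = 1.0        # Rabi frequency (sets the energy unit)
C_alpha = 1000.0   # interaction strength at r = 1, in units of Omega
alpha = 6

# Blockade length: the separation at which the interaction drops to Omega.
R_b = (C_alpha / Omega) ** (1 / alpha)
print(f"R_b ~ {R_b:.2f} lattice sites")   # ~ 3.16 sites for these numbers
```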
Exactly solvable parameter manifold. - As V(r) forbids the simultaneous excitation of spins at distances smaller than or equal to R, the physically relevant subspace of the Hilbert space is spanned by all states |ψ_ν⟩ which obey n_k n_{k+1} |ψ_ν⟩ = n_k n_{k+2} |ψ_ν⟩ = ... = n_k n_{k+R} |ψ_ν⟩ = 0. Within this physical sector it can be shown that the Hamiltonian (1) acquires a frustration-free or Rokhsar-Kivelson form [22], provided that the system parameters obey the condition (3). Specifically, on this exactly solvable manifold Eq. (1) can be rewritten as a sum of projected local terms involving the string operators P^L_k = P_{k−1} P_{k−2} ... P_{k−R} and P^R_k = P_{k+1} P_{k+2} ... P_{k+R}, which are products of the projector P_k = 1 − n_k onto the spin-down state of the k-th spin. This Hamiltonian is a generalization of the ones presented in Refs. [23-26]. It is composed of local positive-semidefinite Hamiltonians H_k, which in general do not commute. They all annihilate the ground state |z⟩ (see further below for discussion), i.e. H_k |z⟩ = 0, and hence the ground state energy on the parameter manifold (3) vanishes.

Ground state wave function and correlations. - The ground state wave function can be written explicitly [Eq. (5)] in terms of the spin vacuum |0⟩ = |↓↓...↓⟩: the state |z⟩ is a superposition of all classical spin configurations in which any two up-spins are separated by more than R sites.
The relative weight of each configuration is given by z^{2m}, with the parameter z = V_0/h_x (which we take to be positive in the following) and m being the number of up-spins contained in the configuration. The state space is equivalent to that of hard (R+1)-mers on a lattice, and hence the normalization constant Z(z, N) is given by the classical grand-canonical partition function of hard (R+1)-mers with fugacity z².
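The normalization Z(z, N) can be evaluated with a simple recursion: the last site of the chain is either down, or up with the preceding R sites forced to be down. This is a minimal sketch under an open-boundary assumption (the boundary conditions are not stated in the passage above).

```python
def hard_rod_partition_function(z, N, R):
    """Grand-canonical partition function of hard (R+1)-mers on an open chain.

    Each allowed configuration with m up-spins (pairwise separation > R)
    contributes z**(2*m), i.e. fugacity z**2 per rod, as stated in the text.
    Recursion: site N is either down (Z[N-1]) or up, which forces the
    preceding R sites to be down (z**2 * Z[N-R-1]).
    """
    Z = [1.0] * (N + 1)   # Z[0] = 1 for the empty chain
    for n in range(1, N + 1):
        Z[n] = Z[n - 1] + z**2 * (Z[n - R - 1] if n - R - 1 >= 0 else 1.0)
    return Z[N]

# Check for N = 3, R = 1: allowed configurations are 000, 100, 010, 001, 101,
# so Z = 1 + 3 z^2 + z^4.
print(hard_rod_partition_function(0.5, 3, 1))   # 1 + 3*0.25 + 0.0625 = 1.8125
```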
For given R the ground state in Eq. (5) assumes an exact matrix product state (MPS) form [27,28]. With the MPS representation it is a relatively simple task to characterize the properties of the ground state, e.g. its correlation functions and entanglement properties, in the thermodynamic limit. To this end we define the transfer operator E_I. The vectors ⟨l_α| and |r_α⟩ form the left and right eigenbasis of E_I, while ξ^{−1}_α = log|λ_1/λ_α| and φ_α = arg(λ_1/λ_α), where |λ_α| ≥ |λ_{α+1}| are the eigenvalues of E_I. In Fig. 2 we display the density-density correlation function ⟨n_0 n_r⟩ and the spatial coherence ⟨σ^+_0 σ^−_r⟩. The density-density correlation function exhibits decaying oscillations on a length scale that is approximately given by R. With increasing R, and keeping z fixed, the amplitude of the oscillations decreases. Keeping R constant and varying z, we observe that the oscillations become increasingly pronounced with growing z. In the limit z → ∞, configurations that contain the highest possible number of excitations (compatible with the blockade) carry almost all the weight, and the ground state approaches a superposition of R + 1 "crystalline" states |c_m⟩, each of which contains a regularly ordered arrangement of up-spins with nearest-neighbor distance R + 1 and the first up-spin located at site m: |z → ∞⟩_R = (1/√(R+1)) Σ_m |c_m⟩. The correlation length of |z → ∞⟩_R is infinite, and in fact the two largest eigenvalues of the transfer operator E_I have the same magnitude when z approaches infinity, which we from now on refer to as the critical limit. The spatial coherence ⟨σ^+_0 σ^−_r⟩ shows some qualitative analogies with the density-density correlation (see Fig. 2). It decays strongly with increasing spin separation, with an oscillatory pattern whose contrast is more and more suppressed as R increases. Opposite to the behaviour of the density-density correlation function, the spatial coherence is more strongly suppressed the larger z, and it vanishes at the critical point. These numerical results confirm the nature of the state |z → ∞⟩_R, for which one can show that the spatial coherence is identically zero for any pair of spins.
To conclude the discussion of the correlations we consider the situation in which we keep the blockade length constant while decreasing the lattice spacing a. In practice this can be achieved experimentally in Rydberg (molecular) gases by increasing the density of atoms (molecules). This scenario is interesting because recent studies of driven Rydberg gases [9-11] suggest that spatial correlations can become enhanced by increasing the atomic density. To study whether this also applies here, we define the dimensionful blockade length R̄ = aR and introduce a continuous set of coordinates x = ka, with the help of which we can express the ground state (5) in the continuum form of Eq. (6), written in terms of a step function. The normalization is Ξ(z̄, R̄, L̄) = Σ_{n=0}^{L̄/R̄} z̄^{2n} ξ(n, R̄, L̄), where ξ(n, R̄, L̄) is the microcanonical partition function of a Tonks gas [29], i.e. of n hard rods of length R̄ arranged in a system of length L̄ = aL. The correlation length of the above state is controlled by the parameter z̄, which diverges as a tends to zero. Hence, for fixed z the density-density correlations become longer ranged when a is decreased, i.e. when the density is increased. In order to define the state (6) with a finite R̄ we need to consider a diverging blockade length R. In this limit the bond dimension of |z⟩ becomes infinite, such that (6) stands as an example of a continuum limit of an MPS which is not expressible as a continuous matrix product state [30,31].
Entanglement and non-classicality. - Due to the structure of the ground state (5), the expectation value of classical observables, e.g. the density-density correlation function, is equivalent to that of classical hard (R+1)-mers with fugacity z² [24]. However, as we have shown before, the ground state also exhibits quantum coherence.
We are therefore interested in the question as to whether it also features non-classical correlations, such as entanglement.
To find an answer we start by considering two figures of merit of entanglement, namely the block entropy and the concurrence [32,33]. The first captures the collective properties of entanglement of a block of a certain number of contiguous spins, while the second quantifies the entanglement shared by a pair of spins. The block entropy, defined as S_r = −Tr ρ_r log_2 ρ_r, depends on the reduced density matrix ρ_r = (1/λ_1^r) Σ_{{i_j, i'_j}} Tr[ B̄ Π_{j=1}^r E_{i_j, i'_j} ] |i_1, i_2, ..., i_r⟩⟨i'_1, i'_2, ..., i'_r|, where B̄ = lim_{L→∞} (E_I/λ_1)^L and E_{i_j, i'_j} = X_{i_j} ⊗ X_{i'_j}.
The ground state (5) factorizes at the point z = 0 for any value of R, leading to zero entropy and signaling an overall classical state. In the limit z → ∞, on the other hand, the entropy approaches an asymptotic value S_r(R), which can be extracted from the state |z → ∞⟩_R. In [24] the single atom entropy was considered in the case R = 1, and it was found to decrease monotonically with increasing z. This is not true for general block sizes and blockade lengths R, as shown in Fig. 3a: if r ≤ R the entropy is a monotonically increasing function of z. However, as soon as the block size exceeds the blockade length, S_r exhibits a maximum in z, whose precise location depends on R and r. This qualitative change in behavior is due to the fact that within the blockade length the state space is restricted to configurations with at most one up-spin, whereas as soon as r > R the number of accessible configurations grows quickly, therefore allowing for an entropy larger than the asymptotic value S_r(R).
To quantify the entanglement between pairs of spins separated by a distance r we study the concurrence, which is defined as C(r) = max{2λ_1 − Tr B, 0}, where λ_1 is the largest eigenvalue of the matrix B = √(√ρ(r) ρ̃(r) √ρ(r)). Here ρ(r) = ρ_{k,k+r} is the reduced density matrix of the two spins, and ρ̃(r) is this matrix expressed in the Bell basis [34]. The concurrence is plotted in Fig. 3b (cuts for z = 0.3 and z = 2 at fixed R = 20) as well as in the bottom panel of Fig. 3c. Clearly there is no entanglement shared by two spins separated by a distance r > R, since there C(r) drops sharply to zero. Hence the blockade length R is equal to the range of entanglement. Linking back to the systems of interacting Rydberg atoms, this shows that entanglement indeed only extends over the size of a "super atom".
Entanglement, though, does not represent all possible quantum correlations between two spins. These are instead captured by the quantum discord, which we study in the following. We use the local quantum uncertainty [20] as a measure of discord, defined as D(r) = 1 − Λ_max, where Λ_max is the largest eigenvalue of the 3 × 3 matrix with entries W_ij = Tr[ √ρ(r) (σ_i ⊗ I) √ρ(r) (σ_j ⊗ I) ]. The discord is shown in Fig. 3b (cuts for z = 0.3 and z = 2 at fixed R = 20) and in the top panel of Fig. 3c. Surprisingly, quantum correlations in the form of discord extend much further than entanglement. Furthermore, it is interesting to note that the quantum discord shows an actual oscillatory behavior as a function of r [Fig. 3b].

Critical limit. - As discussed previously, the limit z → ∞ can be thought of as a critical limit where in fact all "crystalline configurations" |c_m⟩ are valid ground states. Translational symmetry is broken, as the states |c_m⟩ are only invariant under translations by R + 1 sites. The formation and melting of such crystalline states realized in a one-dimensional gas of interacting Rydberg atoms has been investigated in Ref. [35], where the authors identified a "devil's staircase" in the phase diagram formed by "crystals" with different filling fractions. Linking to this study, we can now actually understand how within our model the correlation length ξ diverges as z approaches infinity, i.e. when the crystal is formed. Interestingly, this depends strongly on the value of R. For all blockade lengths ξ diverges with a characteristic power law, ξ_R ∼ z^{ν_R} as z → ∞, but with an R-dependent power ν_R. In the cases R = 1, 2, 3 the characteristic polynomial of the transfer matrix is of degree less than five, and we are able to extract this exponent analytically, finding ν_1 = 1, ν_2 = 2/3, and ν_3 = 1/2. Numerical studies suggest that this power decreases monotonically with increasing R.
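Since classical observables of the ground state coincide with those of hard (R+1)-mers at fugacity z² (as noted earlier in the text), the divergence of the density-density correlation length can also be probed with the (R+1)-dimensional transfer matrix of that classical gas. The construction below is an assumption based on this equivalence, not the paper's quantum transfer operator E_I; numerically it reproduces the exponents quoted above.

```python
import numpy as np

def correlation_length(z, R):
    """Correlation length of the classical hard-(R+1)-mer gas at fugacity z**2.

    Transfer-matrix state k = number of down-spins since the last up-spin,
    capped at R; an up-spin may only be placed from the state k = R.
    """
    d = R + 1
    T = np.zeros((d, d))
    for k in range(R):
        T[k + 1, k] = 1.0      # place a down-spin, advance the counter
    T[R, R] = 1.0              # down-spin once the blockade is already satisfied
    T[0, R] = z**2             # place an up-spin (fugacity z^2), reset the counter
    lam = np.linalg.eigvals(T)
    lam = lam[np.argsort(-np.abs(lam))]
    return 1.0 / np.log(np.abs(lam[0] / lam[1]))

# Estimate nu_R from xi ~ z**nu_R deep in the large-z regime.
for R in (1, 2, 3):
    z1, z2 = 1e3, 1e4
    nu = np.log(correlation_length(z2, R) / correlation_length(z1, R)) / np.log(z2 / z1)
    print(R, round(nu, 2))   # approx. 1.0, 0.67, 0.5, matching nu_1, nu_2, nu_3 above
```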
Summary and outlook. - We introduced and studied the exact ground states of a class of Hamiltonians with a "blockade interaction". We showed, among other results, that entanglement is only present within the blockaded region, while non-classical correlations extend significantly further. One might speculate that this could have practical implications for the use of chains of Rydberg atoms or polar molecules as physical platforms for quantum information processing, communication or metrology [36]. | 2014-03-12T10:02:02.000Z | 2014-03-12T00:00:00.000 | {
"year": 2014,
"sha1": "3eab53468bc003600ed14476b159f1142f5a6712",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/16/9/093053",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "3eab53468bc003600ed14476b159f1142f5a6712",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |