id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
218891518 | pes2o/s2orc | v3-fos-license | Rapid divergence of the copulation proteins in the Drosophila dunni group is associated with hybrid post-mating-prezygotic incompatibilities
Proteins involved in post-copulatory interactions between males and females are among the fastest evolving genes in many species, and this has been attributed to reproductive conflict. Likely as a result, these proteins are frequently involved in cases of post-mating-prezygotic isolation between species. The Drosophila dunni subgroup consists of a dozen recently diverged species found across the Caribbean islands with varying levels of hybrid incompatibility. We sought to examine how post-mating-prezygotic factors are involved in isolation among members of this species group. We performed experimental crosses between species in the dunni group and find evidence of hybrid inviability. We also find an insemination reaction-like response that prevents egg laying and leads to reduced female survival post-mating. To identify the genes that may be involved in these incompatibilities, we sequenced and assembled the genomes of four species in the dunni subgroup and looked for signals of rapid evolution between species. Despite low levels of divergence, we found evidence of rapid evolution and divergence of some reproductive proteins, specifically the seminal fluid proteins. This suggests post-mating-prezygotic isolation acts as a barrier to gene flow between even the most closely related species in this group, with seminal fluid proteins as a possible culprit.
Introduction
Numerous groups of recently diverged species have been used to study speciation across multicellular taxa. Within the dunni group, levels of hybrid incompatibility vary, with some crosses producing only female offspring, or sterile male offspring (HEED 1962). Here we perform experimental crosses in the dunni group and find that in some crosses, heterospecific matings reduce female survival compared to conspecific matings, potentially caused by an insemination reaction-like effect (PATTERSON 1947). Using a combination of long-read and short-read sequencing, we assembled the genomes of four species in the dunni group to identify proteins driving this incompatibility. We find these genomes are of similar quality and composition to other high-quality Drosophila genomes.

We set the reference level as the conspecific cross (e.g. D. arawakana ♂ x D. arawakana ♀) and looked for significant differences from this in the interaction terms, to determine whether unmated females (e.g. D. arawakana ♀ not mated) or heterospecifically crossed females (e.g. D. arawakana ♂ x D. nigrodunni ♀) show significant differences from the conspecific cross. To consider the effect of Wolbachia infection on these crosses, we repeated these initial crosses alongside the same crosses with Wolbachia-cured flies (cured as described above), and a Cox's hazard ratio was used to determine the effect of Wolbachia on survival and to test for differences in survival between sets of crosses after accounting for Wolbachia (a sketch of this kind of model fit is given at the end of this section).

Post-mating dissection of the female reproductive tract

We collected virgin males and females for tetracycline-cured D. arawakana and D. nigrodunni as described above and aged them 2-3 days. We then established conspecific and heterospecific experimental crosses for 6 replicates of 10 males and 10 females at 10 AM central time, as well as virgin control females for 6 replicates of 10 females. Following 24 hours of cohabitation, for 3 replicates of each cross, we separated the females and dissected the reproductive tract. Based on previous work describing the insemination reaction (PATTERSON 1947; GRANT 1983; MARKOW AND ANKNEY 1988), we scored the reproductive tract of each female, identifying whether the female had mated (by the presence of sperm), whether the reproductive tract appeared to be swollen (relative to the unmated virgin females), and whether the reproductive tract was destroyed or damaged (alongside a swollen tract, if possible to tell). We repeated this scoring for the remaining 3 replicates of each cross 24 hours later (48 hours total). We then compared conspecific and heterospecific crosses for rates of mating and rates of insemination reaction occurrence.
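The survival analysis described above was run in R; as a hedged illustration only, an equivalent Cox proportional-hazards fit can be sketched in Python (assuming the lifelines package; the toy records and column names below are hypothetical, not the study's data):

```python
# Minimal sketch of the survival comparison described above; the data frame
# is a hypothetical stand-in for per-female survival records.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "day":   [18, 25, 30, 12, 9, 28, 22, 14, 30, 26],   # day of death/censoring
    "dead":  [1, 1, 0, 1, 1, 1, 1, 1, 0, 1],            # 1 = death observed
    "cross": ["conspecific", "conspecific", "conspecific", "heterospecific",
              "heterospecific", "heterospecific", "conspecific",
              "heterospecific", "virgin", "virgin"],
    "wolbachia": [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],        # infection status
})

# Dummy-code the cross type, dropping the conspecific column so that the
# conspecific cross is the reference level, as in the text.
X = pd.get_dummies(df, columns=["cross"]).drop(columns=["cross_conspecific"])
X = X.astype(float)

cph = CoxPHFitter()
cph.fit(X, duration_col="day", event_col="dead")
cph.print_summary()  # hazard ratios: heterospecific/virgin vs conspecific
```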
Genome sequencing, assembly and annotation
We extracted DNA following a previously described protocol. We also used the short-read data generated for each species to make a second map of reference-genome repetitive regions using RepeatMasker. For both sets of repeat-content assemblies, we identified which TE families were shared between species and which were unique to a species using blastn (e-value < 10e-5, hsps = 1, alignments = 1). We then identified what proportion of the genome each TE family constituted across species.
Placing the dunni group in the Drosophila phylogeny

To find the consensus species tree despite the differing evolutionary histories of different genes (MENDES AND HAHN 2016), we randomly sampled 100 genes conserved across Drosophila and humans and extracted these from our four focal species, as well as from several other Drosophila species.

We then took outlier genes (e.g. genes above the 97.5th percentile in each category) and looked for enrichments in gene ontology categories compared to non-outlier genes using GOrilla (EDEN et al. 2009). For GO categories of interest, such as those enriched for duplications or for high levels of dN/dS, we compared the dN/dS of genes in these categories to the nearby genomic background. For each gene we extracted nearby genes (within 100 kbp up- or downstream on the same chromosome) of similar divergence levels on each branch (within 0.01 dS), and found the difference in dN/dS between the median of the background genes and the focal gene. We then used a Wilcoxon rank-sum test to identify GO categories on each branch with significantly higher (or lower) dN/dS than the background (a sketch of this comparison is given at the end of this subsection).

Using the annotations of all species and D. innubila, we identified genes with more than one copy in one species, relative to all other species. We confirmed this by estimating copy numbers of genes in each species using short-read information and dudeML (following the tutorial pipeline for N = 1), with the short-read information mapped to the genome of the sister species (HILL AND UNCKLESS 2019). We then used GOrilla (EDEN et al. 2009) to identify gene ontology categories that are enriched for duplicates on specific branches, which we confirmed using PANTHER (THOMAS et al. 2003).
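A minimal sketch of the local-background dN/dS comparison follows (the table layout is an assumption; the published analysis used different tooling):

```python
# For each focal gene, gather neighbours within 100 kbp and 0.01 dS on the
# same chromosome, then ask (Wilcoxon rank-sum) whether focal dN/dS values
# are shifted relative to the local background medians.
import pandas as pd
from scipy.stats import ranksums

def go_category_excess(genes: pd.DataFrame, focal_ids) -> float:
    """genes: indexed by gene id, with columns chrom, start, dS, dnds."""
    focal_vals, background_meds = [], []
    for gid in focal_ids:
        g = genes.loc[gid]
        nearby = genes[
            (genes["chrom"] == g["chrom"])
            & ((genes["start"] - g["start"]).abs() <= 100_000)
            & ((genes["dS"] - g["dS"]).abs() <= 0.01)
            & (genes.index != gid)
        ]
        if len(nearby):
            focal_vals.append(g["dnds"])
            background_meds.append(nearby["dnds"].median())
    return ranksums(focal_vals, background_meds).pvalue
```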
Statistics
We used R for all statistics in this analysis (R-CORE-TEAM 2013), and ggplot2 for data visualization and figure production (WICKHAM 2009).
Results
The Drosophila dunni group shows varying levels of hybrid compatibility

The Drosophila dunni group is a species group endemic to islands in the Caribbean, with each island harboring its own set of species. Heterospecific crosses differed significantly from conspecific crosses (test statistic = -2.948, p-value = 0.00319). In several mated females, when compared to virgin females, we find a swelling of the reproductive tract consistent with the insemination reaction (Figure 3C). Exclusively in several heterospecifically crossed females, we also saw damaged and destroyed reproductive tracts (Figure 3D).
Genes involved in copulation and immune defense have high rates of divergence between species
We reasoned that these incompatibilities between species could be caused by a divergence in copulation proteins. Previous work has suggested that females may be susceptible to bad reactions following hybrid matings because they have no protection from the other species' accessory gland proteins (MARKOW AND ANKNEY 1988; KNOWLES AND MARKOW 2001); specifically, that there is an arms race between the sexes to block/unblock the female reproductive tract, and that females of other species have not evolved to suppress these reactions. Based on this, we sought to examine the levels of divergence and identify rapidly evolving genes between species. We sequenced, assembled and annotated the genomes of each species involved (see Materials and Methods), producing two high-quality genomes with high synteny to each other and to D. innubila. Several categories of reproductive genes are significant outliers for divergence (Supplementary Table 6, p-value < 0.05 after multiple-testing correction). This is consistent with rapid evolution occurring in genes involved in the reproductive conflict between the sexes (Figure 4) (HAERTY et al. 2007). While not significant outliers, we also find that immune recognition proteins and the antiviral RNAi and piRNA pathways are rapidly evolving in some species, consistent with arms races between the species and their parasites (Supplementary Table 6).

Here, we assessed the extent of hybrid incompatibilities between species of the dunni subgroup, focusing on post-mating-prezygotic incompatibilities. We then sequenced and assembled the species' genomes to identify highly divergent and rapidly evolving genes. Between D. nigrodunni and D. arawakana, we find elevated divergence of several immune system pathways, as well as divergence in genes involved in copulation. This divergence fits with the hybrid male inviability between these two species, as well as the reduced survival of females following insemination by a heterospecific male.

The functional annotation of the more diverged genes may also provide us with clues as to how these species are diverging. As we find premating-behavior proteins are divergent between D. arawakana and D. nigrodunni, this may result in a divergence in premating behavior, and hence the reduced rate of hybrid matings scored (Figure 3). We also see no difference in the proportion of hybrid matings after 24 hours and 48 hours, suggesting that in these cases, if a female has rejected all males, she may not change her choice later. One possibility is that this species requires a well-adapted stress response pathway, given its negative reaction to heterospecific matings (Figures 1-3).

Several of the functional gene categories identified in this study as highly divergent between species are also promising regions for future study, particularly when focusing on immune evolution (Figures 4 and 5).

The repetitive content also appears to be diverging rapidly across this species complex (Supplementary Figure 5). This is commonly seen between species, given the elevated mutation rate associated with repetitive regions.

Overall, our findings suggest that the rapid divergence of reproductive genes has led to incompatibilities between species in the dunni group, including inviable male offspring and the insemination reaction associated with reduced female survival. We also find multiple areas for further investigation in the D. dunni group, whether in immune evolution or in continuing to investigate speciation in this species group, suggesting promise for future research on this group.
| 2020-05-27T13:20:06.822Z | 2020-05-22T00:00:00.000 | {
"year": 2020,
"sha1": "208c656ea3fb08ec43740e75609fa6c5fbcb9366",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/g3journal/article-pdf/11/4/jkab050/37083018/jkab050.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "208c656ea3fb08ec43740e75609fa6c5fbcb9366",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
182442281 | pes2o/s2orc | v3-fos-license | Genome-wide mapping and profiling of γH2AX binding hotspots in response to different replication stress inducers
Background Replication stress (RS) gives rise to DNA damage that threatens genome stability. RS can originate from different sources that stall replication by diverse mechanisms. However, the mechanism underlying how different types of RS contribute to genome instability is unclear, in part due to the poor understanding of the distribution and characteristics of damage sites induced by different RS mechanisms. Results We use ChIP-seq to map γH2AX binding sites genome-wide caused by aphidicolin (APH), hydroxyurea (HU), and methyl methanesulfonate (MMS) treatments in human lymphocyte cells. Mapping of γH2AX ChIP-seq reveals that APH, HU, and MMS treatments induce non-random γH2AX chromatin binding at discrete regions, suggesting that there are γH2AX binding hotspots in the genome. Characterization of the distribution and sequence/epigenetic features of γH2AX binding sites reveals that the three treatments induce γH2AX binding at largely non-overlapping regions, suggesting that RS may cause damage at specific genomic loci in a manner dependent on the fork stalling mechanism. Nonetheless, γH2AX binding sites induced by the three treatments share common features including compact chromatin, coinciding with larger-than-average genes, and depletion of CpG islands and transcription start sites. Moreover, we observe significant enrichment of SINEs in γH2AX sites in all treatments, indicating that SINEs may be a common barrier for replication polymerases. Conclusions Our results identify the location and common features of genome instability hotspots induced by different types of RS, and help in deciphering the mechanisms underlying RS-induced genetic diseases and carcinogenesis.
Background
Faithful and complete DNA replication is vital for cell survival and genetic transmission. Replication fork progression is constantly challenged and may be stalled by environmental insults and endogenous stress arising from normal cellular metabolism, leading to replication stress (RS) [1-3]. These challenges can arise from various genotoxic mechanisms, such as depletion of nucleotide pools, deficiency of the replication complex, conflicts between replication and transcription, R-loop formation, DNA damage, and others (reviewed in [3]). Replisomes need to overcome these obstacles in order to complete DNA replication in a timely and accurate manner.
Fork stalling elicits the activation of the ATM- and Rad3-related (ATR) kinase, a member of the phosphoinositide 3-kinase (PI3K)-like protein kinase family [4]. ATR activation arrests the cell cycle, promotes fork stability to prevent fork collapse, and regulates DNA repair pathways to rescue stalled forks. One of the critical downstream targets of ATR is histone H2AX [5]. Phosphorylation of H2AX at the serine residue 139 (γH2AX) by ATR is an early event in response to fork stalling [6]. Once phosphorylated, γH2AX marks stalled forks prior to DSB formation [6], presumably setting up a favorable chromatin environment that facilitates the recruitment of fork repair proteins to stalled sites. γH2AX also accumulates at break sites after fork collapse [6-8], consistent with its function in double-strand break (DSB) repair. The importance of γH2AX in fork rescue is supported by a yeast study demonstrating that a mutant of the HTA gene that abrogates γH2A (the γH2AX ortholog in yeast) confers hypersensitivity to camptothecin, a potent inhibitor of topoisomerase I that causes collisions between the topoisomerase-DNA complex and replication forks and therefore stalls replication [9]. The same mutant shows only mild sensitivity to ionizing radiation, suggesting that γH2AX is particularly important in rescuing stalled replication.
Fragile sites (FSs) refer to chromosomal loci that are prone to breakage upon RS. They are hotspots for genome instabilities including sister chromatid exchanges, deletions, translocations, and intra-chromosomal gene amplifications [10-15], and their instability is frequently involved in early stages of tumorigenesis [16,17]. Due to the importance of FSs in genome stability and carcinogenesis, several methods have been developed to analyze the genome-wide distribution and characteristics of FSs. While early studies used the conventional cytogenetic method (G-banding) to map FSs to regions that span megabases in human chromosomes [14,17,18], recent sequencing technologies have allowed for fine mapping of FSs sensitive to aphidicolin (APH), hydroxyurea (HU), or ATR inhibition in various human cell lines and murine B lymphocytes [7,19-21]. An approach using direct in situ break labeling, enrichment on streptavidin and next-generation sequencing (BLESS) has identified > 2000 APH-sensitive regions (ASRs) in HeLa cells and revealed that ASRs are significantly enriched in alpha-type satellite repeats in pericentromeric and centromeric regions, as well as in large transcribed gene regions [19]. Another distinct group of FSs, known as early replication fragile sites (ERFSs), has been identified in murine B lymphocytes using RPA and γH2AX ChIP-seq. ERFSs are induced predominantly in early replicating and actively transcribed gene clusters. ERFSs contain high densities of replication origins, have high GC content and an open chromatin configuration, and are also gene rich [7,22]. Nucleotide-resolution analysis of chromosome damage sites has been established with END-seq and found that long (> 20 bp) poly(dA:dT) tracts are prone to HU-induced fork collapse in mouse splenic B cells [21]. Finally, RPA ChIP-seq has identified over 500 high-resolution ATR-dependent fork collapse sites in mouse embryonic fibroblast cells, which are enriched in microsatellite repeats, hairpin-forming inverted retrotransposable elements and quasi-palindromic AT-rich minisatellite repeats, suggesting that structure-forming repeats are also DNA sequences prone to produce fork collapse [20]. However, it is worth noting that FS breakage displays cell- and tissue-type specificity [23,24], and thus it is difficult to directly compare FS locations and features measured in data derived from various cell types from different organisms.
In this study, we hypothesized that different fork stalling mechanisms may stall forks at different loci and induce or exacerbate fragility at different sequences in the genome. This, in turn, would affect the regulation and expression of different sets of genes residing within or near the fragile loci in a manner dependent on the fork stalling mechanism. For instance, fork stalling can be induced by collision between replication and transcription in large genes, by R-loop formation, or by other replication stressors. Due to the cell type and tissue specificity of FS breakage [23,24], this hypothesis needs to be tested in a cell type-specific manner. Here, we used ChIP-seq to map and characterize γH2AX binding sites induced by three distinct fork stalling mechanisms in one human lymphocyte cell line. The lymphocyte cell line was chosen because historically FSs have been primarily studied in cultured lymphocytes and lymphoblastoid cells. Although γH2AX spreads over large regions and its binding sites may not reflect the exact location of broken sites, mapping and characterizing γH2AX binding may still reveal important information on fragile genomic loci. Three commonly used fork stalling agents were used, namely APH, HU, and methyl methanesulfonate (MMS). APH is a DNA polymerase α inhibitor, HU is a ribonucleotide reductase inhibitor that depletes the nucleotide pool, and MMS is thought to stall fork progression by binding to and methylating DNA. Our γH2AX ChIP-seq mapping reveals that APH, HU, and MMS treatments induce non-random γH2AX chromatin binding at discrete regions, suggesting that there are γH2AX binding hotspots in the genome. The three treatments induce γH2AX binding at largely non-overlapping regions, supporting the idea that different fork stalling mechanisms likely cause fork stalling at different genomic loci. We also find that γH2AX binding hotspots are depleted from CpG islands (CGIs) and transcription start sites (TSSs), but are enriched at compact chromatin regions. In addition, significant enrichment of SINEs is found in γH2AX sites in all treatments, indicating that SINEs may be a common barrier for replication polymerases. Our results provide novel insights into γH2AX binding specificity in the human genome in response to different DNA replication stressors, which will help in deciphering the mechanisms underlying carcinogenesis and RS-induced genetic diseases.
Results
Mapping of γH2AX binding sites induced by APH, HU, and MMS with ChIP-seq

Prior to ChIP-seq, we tested the specificity of the γH2AX antibody to ensure high specificity of ChIP (Additional file 1: Figure S1). Exponentially growing cells were treated with APH (0.3 μM), HU (2 mM), and MMS (200 μM) for 24 h to induce RS using conditions widely reported in the literature [25-30]. Following treatment, cells were crosslinked, lysed, and DNA was sonicated to 100-500 bp. Immunoprecipitation was then performed to pull down γH2AX-bound DNA, and ChIP DNA was used for library construction and Illumina sequencing (Fig. 1a). To ensure reproducibility, two independent biological replicates were carried out, and peak calling and alignment were performed for each replicate. Since γH2AX binding to DNA is known to spread into large regions, broad peaks were called using the MACS2 broad peak calling program [31]. Signals from ChIP samples were normalized to pre-ChIP input signals, and ChIP-seq peaks with p-values < 10^-3 were selected for further analysis. Spearman correlation coefficients between untreated and treated samples were computed. The coefficient between replicates in each treatment was ≥ 0.9 (Fig. 1b and Additional file 1: Figure S2), suggesting high reproducibility of γH2AX binding and high confidence in the ChIP-seq data. Snapshots of ChIP-seq peaks in each treatment are shown in Fig. 1c and Additional file 1: Figure S3. We observed that ChIP-seq peaks in both untreated and treated samples showed a non-random distribution pattern (Fig. 1c and Additional file 1: Figure S3), suggesting that these γH2AX binding sites may represent genome instability hotspots sensitive to RS. About 4700 γH2AX binding sites were identified in the untreated sample, indicating a high level of spontaneous DNA damage in this cell line. Compared to other cell lines, GM07027 displayed a high level of endogenous γH2AX expression (Additional file 1: Figure S4A). We identified ~18,000, ~80,000, and ~12,000 γH2AX binding sites in APH, HU, and MMS treated samples, respectively (Fig. 1d). We observed little overlap between the APH (6.4%) and MMS (9.3%) data sets. The HU-treated sample contained regions shared with all other stressors, but this overlap only accounted for a small portion of the HU data set due to the large number of peaks (6.2% overlap with the APH treatment and 4% overlap with the MMS treatment) (Fig. 1d).
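The replicate-reproducibility check (Spearman correlation of binned tag counts) can be sketched as follows; the bin size matches the 1000 bp bins mentioned in the Methods, while the toy read positions are placeholders (random reads will give a correlation near zero, whereas the real replicates gave ≥ 0.9):

```python
# Sketch: Spearman correlation of per-bin read counts between two replicates.
import numpy as np
from scipy.stats import spearmanr

def binned_counts(read_starts, chrom_len, bin_size=1000):
    """Count reads per fixed-width genomic bin from read start positions."""
    bins = np.arange(0, chrom_len + bin_size, bin_size)
    counts, _ = np.histogram(read_starts, bins=bins)
    return counts

rng = np.random.default_rng(1)
rep1 = binned_counts(rng.integers(0, 1_000_000, 50_000), 1_000_000)
rep2 = binned_counts(rng.integers(0, 1_000_000, 50_000), 1_000_000)
rho, p = spearmanr(rep1, rep2)
print(f"Spearman rho = {rho:.2f} (p = {p:.2g})")
```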
Because HU induced four to seven times as many significant peaks as the other treatments, we then checked whether such a high peak number was due to the high level of damage induced by HU. All treatments induced CHK1 phosphorylation at S317, indicating that ATR was activated in response to fork stalling (Additional file 1: Figure S4C). Interestingly, MMS induced strong γH2AX and CHK1 phosphorylation comparable to HU treatment (Additional file 1: Figure S4C) but showed the fewest γH2AX peaks among all three treatments. In contrast, APH induced the lowest level of γH2AX and CHK1 phosphorylation but had more γH2AX binding sites than MMS (Additional file 1: Figure S4C). These results suggest that the heterogeneity of ChIP-seq peaks produced by the three drug treatments was unlikely to be caused by a dose effect of the stressors.
Since the drug treatments could induce cell death, and dead cells in suspension culture were difficult to remove prior to crosslinking, it was possible that damaged DNA in apoptotic cells might have given rise to the γH2AX ChIP-seq peaks measured here. However, two lines of evidence strongly argue against a significant contribution of DNA damage from apoptotic cells. First, if the γH2AX ChIP signals from the drug treatments were mainly from apoptotic cells, a random distribution of ChIP-seq peaks in the genome would be expected, because there are no preferred breakage sites when genomic DNA is degraded upon apoptosis. Moreover, a large overlap in ChIP-seq peaks would be expected among all three treatments. However, none of the treatments showed random γH2AX binding, and there is little overlap of γH2AX ChIP-seq peaks among the three treatments (Fig. 1c, d). In addition, after performing annexin V staining to detect apoptotic cells, we found that although HU increased cell apoptosis, MMS induced a comparable level of apoptosis (Additional file 1: Figure S4E). If γH2AX ChIP signals were mainly from dead cells, similar numbers of γH2AX peaks in the HU and MMS samples would be predicted. In striking contrast to this prediction, MMS treatment produced fewer than one-seventh as many γH2AX peaks as the HU-treated sample. Taken together, it is unlikely that the γH2AX ChIP signals detected here were mainly from apoptotic cells.
To further understand the nature of the large discrepancy in ChIP-seq peak numbers among the three RS inducers, we then performed cell cycle analysis and a BrdU incorporation assay to assess the impact of the drug treatments on replication. Treated and untreated cells were pulse-labeled with BrdU for 30 min prior to collection, followed by flow cytometry analysis as described in Materials and Methods. In MMS-treated cells, both the number of replicating cells and the BrdU intensity were similar to the untreated sample (Additional file 1: Figure S4F), suggesting a lower level of replication stress, which corresponded to the low number of observed γH2AX ChIP-seq peaks. In contrast, HU treatment dramatically hindered BrdU incorporation (Additional file 1: Figure S4F). This was expected because at the end of the 24 h HU treatment, the dNTP pool was expected to be largely depleted by HU and therefore BrdU incorporation into DNA should be minimal due to the lack of DNA synthesis substrates. This result suggests that HU treatment perhaps stalled the majority of replication forks, thus explaining the highest number of γH2AX ChIP peaks in the HU sample. While APH enriched the number of replicating (BrdU+) cells, the majority of BrdU+ cells showed lower BrdU intensity than untreated cells, indicative of a slowdown in replication fork movement (Additional file 1: Figure S4F). This was consistent with the higher number of γH2AX peaks in the APH-treated sample than MMS, despite the lower level of damage and CHK1 phosphorylation (Additional file 1: Figure S4C). It is also possible that APH only stalled a subset of forks under the condition used in this study (0.3 μM). Increasing APH concentrations severely interfered with cell cycle progression and arrested the cell cycle at the late G1/early S boundary (Additional file 1: Figure S5), and were therefore not used to study RS. Together, our results suggest that the number of γH2AX ChIP-seq peaks is largely consistent with the level of RS caused by these stressors. Furthermore, the little overlap of γH2AX binding sites between treatments suggests that γH2AX binds at specific genomic regions in a manner likely dependent on the fork stalling mechanism.
γH2AX binding is enriched in large genes and regions encoding long transcripts

Our results showed that γH2AX binding was enriched at genes longer than the genomic median, regardless of the stressor (Fig. 2a and Additional file 1: Figure S6; Kruskal-Wallis with post hoc paired Wilcoxon signed-rank test, p < 2 × 10^-16). This result supports the idea that large genes/transcripts have the potential to stall replication under RS induced by different treatments, presumably because the replication machinery is more likely to collide with RNA polymerases transcribing long genes [10]. Interestingly, while HU induced γH2AX enrichment at genes longer than the genomic average, this enrichment was found at shorter genes when compared to the APH or MMS treated samples (Fig. 2a, Additional file 1: Figure S6), indicating that HU treatment may sensitize shorter genes to breakage.
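As a hedged illustration of the gene-length statistics used here (the paper's post hoc test was a paired Wilcoxon signed-rank; the sketch below uses an unpaired rank-sum on toy, lognormally distributed lengths for simplicity):

```python
# Kruskal-Wallis across conditions, then pairwise tests with a Holm-style
# step-down correction; the length arrays are hypothetical placeholders.
import numpy as np
from scipy.stats import kruskal, ranksums

rng = np.random.default_rng(0)
lengths = {cond: rng.lognormal(mean, 1.0, 500)
           for cond, mean in [("untreated", 10.0), ("APH", 10.4),
                              ("HU", 10.2), ("MMS", 10.4)]}

H, p = kruskal(*lengths.values())
print(f"Kruskal-Wallis: H = {H:.1f}, p = {p:.2e}")

pairs = [(a, b) for i, a in enumerate(lengths) for b in list(lengths)[i + 1:]]
results = sorted((ranksums(lengths[a], lengths[b]).pvalue, a, b)
                 for a, b in pairs)
for rank, (pv, a, b) in enumerate(results):
    adj = min(pv * (len(results) - rank), 1.0)   # Holm step-down multiplier
    print(f"{a} vs {b}: adjusted p = {adj:.3g}")
```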
γH2AX is enriched at CFSs under exogenous genotoxin treatment

Common fragile sites (CFSs) are specific chromosomal regions that are prone to break under APH-induced RS.
They are present in all individuals and are characterized as gene-poor, heterochromatic, late-replicating regions containing non-B-form DNA structures like hairpins [15, 32-35]. CFSs are not precisely mapped breaks, but rather megabase regions defined by G-banding using APH-treated lymphocyte metaphase spreads [14]. Using permutation analysis, we compared γH2AX enrichment at consensus CFS G-band positions (Additional file 2: Table S1). We found that CFSs accumulated γH2AX at a low level in the absence of RS, and that breakage was further enhanced by exogenous genotoxic stress (Fig. 2b and c). While CFSs were originally described under APH-treated conditions, we found that both HU and MMS could induce significant γH2AX enrichment when compared with untreated samples (Fig. 2c). This result confirms previous findings that RS may preferentially cause damage at regions containing CFSs, and that these regions may be sensitive to a wide variety of stressors.
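The permutation logic behind such comparisons (done here with regioneR in R) can be sketched in Python; the chromosome length, CFS intervals, and peaks below are toy values:

```python
# Compare observed peak/CFS overlaps against overlaps of randomly relocated
# peaks of the same sizes; the one-sided p-value tests for enrichment.
import numpy as np

rng = np.random.default_rng(0)
CHROM_LEN = 100_000_000
cfs = [(10e6, 14e6), (40e6, 45e6), (70e6, 72e6)]   # toy CFS band intervals

def n_overlapping(peaks):
    return sum(any(s < e2 and e > s2 for s2, e2 in cfs) for s, e in peaks)

def shuffled(peaks):
    out = []
    for s, e in peaks:
        start = rng.integers(0, CHROM_LEN - (e - s))
        out.append((start, start + (e - s)))
    return out

peaks = [(p, p + 5_000) for p in rng.integers(0, CHROM_LEN - 5_000, 200)]
obs = n_overlapping(peaks)
null = np.array([n_overlapping(shuffled(peaks)) for _ in range(1000)])
p_val = (np.sum(null >= obs) + 1) / (len(null) + 1)
print(f"observed = {obs}, null mean = {null.mean():.1f}, p = {p_val:.3f}")
```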
Sequence features in γH2AX binding regions
It is thought that repetitive sequences are intrinsic barriers to the replication machinery and that replication forks are prone to stall at repetitive regions [36]. Thus, we analyzed ChIP-seq peaks in the context of repetitive genomic elements using the RepeatMasker data set [37].
In addition to areas of low complexity (defined as > 100 nt stretches of > 87% AT or > 89% GC, and > 30 nt stretches with > 29 nt poly(N)_n runs, where N denotes any nucleotide, and those containing short tandem repeats [37]), we also looked at γH2AX accumulation in the context of common transposable elements: SINEs (short interspersed nuclear elements), LINEs (long interspersed nuclear elements), LTRs, and DNA transposons (Fig. 3). SINEs are 80-500 bp non-autonomous elements in the genome, with 3′ ends often composed of simple repeats like poly-dA, poly-dT, or tandem arrays of 2-3 bp units [38]. A recent study identified that poly(dA:dT) tracts are natural replication barriers and a common cause of DNA breakage in HU-treated mouse B-lymphocytes [21], and SINEs are significantly enriched in early replicating fragile sites identified in HU-treated mouse B-lymphocytes [7]. Another study shows that repetitive DNA sequences that give rise to non-B-form structures impede DNA replication [20]. The enrichment of SINEs but not simple repeats in γH2AX binding indicates that, in addition to the 3′ poly(dA:dT), abundant transposable elements in SINEs may contain features prone to non-B-form structure formation that make SINEs particularly susceptible to fork stalling. Compared to the untreated sample, SINEs, LINEs, simple repeats, and DNA transposons were enriched in γH2AX binding sites under HU treatment, while LTRs and simple repeats were reduced under MMS treatment (Fig. 3). Binding patterns in the APH-treated sample did not significantly differ from untreated cells for any repetitive element (Additional file 1: Figure S7). Future studies using a high-resolution sequencing method will be helpful to pinpoint sequence composition and features under different replication stress inducers.
Epigenetic features in γH2AX binding regions
Poor replication initiation has been proposed to cause instabilities [35]. Given that replication timing and initiation can be epigenetically controlled rather than directed by specific sequence motifs [12,39], we examined common epigenetic marks, including H3K9Ac, H3K4me3, H3K27me3, and H3K9me3, that modulate chromatin structure at γH2AX binding sites. H3K9Ac and H3K4me3 are euchromatic marks and are tightly associated with active transcription and histone deposition, while H3K27me3 and H3K9me3 are found mainly at inactive gene promoters and are associated with compact chromatin [40]. After aligning γH2AX ChIP-seq peaks with histone modification ChIP-seq datasets from human B-lymphoblastoids [GSM733677 (H3K9ac), GSM733708 (H3K4me3), GSM945196 (H3K27me3), GSM733664 (H3K9me3)], we found depletion of γH2AX at H3K9Ac and H3K4me3 marks, and enrichment in all samples at H3K27me3 and H3K9me3 marks (Fig. 4), suggesting that γH2AX sites induced by the three stressors coincide with more compact chromatin regions.
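The alignment of peaks with histone-mark tracks (done here with deepTools) amounts to averaging mark signal in windows around peak centres; a self-contained sketch with toy arrays:

```python
# Average a per-bp coverage track in +/- 2 kb windows around peak centres,
# then compare the window centre to the overall mean (enrichment > 1,
# depletion < 1). Coverage and peak positions are toy placeholders, so the
# ratio here will be ~1 by construction.
import numpy as np

rng = np.random.default_rng(0)
coverage = rng.poisson(5, 1_000_000).astype(float)   # histone-mark signal
peak_centres = rng.integers(5_000, 995_000, 300)     # gH2AX peak centres
FLANK = 2_000

windows = np.stack([coverage[c - FLANK: c + FLANK] for c in peak_centres])
profile = windows.mean(axis=0)                       # mean signal per offset

centre_over_mean = profile[FLANK - 250: FLANK + 250].mean() / profile.mean()
print(f"centre/background signal ratio = {centre_over_mean:.2f}")
```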
Depletion of CGIs and TSSs in γH2AX binding regions
CGIs are DNA elements with high CpG content. Roughly 50% of these regions are associated with gene expression regulation and can be located at or near TSSs [41-43]. Early studies have shown a strong association between replication initiation and CGIs in mammalian genomes, with half of origins residing within or near CGIs [44,45]. Replication origin activity is also significantly enriched at and around TSSs [46,47]. Thus, we next examined the relationship between γH2AX binding and CGIs and TSSs in our samples. Using permutation analysis, we searched for enriched or depleted binding at CGIs genome-wide and found that γH2AX did not associate with CGIs; rather, these regions were noticeably unbound (Fig. 5a). Similarly, we found consistent local depletion of γH2AX at TSSs (Fig. 5b), while no depletion or enrichment at transcription termination sites (TTSs) or gene bodies was observed (Fig. 5c and Additional file 1: Figure S8). Together with the enrichment of γH2AX binding at more compact chromatin regions (Fig. 4), our data suggest that γH2AX tends to bind to transcriptionally inactive regions upon fork stalling.

[Fig. 3: deviation from zero indicates enrichment (positive values) or depletion (negative values); γH2AX binding to SINEs is significantly higher than expected at random, while binding to simple repeats and low-complexity repeats is lower than expected; * indicates p < 0.001 (permutation analysis).]
Discussion
While γH2AX binding to DSBs has been mapped and profiled at high resolution [48], systematic characterization and comparison of γH2AX chromatin binding in response to RS is lacking. This is further complicated by the fact that fork stalling can be induced by a diverse set of mechanisms, and FS instability also displays cell- and tissue-type specificity. In this study, we generated a large set of γH2AX binding data from a single human cell line treated with three genotoxins that stall replication by distinct mechanisms. This study design allows us to directly compare γH2AX binding under different RS conditions, revealing a number of notable features of γH2AX binding in response to fork stalling. We find that only a small portion of γH2AX binding sites resulting from MMS (9.3%) and APH (6.4%) treatment overlap, suggesting that the two different fork stalling mechanisms produce RS-sensitive damage hotspots at discrete locations. This is not completely unexpected, since these two chemicals induce RS by distinct mechanisms. APH inhibits DNA polymerase α and slows DNA polymerization during replication, generating stretches of single-stranded DNA at stalled forks [14,16,49]. Thus, APH is expected to cause forks to stall or collapse at vulnerable regions containing natural barriers for DNA polymerases. These regions likely require additional effort to avoid the pausing or dissociation of polymerases. Consistently, several studies have shown that specialized DNA polymerases, including Pol η, Pol ζ, and Pol κ, facilitate DNA synthesis and promote the stability of APH-inducible FSs [50-53]. In contrast, RS induced by the DNA methylating agent MMS is more complex. Although MMS is capable of reacting with a number of nucleophilic sites on DNA, including ring nitrogens and exocyclic oxygens on purines and pyrimidines, the reactivity towards electrophiles varies substantially with the position of the nucleotide, whether the nucleotide is in the major or minor groove, and whether the DNA is single or double stranded [54]. Consequently, it is difficult to pinpoint where the methylation adducts are formed. HU reduces or depletes the overall cellular nucleotide pool, and is therefore expected to stall all DNA synthesis and impact replication more globally. In agreement with this view, we find that HU induces several times more damage sites than the other treatments. HU induces γH2AX binding hotspots at regions overlapping with the APH or MMS treated samples, but this overlap only accounts for a small portion of the data set due to the large peak numbers.
We observed that SINEs are enriched in γH2AX binding sites induced by all three treatments (Fig. 3), suggesting that SINEs may contain features that easily stall DNA polymerases. One such feature may be the poly(dA:dT) tracts at the 3′ end of SINEs, which have been implicated as natural replication barriers and a common cause of DNA breakage in murine lymphocytes [21]. Accumulating evidence indicates that SINEs regulate gene expression, affect chromatin structure, and are involved in genome rearrangement [55,56], and therefore they have been implicated in many diseases including cancer [57]. It will be interesting to investigate the potential role of RS-induced SINE instability in disease development.
Despite the different localizations of γH2AX binding, we find that they share a few obvious common features. First, all three conditions induce γH2AX binding at regions whose median transcript length is longer than the median human transcript size (Fig. 2), indicating that regions with large transcripts are prone to break under RS. It has been shown that transcription of large genes often takes more than one complete cell cycle to finish. Collisions of the transcription machinery with a replication fork and the formation of R-loops impede fork movement, causing FS instability [10]. Thus, our results reinforce transcription/replication collision as a crucial theme causing RS regardless of the RS mechanism.
In addition to increased binding at long genes, we also find that APH-, HU-, and MMS-induced γH2AX binding shows depletion of H3K9Ac and H3K4me3 marks, while being slightly enriched for H3K27me3 (Fig. 4), suggesting that chromatin within FSs may be more compact than non-fragile regions. It has been postulated that epigenetic features regulate replication density and timing, with compact chromatin regions being poorly represented at replication initiation regions [12,39]. In support of this, a previous report showed that the six most break-prone human CFSs display an epigenetic pattern of histone hypoacetylation [11]. The same study also examined the H3K9Ac acetylation pattern of large genes and found that the acetylation coverage of large genes is substantially lower than that of the human genome on average. Our results therefore extend this finding to genome-wide FSs and support the idea that compact chromatin may be a common epigenetic feature contributing to FS instability. Previous research suggests that unprogrammed formation of R-loops impairs fork progression, causing fork stalling that contributes to DSB formation [58,59]. A recent study has reported widespread R-loop formation at unmethylated CGI promoters in the human genome [60]. Therefore, our observation that γH2AX peaks flank but are not located at CGIs and TSSs is somewhat surprising (Fig. 5). In order to explain this observation, it is worth revisiting studies mapping γH2AX distribution after DSB induction. DSBs trigger H2AX phosphorylation over large domains (0.5 to 2 Mb) surrounding the DSB [48]. Anti-correlation between RNA Pol II occupancy and γH2AX enrichment has been observed in both S. cerevisiae and the human U2OS cell line [48,61], suggesting that TSSs and promoter regions may be particularly resistant to either the establishment or maintenance of H2AX phosphorylation. In addition, γH2AX enrichment at transcriptionally repressed genes seems to be dependent on HDACs [61]. Thus, it is highly likely that specialized chromatin structures at TSSs and CGIs prevent γH2AX accumulation despite R-loop formation. It will be interesting to determine the role of γH2AX depletion and specialized chromatin in stabilizing stalled forks at TSSs.
In conclusion, our study demonstrates that different types of replication stress produce γH2AX binding at non-overlapping loci. By characterizing the sequence and epigenetic features of these loci, our analysis provides a global view of the characteristics of genomic regions sensitive to various replication stress conditions. It is conceivable that cells may use different molecular mechanisms, involving different protein molecules and repair pathways, to rescue forks stalled at different types of fragile sequences. Since chromosome rearrangements found in cancer cells often result from genome instability caused by RS, deciphering the molecular mechanisms protecting genome stability under RS represents an important issue in the field.
Cell culture
The human B-lymphocyte cell line GM07027 was obtained from the Coriell Institute. 174xCEM was obtained from the American Type Culture Collection (ATCC). GM07027 and 174xCEM lymphocyte cells were cultured in suspension and passaged in RPMI1640 medium (Life Technologies) supplemented with 2 mM L-glutamine and 15% fetal bovine serum (Atlanta Biologicals) at 37°C under 5% CO2. HeLa and HEK293T cells (ATCC) were cultured in DMEM medium supplemented with 10% cosmic calf serum (ThermoFisher) at 37°C under 5% CO2. No antibiotics were used, to avoid possible antibiotic-induced stress.
Flow cytometry
All flow cytometry analysis was performed with a Beckman Coulter Gallios flow cytometer. Samples were filtered through 40 μm nylon mesh prior to flow cytometry. All analysis was performed using Kaluza analysis software (Beckman Coulter).
BrdU incorporation
Cells were treated with 2 mM HU, 0.3 μM APH or 200 μM MMS for 24 h, and then incubated with 10 μM BrdU at 37°C under 5% CO2 for 30 min. Cells were collected by centrifugation, washed with cold PBS once, resuspended and fixed with 70% ethanol at 4°C overnight. Cells were then washed with PBS, resuspended in 2 N HCl supplemented with 0.5% Triton X-100, and incubated for 30 min at room temperature. Cells were centrifuged to remove the HCl and then neutralized with 0.1 M sodium borate buffer, pH 8.5. Cells were washed with cold PBS, resuspended in BrdU staining buffer (PBS, 0.5% Tween-20, 1% BSA), incubated with anti-BrdU antibody for 1 h at room temperature, washed with cold PBS, and incubated with the secondary antibody goat anti-mouse Alexa Fluor 488 (ThermoFisher Scientific, A11029) for 1 h at room temperature. After centrifugation, cell pellets were washed with PBS and stained with propidium iodide (PI) solution (PBS, 0.1% NP-40, 2% fetal bovine serum, 50 μg/ml PI and 50 μg/ml RNase A) for 30 min in the dark.
Cell cycle analysis
Cell cycle progression was analyzed by PI staining. Briefly, cells were treated with genotoxic drugs for 24 h, collected by centrifugation, washed with cold PBS once, resuspended, and fixed with 70% ethanol at 4°C for 1 h. Cells were then washed with PBS once, resuspended in PI solution, and incubated at 37°C for 1 h.
Apoptosis assay
Apoptosis was quantified by annexin V staining (BioLegend, #640905) following the manufacturer's instructions. Briefly, after cells were treated with 2 mM HU, 0.3 μM APH or 200 μM MMS for 24 h, they were collected by centrifugation, washed with cold PBS, resuspended in 100 μl binding buffer (10 mM HEPES pH 7.4, 140 mM NaCl and 2.5 mM CaCl2) supplemented with 5 μl annexin V-FITC solution, and incubated for 20 min at room temperature in the dark. Binding buffer (BioLegend, 400 μl) was then added to each sample, and samples were subjected to FACS.
NGS library preparation and sequencing
Libraries were prepared according to Illumina's TruSeq ChIP Sample Preparation Guide (Part # 15023092 Rev. B). Briefly, ChIP DNA was end-repaired using a combination of T4 DNA polymerase, E. coli DNA Pol I large fragment (Klenow polymerase) and T4 polynucleotide kinase. The blunt, phosphorylated ends were treated with Klenow fragment (3′ to 5′ exo minus) and dATP to yield a protruding 3′ 'A' base for ligation of Illumina's adapters, which have a single 'T' base overhang at the 3′ end. After adapter ligation, DNA fragments with sizes of 250-300 bp were selected on 2% agarose gels and PCR amplified with Illumina primers for 18 cycles. The libraries were captured on an Illumina flow cell for cluster generation and sequenced on a HiSeq 2500 (Illumina) with paired-end 100 bp read length following the manufacturer's protocols. For each genotoxin, two independent treatments were performed, followed by independent ChIP experiments. This resulted in a total of eight ChIP samples (untreated, APH, HU, MMS) that were sequenced simultaneously.
ChIP-seq reads processing and sequence analysis
Prior to sequence analysis, adaptor sequences in reads were trimmed. Paired-end reads in fastq format were aligned to the GRCh38 reference genome using Bowtie2 default settings [62]. Reads were checked for quality using Samtools [63], and reads below q40 were removed. PCR duplicates were also removed. Following alignment, broad peaks were called using the MACS2 peak-calling program [31] (with settings --broad --nomodel --broad-cutoff 10e-3 -p) to give the final peak list per replicate. Shift size was determined using gel quantification from library quality controls. Shift sizes were determined to be: APH-treated replicate 1: 251; APH-treated replicate 2: 257; HU-treated replicate 1: 248; HU-treated replicate 2: 243; MMS-treated replicate 1: 222; MMS-treated replicate 2: 241; untreated replicate 1: 214; untreated replicate 2: 229. Blacklisted regions were removed from the analysis [64]. Reproducibility between replicates was assessed using Spearman rank correlation of tags per 1000 bp bin. All ChIP-seq data are available at the Gene Expression Omnibus under accession no. GSE113020.
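As a hedged reconstruction (file names are placeholders, and the exact flag set is only partially recoverable from the text), the peak-calling step can be driven from Python like so:

```python
# Call broad gH2AX peaks with MACS2 against the pre-ChIP input control.
import subprocess

cmd = [
    "macs2", "callpeak",
    "-t", "chip_APH_rep1.bam",        # gH2AX ChIP sample (hypothetical name)
    "-c", "input_APH_rep1.bam",       # pre-ChIP input control
    "-f", "BAMPE",                    # paired-end reads
    "-g", "hs",                       # human effective genome size
    "--broad",                        # broad domains, as expected for gH2AX
    "--broad-cutoff", "1e-3",
    "--nomodel", "--extsize", "251",  # shift/fragment size from library QC
    "-n", "APH_rep1",
]
subprocess.run(cmd, check=True)
```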
Enrichment or depletion of γH2AX ChIP-seq peaks in repetitive elements, CGIs, and CFSs was assessed using 1000-iteration permutation analysis with the regioneR Bioconductor package [65]. Repetitive elements were defined by RepeatMasker [37], which uses RepBase Update, a database of repetitive sequences from multiple species, to define repetitive sequences [66]. This database contains transposable elements (SINEs, LINEs, DNA transposons, and LTRs) and non-mobile DNA repeat elements, which include the canonical TTAGGG telomere sequence (simple repeats/microsatellites), regions of low complexity such as the known fragile poly-T motif, and RNA sources found throughout the genome. Positions and categories of repetitive elements were obtained from the RepeatMasker data set [37]. Positions of CGIs were obtained from the CGI track in the UCSC Genome Browser [42]. CFSs in human lymphocytes [15,18,67] were sorted using the G-band positions from the UCSC chromosome band track [68,69]. The NCBI RefSeq dataset was used for gene length, TSS, and TTS analyses [70]. Gene length was analyzed using a Kruskal-Wallis test and post hoc paired Wilcoxon signed-rank tests with a Holm-Bonferroni correction for family-wise error. Graphs for gene length were generated using the ggplot2 R package [71]. Graphs for ChIP-seq data relationships to TSSs and histone marks were generated using deepTools2 [72]. Histone mark data were taken from GSM733677 (H3K9ac), GSM733708 (H3K4me3), GSM945196 (H3K27me3), GSM733664 (H3K9me3) [64,73]. Sample data were realigned to hg19 using identical Bowtie2 settings prior to comparison with histone marks.

| 2019-06-07T21:33:38.379Z | 2019-05-20T00:00:00.000 | {
"year": 2019,
"sha1": "231d346f96bcad77d262029b3920e720b86f2500",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-019-5934-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "231d346f96bcad77d262029b3920e720b86f2500",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
11210051 | pes2o/s2orc | v3-fos-license | Warm Molecular Layers in Protoplanetary Disks
We have investigated molecular distributions in protoplanetary disks, adopting a disk model with a temperature gradient in the vertical direction. The model produces sufficiently high abundances of gaseous CO and HCO+ to account for line observations of T Tauri stars using a sticking probability of unity and without assuming any non-thermal desorption. In regions of radius R > 10 AU, with which we are concerned, the temperature increases with increasing height from the midplane. In a warm intermediate layer, there are significant amounts of gaseous molecules owing to thermal desorption and efficient shielding of ultraviolet radiation by the flared disk. The column densities of HCN, CN, CS, H2CO, HNC and HCO+ obtained from our model are in good agreement with the observations of DM Tau, but are smaller than those of LkCa15. Molecular line profiles from our disk models are calculated using a 2-dimensional non-local-thermal-equilibrium (NLTE) molecular-line radiative transfer code for a direct comparison with observations. Deuterated species are included in our chemical model. The molecular D/H ratios in the model are in reasonable agreement with those observed in protoplanetary disks.
Introduction
It is well established from millimeter and infrared observations that the birth of solar-mass stars is accompanied by the formation of a circumstellar disk (Beckwith & Sargent 1996; Natta et al. 2000). These disks are important both as reservoirs of material to be accreted onto growing stars and as sites of planetary formation. Because the gas and dust in the disk are the basic components from which future solar systems are built, studies of their chemistry are essential to investigate the link between interstellar and planetary matter. Moreover, the chemical abundances and molecular excitation depend on physical parameters in the disks, such as temperature and density, and on processes such as radial and vertical mixing. Thus, studies of the chemistry in protoplanetary disks can help to constrain their physical structure. Although CO millimeter lines are routinely used to trace the gas and Keplerian velocity field in disks around classical T Tauri stars (e.g., Kawabe et al. 1993; Koerner et al. 1993; Dutrey et al. 1994; Koerner & Sargent 1995; Saito et al. 1995; Guilloteau & Dutrey 1998), detections of other molecules are still rare. Dutrey et al. (1997) and Kastner et al. (1997) were the first to report observations of molecules such as HCO+, HCN, CN, HNC, H2CO, and C2H in the disks around GG Tau, DM Tau and TW Hya. Dutrey et al. found that the abundances of these species relative to hydrogen in the DM Tau and GG Tau disks are lower than those in molecular clouds by factors of 5-100 (Dutrey et al. 1994; Dutrey et al. 1997; Guilloteau et al. 1999). The abundance ratio of CN/HCN, on the other hand, is significantly higher than in molecular clouds in all three disks. More recently, Qi (2000), van Zadelhoff et al. (2001) and Thi et al. (in preparation) have reported observations of molecules other than CO in the disks around the T Tauri stars LkCa15 and TW Hya, and in those around the Herbig Ae stars MWC 480 and HD 163296, confirming the trends of a high CN/HCN ratio and molecular depletion; the disk masses derived from CO (and its isotopes) are significantly smaller than those estimated from the dust continuum assuming a gas-to-dust ratio of 100.
Early models of the chemistry in disks considered mostly the one-dimensional radial structure in the cold midplane (e.g. Aikawa et al. 1997; Willacy et al. 1998). Subsequently, it has been recognized that the vertical stratification of the molecules is equally relevant. Aikawa & Herbst (1999a) investigated the two-dimensional chemical structure within the so-called Kyoto disk model, representative of the minimum-mass solar nebula. This model has low midplane temperatures and is isothermal in the vertical direction. The molecular abundances were found to vary significantly with height Z from the midplane. At large radii (R > 100 AU), the temperature is so low that most species, except H2 and He, are frozen out onto the grains. This depletion is most effective in the midplane region (Z ≈ 0) because of the higher density and hence shorter timescales for molecules to collide with and stick to the grains. In regions above and below the midplane, significant amounts of molecules can remain in the gas phase for longer periods because of the lower densities, and because of non-thermal desorption by cosmic rays and/or radiation (e.g., X-rays) from the interstellar radiation field and the central star. Aikawa & Herbst (1999a) suggested that the observed molecular line emission comes mostly from this region. There is a height distinction between stable molecules and radicals, however. In the surface region of a disk, radicals such as CN are very abundant because of photodissociation via ultraviolet radiation, whereas the abundances of stable molecules such as HCN peak closer to the midplane. The molecular column densities obtained by integrating over height at each radius compare well with those derived from observations of DM Tau (Dutrey et al. 1997), although a detailed comparison through radiative transfer (i.e. calculation of molecular line intensities from model disks) is still lacking. The freeze-out of molecules in the midplane explains the low average abundances of heavy-element-containing species relative to hydrogen, whereas the high abundance ratio of CN to HCN is caused by photodissociation in the surface layers.
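The column densities referred to above come from integrating the gas density times the fractional abundance over height; a schematic numerical version (the density and abundance profiles are illustrative stand-ins, not the model's):

```python
# N(R) = 2 * integral_0^inf n(H2, Z) * x_mol(Z) dZ (factor 2: both sides
# of the midplane); toy Gaussian density with a warm molecular layer.
import numpy as np

AU = 1.496e13                                   # cm
Z = np.linspace(0.0, 100.0, 400) * AU           # height grid
n_H2 = 1.0e7 * np.exp(-(Z / (20.0 * AU))**2)    # gas density [cm^-3]
x_CO = np.where((Z > 10 * AU) & (Z < 60 * AU), 1e-4, 1e-8)  # step abundance

N_CO = 2.0 * np.trapz(n_H2 * x_CO, Z)           # column density [cm^-2]
print(f"N(CO) ~ {N_CO:.2e} cm^-2")
```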
To obtain this good agreement with observations, Aikawa & Herbst (1999a) were forced to use the simplifying assumption that the probability S for sticking upon collision of a molecule with a grain is significantly smaller than unity. If the sticking probability is unity and if only thermal desorption is considered, the molecular column densities obtained in the Kyoto model are much smaller than observed, which suggests that either there is some efficient non-thermal desorption mechanism, or that the disk temperature is higher than assumed in the Kyoto model. Aikawa & Herbst (1999a) adopted an artificially low sticking probability S = 0.03 in order to reproduce the observed CO spectra, without specifying the non-thermal desorption process or modifying the temperature distribution in the Kyoto model. Since adsorption is such a dominant process in the disk, the cause of the apparently low sticking probability should be considered more seriously.
In order to explain the strong mid-infrared emission from disks, several groups suggested the possibility of higher dust temperatures than assumed in the Kyoto model, due to efficient reception of stellar radiation by flared disks (e.g., Kenyon & Hartmann 1987; Chiang & Goldreich 1997; D'Alessio et al. 1998). In the two-layer Chiang & Goldreich (1997) model (C-G model hereafter), the upper layer (the so-called "super-heated" layer) is directly heated by stellar radiation from the central star to temperatures T ≳ 50 K at radii of ∼100 AU. Also, recent observations of high-frequency lines (e.g. CO J = 6-5) support the possibility that the disk temperature is higher than assumed in the Kyoto model (van Zadelhoff et al. 2001). Willacy & Langer (2000) investigated the molecular distributions in the C-G model to see if the super-heated layer can maintain enough gaseous organic molecules to account for the observations. It was found, however, that molecules in this layer are destroyed by the harsh ultraviolet radiation from the star, whereas they are frozen out onto the grains in the cold lower layer. These authors therefore had to adopt a very high photodesorption rate in the lower layer to keep the molecules off the grains.
In this paper we report an investigation of molecular distributions in another disk model with a vertical temperature gradient: the model of D'Alessio et al. (1998, 1999). These scientists obtained the temperature and density distribution in steady accretion disks around T Tauri stars by solving the equations for local 1-D energy transfer (including radiation, convection and turbulence) and hydrostatic equilibrium in the vertical direction. Whereas in the C-G model the disk is divided into two discrete layers, super-heated and interior, the model of D'Alessio et al. gives continuous distributions of temperature and density. The differences in the temperature and density distributions between these two models have a significant effect on the gaseous molecular abundances in the disk. Since a vertical distribution of density in the super-heated layer is not given explicitly in the C-G model, Willacy & Langer (2000) assumed a Gaussian density distribution with a scale height determined by the mid-plane temperature. In the model of D'Alessio et al., the gas is more extended than in a Gaussian distribution owing to the high temperature in the surface region. The higher densities (and thus the higher column densities) at large Z shield the lower layers from stellar ultraviolet radiation. In addition, the temperature variation in the vertical direction is more gradual in the model of D'Alessio et al. than the step function assumed in the C-G model. Therefore, compared with the C-G model, the model of D'Alessio et al. contains more gas in a warm and shielded layer, in which high abundances of gaseous molecules are expected.
The rest of the paper is organized as follows. In §2 we describe the adopted model for protoplanetary disks and the chemical reaction network. Numerical results on the distributions of molecular abundances and column densities are discussed in §3. In §4, molecular column densities and line intensities in the D'Alessio et al. model are compared with observations. Our conclusions and a discussion are given in §5.
Model
The disk model by D'Alessio et al. (1998, 1999) has been adopted in our work. In this model, the two-dimensional structure (R, Z) of the disk is obtained by solving for hydrostatic equilibrium and energy transport in the Z-direction. Various heating sources are considered, such as viscous dissipation of accretion energy, cosmic rays, and stellar radiation. Among them, stellar radiation is dominant at R > 2 AU, assuming typical parameters of T Tauri stars: T* = 4000 K, M* = 0.5 M ⊙ , and R* = 2 R ⊙ . We adopt a disk with an accretion rate Ṁ = 10 −8 M ⊙ yr −1 and viscosity parameter α = 0.01 as the fiducial, or standard, model, but also consider cases with the same α and differing accretion rates Ṁ = 10 −7 M ⊙ yr −1 and Ṁ = 10 −9 M ⊙ yr −1 (D'Alessio et al. 1999). The distributions of density and temperature in these three models are shown in Fig. 1. The surface density in the fiducial model is similar to that in the minimum-mass disk, which was adopted by Aikawa & Herbst (1999a), and is approximately an order of magnitude larger (smaller) in the model with the larger (smaller) Ṁ. In the fiducial disk, the masses inside 100 AU and 373 AU are about 0.017 M ⊙ and 0.06 M ⊙ , respectively.
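As a rough consistency check on the quoted masses, one can integrate a power-law surface density out to these radii. The Σ ∝ R^−3/2 profile and its 1700 g cm^−2 normalization below are illustrative minimum-mass-nebula assumptions, not parameters taken from the D'Alessio et al. model:

```python
import numpy as np

# Illustrative check: integrate Sigma(R) = Sigma_0 * (R / 1 AU)^(-3/2).
# Sigma_0 = 1700 g cm^-2 is a common minimum-mass-nebula value
# (an assumption here, not a quantity from the D'Alessio model).
AU = 1.496e13          # cm
M_SUN = 1.989e33       # g
SIGMA_0 = 1700.0       # g cm^-2 at 1 AU

def disk_mass(r_out_au, sigma_0=SIGMA_0, p=1.5):
    """Mass (in M_sun) inside r_out_au for Sigma = sigma_0 * (R/AU)^-p."""
    r = np.linspace(0.1, r_out_au, 10000) * AU
    sigma = sigma_0 * (r / AU) ** (-p)
    return np.trapz(2.0 * np.pi * r * sigma, r) / M_SUN

print(disk_mass(100.0))   # ~0.023 M_sun, same order as the quoted 0.017
print(disk_mass(373.0))   # ~0.046 M_sun, close to the quoted 0.06
```

The check lands within a factor of ∼2 of the quoted values, consistent with the statement that the fiducial surface density is close to that of the minimum-mass disk.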
We have selected a few radial points from the model of D'Alessio et al. (viz. R = 26, 49, 105, 198, 290, and 373 AU), divided the disk at each radius into several (30-40) layers, or slabs, depending on height Z, and calculated the molecular abundances in each layer as functions of time (Aikawa & Herbst 1999a).
[Displaced caption, apparently belonging to Fig. 1: the lowest contour corresponds to a gas density of n(H 2 ) = 1.4 10 4 cm −3 with T = 50 K. From left to right the mass accretion rate is 10 −7 , 10 −8 , and 10 −9 M ⊙ yr −1 , respectively. In the top row, the grey scale shows the region in which the CO abundance relative to hydrogen is 10 −6 − 10 −5 (dark grey) and 10 −5 − 10 −4 (light grey). In the bottom row, the fractional HCO + abundance is 10 −11 − 10 −10 (dark grey) and 10 −10 − 10 −9 (light grey).]
We have not included any
hydrodynamic motions in the disk, such as accretion or turbulence. The main goal of this paper is to investigate the effect of a vertical temperature gradient on molecular abundances and line intensities. Although the model of D'Alessio et al. is an accretion disk model, the temperature distribution is determined by the irradiation from the central star and radiation transfer in the vertical direction, while the contribution of the accretion energy as a heat source is negligible in the region with which we are concerned. In addition, we find that the chemical time scale is shorter than the accretion timescale, which is ∼ 10 6 yr, in a large fraction of the molecular layers (§3.2), so that the molecular distributions obtained in this paper should be a reasonable approximation of reality.
The chemical model and chemical reaction equations adopted in this paper are almost the same as those described in Aikawa & Herbst (2001). We use the "new standard model" for the gas-phase chemistry (Osamura et al. 1999), extended to include deuterium chemistry (Aikawa & Herbst 1999b). The ionization rate by cosmic rays is assumed to be the "standard" value in molecular clouds, ζ = 1.3 10 −17 s −1 , because the attenuation length for cosmic-ray ionization is much larger than the column densities in the outer regions of the disks, with which we are concerned (Umebayashi & Nakano 1981). Photoprocesses induced by ultraviolet radiation from the interstellar radiation field and from the central star are included. The ultraviolet flux from the central star varies with time and object, and can reach a value 10 4 times higher than the interstellar flux at R = 100 AU (Herbig & Goodrich 1986, Imhoff & Appenzeller 1987, Montmerle et al. 1993). This maximum value is adopted in this paper, as in Aikawa & Herbst (1999a). We assume that the ultraviolet radiation from the central star is not energetic enough to dissociate CO and H 2 . Self- and mutual shielding of H 2 and CO from interstellar UV is considered as in Aikawa & Herbst (1999a). Chemical processes induced by X-rays from the central star are not included in this paper.
Regarding gas-grain interactions, the surface formation of H 2 , the surface recombination of ions and electrons, and the accretion and thermal desorption of ice mantles are included, but no other grain-surface reactions are considered. The sticking probability for accretion is assumed to be 1.0, unless stated otherwise. We have not considered any modifications to the grain-surface rate equation for H 2 formation (cf. Caselli et al. 1998), because, given our initial conditions, the rate of molecular hydrogen formation is not important to the model. The total numbers of species and reactions included in our network are 773 and 10446, respectively. The adopted elemental abundances are the so-called "low-metal" values (e.g., Lee et al. 1998, Aikawa et al. 1999). The initial molecular abundances are obtained from a model of the precursor cloud with physical conditions n H = 2 10 4 cm −3 and T = 10 K at 3 10 5 yr, at which time observed abundances in pre-stellar cores such as TMC-1 are reasonably well reproduced.
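To see why adsorption dominates in the disk and why the sticking probability S matters so much, it helps to estimate the freeze-out timescale. The sketch below uses a standard grain cross-section per hydrogen nucleus of ∼2 × 10^−21 cm², an assumed order-of-magnitude value rather than the paper's actual rate coefficients:

```python
import numpy as np

K_B = 1.380649e-16     # erg/K
AMU = 1.6605e-24       # g

def freezeout_time_yr(n_h, t_gas, mass_amu, s_stick=1.0,
                      sigma_per_h=2.0e-21):
    """Adsorption (freeze-out) timescale in years.

    n_h         : density of H nuclei [cm^-3]
    t_gas       : gas temperature [K]
    mass_amu    : molecular mass [amu]
    s_stick     : sticking probability S
    sigma_per_h : grain geometric cross-section per H nucleus [cm^2],
                  ~2e-21 cm^2 for interstellar grains (assumed value)
    """
    v_th = np.sqrt(8.0 * K_B * t_gas / (np.pi * mass_amu * AMU))  # cm/s
    rate = s_stick * sigma_per_h * n_h * v_th                     # s^-1
    return 1.0 / rate / 3.156e7                                   # s -> yr

# CO (28 amu) at 20 K: freeze-out is fast wherever the density is high,
# which is why lowering S (or invoking desorption) changes the results.
for n in (1e4, 1e6, 1e8):
    print(f"n_H = {n:.0e}: t_freeze ~ {freezeout_time_yr(n, 20.0, 28.0):.0f} yr")
```

At midplane-like densities the timescale is far shorter than the disk age of ∼10^6 yr, so gaseous molecules survive only where desorption or low density intervenes.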
Vertical Distribution
Figure 2 contains assorted vertical distributions at the outermost radius R = 373 AU in our fiducial disk model. The solid lines in Fig. 2 (a − c) show the physical parameters: temperature, density, and A v . The attenuation of interstellar radiation (A v IS ) is obtained from a linear conversion of the hydrogen column density, where N H is the vertical column density of hydrogen nuclei from the disk surface to each point in the disk. The attenuation of stellar radiation (A v star ) is obtained via the same conversion, but with N H replaced by the column density from the central star. Fig. 2 (d − e) shows assorted molecular abundances at a disk age of t = 1.0 10 6 yr, which is typical for T Tauri stars. Significant amounts of molecules exist in the gas phase due to thermal desorption at Z ≳ 100 AU, while most molecules are adsorbed onto grains below this height, where T ≲ 20 K. As discussed by van Zadelhoff et al. (2001), this molecular layer covers the region in which the lines of the main isotopes of the observed species become optically thick and thus where most of the observed emission arises. The results for models with different Ṁ are similar except that the height of the molecular layer is shifted in accordance with the distribution of A v IS , which is the main determinant of the vertical temperature distribution.
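The explicit A v –N H relation did not survive extraction; with a standard interstellar normalization (an assumed coefficient, not necessarily the one used in the paper) the conversion reads:

```latex
A_V^{\mathrm{IS}} \simeq \frac{N_{\mathrm{H}}}{1.9\times10^{21}\ \mathrm{cm^{-2}}}\ \mathrm{mag},
\qquad
A_V^{\mathrm{star}} \simeq \frac{N_{\mathrm{H}}^{\mathrm{star}}}{1.9\times10^{21}\ \mathrm{cm^{-2}}}\ \mathrm{mag}.
```

On this scaling, the thresholds of A v ∼ 0.1–1 mag quoted below correspond to hydrogen columns of roughly 2 10 20 − 2 10 21 cm −2 .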
The vertical distribution of the physical parameters (dashed lines) and molecular abundances in the Kyoto model are also shown in Fig. 2 (a − c, and f ) for comparison. The mass of the central star in the latter model is 0.5 M ⊙ , as in the D'Alessio et al. model, so that the density distribution is modified from that adopted in Aikawa & Herbst (1999a). The sticking probability of neutral species onto grain surfaces is assumed to be 0.03, as in Aikawa & Herbst (1999a). It can be seen that the density distribution in the D'Alessio et al. model is more extended than the Gaussian profile assumed in the Kyoto model (as well as in Willacy & Langer 2000), causing more efficient ultraviolet shielding of the warm layers just below the surface. Although we have not calculated the molecular distributions in the C-G model, the width of its warm molecular layer at R = 373 AU can be estimated. In the C-G model, the Gaussian density distribution is similar to that in the Kyoto model, but the disk surface with A v star ≲ 1 mag is "super-heated" by stellar radiation. At R = 373 AU, the boundary between the super-heated layer and the interior region is located at Z ∼ 280 AU, a value estimated from the Z−A v star relation in the Kyoto model (Fig. 2 c). Since the temperature in the interior region is lower than 20 K at this radius in the C-G model, warm gaseous molecules exist only at heights larger than 280 AU. The upper boundary of the molecular layer, which is determined by photoprocesses, is estimated to be Z ∼ 350 AU, again from the Kyoto model (Fig. 2 f ). Therefore in the C-G model, the warm molecular layer, if any, is much narrower than in the D'Alessio et al. model. This estimate is consistent with the conclusion of Willacy & Langer (2000) that they need non-thermal desorption in the cold, more shielded layer to account for the observed molecular abundances within the C-G model.
Radial Distribution of Column Densities
Column densities are obtained by integrating the molecular abundances in the vertical direction. The column densities of assorted species as functions of disk radius are shown in Fig. 3 for three accretion disk models at t = 1.0 10 6 yr with different mass accretion rates. Stable neutrals such as H 2 CO and HCN show little dependence on radius, because these molecules are abundant only in regions with certain physical conditions and the mass contained in the layer with these physical conditions does not vary much with radius. For example, the HCN abundance is high (n(HCN)/n H ∼ 10 −10 − 10 −9 ) only when n H ≲ 3 10 7 cm −3 , T ≳ 20 K, and A v IS ≳ 0.2 mag (see Fig. 2). The critical temperature of ∼ 20 K is not the sublimation temperature of HCN, but that of CO, which is the dominant form of carbon in the gas phase. For HCN, attenuation of interstellar radiation is more important than that of stellar radiation because the interstellar radiation penetrates deeper into the disk due to the effect of geometry. Radicals such as CN and C 2 H increase in column density with radius because of the lower density and lower flux of the destructive stellar UV in the outer regions. Radical column densities are more sensitive than HCN to stellar UV, because their abundances peak at greater heights. Carbon monoxide is abundant (n(CO)/n H ∼ 10 −4 ) in regions with T ≳ 20 K and A v IS ≳ 0.1 mag. The column densities of CO and HCO + change abruptly at R ∼ 100 AU, inside of which the temperature in the midplane is higher than 20 K. These characteristics of the radial distribution are similar to those in the Kyoto model (Aikawa et al. 1996, Aikawa & Herbst 1999a). The amount of gas existing under physical conditions conducive to large abundances of molecules does not vary significantly among the three disk models with different accretion rates (and thus with different disk mass), either. Therefore, most (but not all) molecular column densities vary only by a very small factor among the three disk models, even though the total (H 2 ) column density varies by two orders of magnitude. In the model with a mass accretion rate of 1.0 10 −9 M ⊙ yr −1 , the region with density n H ∼ 10 5 − 10 6 cm −3 , at which CN is abundant, is more shielded from stellar UV (Fig. 1), and thus the CN column density is higher than in the other two models.
Molecular column densities in our fiducial disk model at an earlier time of t = 1.0 10 5 yr are shown with thick dot-dashed lines in Fig. 3. The variation in column density for most molecules during 10 5 − 10 6 yr is less than a factor of 2; two exceptions are CS and H 2 O. In regions with A v IS smaller than a few mag, which cover a large fraction of the molecular layer, the chemical timescales are short (≲ 10 5 yr) because of photoprocesses. In the more shielded portion of the molecular layer at smaller Z, S-bearing gaseous molecules decrease in abundance after 10 5 yr, since most sulfur is adsorbed onto grains in the form of CS, SO and OCS. Similarly, H 2 O gas decreases at ∼ 10 6 yr, because most oxygen that is not in CO is adsorbed as H 2 O ice. On the other hand, abundances of other C-bearing molecules reach pseudo-steady-state values on a relatively short timescale (≲ 10 4 yr), and do not show significant time variation during 10 5 − 10 6 yr in the more shielded region, because CO gas, their chemical precursor, remains the dominant component of carbon for more than 10 6 yr. The pseudo-steady-state abundances of the non-volatile carbon-containing species are set, for a considerable period, by a balance between formation reactions starting from CO and depletion onto the dust particles.
Values of the sticking probability S are estimated to lie in the range 0.1 ≲ S ≲ 1.0 (Williams 1993, and references therein). In order to check the dependence of the molecular column densities on S, we have performed calculations with S = 0.1 in addition to our fiducial value of unity. Column densities of radicals such as CN and C 2 H do not depend on S, because they are abundant in the surface layer, in which adsorption is not the dominant process. Among the more stable species, some show significant dependence on S; the column densities of CO 2 and OCS are larger by an order of magnitude in the model with the lower sticking probability at R = 373 AU and t = 1 10 6 yr. But the effect of S is smaller for many other species; at R = 373 AU and t = 1 10 6 yr, for example, the column densities of CS, HCN, and H 2 CO are larger only by factors of 3.2, 1.7, and 1.3, respectively, in the case of lower S. There are two reasons for this small dependence on S. First, adsorption is not always the dominant process in the molecular layer, depending on the species and height from the midplane. Second, the higher abundance of gaseous O 2 in the case of lower S reduces the abundance of atomic carbon, and thus reduces the formation rate of organic molecules, which counteracts the lower adsorption rate.
[Table 1 notes: b Derived from interferometer data by Qi (2000); the values in this column do not necessarily refer to 373 AU (see text). c Derived from single-dish data by Thi et al. (in preparation).]
Deuterium Fractionation
The molecular D/H ratio decreases as the temperature rises, because the energy differences between deuterated and normal species are less significant at higher temperatures. Thus, the D/H ratio decreases inwards (Fig. 4), because the temperature in the gaseous molecular layer is higher at inner radii. Because the temperature in the molecular layer does not depend much on the disk mass (see Fig. 1), the differences between models with different Ṁ are small.
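The temperature dependence invoked here stems from exothermic isotope-exchange reactions; schematically, for the dominant low-temperature channel (the ≈ 230 K exothermicity is a standard literature value, quoted here for orientation rather than taken from this paper):

```latex
\mathrm{H_3^+} + \mathrm{HD} \;\rightleftharpoons\; \mathrm{H_2D^+} + \mathrm{H_2} + \Delta E,
\qquad \Delta E / k \approx 230\ \mathrm{K},
\qquad \frac{[\mathrm{H_2D^+}]}{[\mathrm{H_3^+}]} \propto e^{\,\Delta E / kT},
```

so the fractionation passed on to species such as DCO + and HDO weakens as T rises toward the inner disk.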
The ratio of HDO/H 2 O shows a more complicated behavior than described above. It has a local peak at R ∼ 100 AU, and the peak is higher in the more massive disk model. At R ∼ 100 AU, the midplane is at 14-16 K for the three disk models, at which temperature CO is almost completely frozen onto grains but O can marginally be kept in the gas phase to produce water vapor. The D/H ratio in the vapor is enhanced by the CO depletion in the region, since CO is one of the main destruction channels of H 2 D + (Brown et al. 1989). The layer at this critical temperature (14 K ≲ T ≲ 16 K) is thicker in the more massive disk. At smaller radii, the midplane temperature is higher, which lowers the D/H ratio. At larger radii for disks with Ṁ = 10 −7 and 10 −8 M ⊙ yr −1 , the midplane temperature is so low that O cannot remain in the gas phase to produce water vapor. The abundance of HDO has a sharp peak in a thin layer of 14 K ≲ T ≲ 16 K offset from the midplane (Fig. 2 d). In the outer disk of the Ṁ = 10 −9 M ⊙ yr −1 model, the temperature is slightly higher than 20 K even in the midplane (see Fig. 1), so that water vapor can be maintained there.
[Fig. 4 caption: Column density ratios of deuterated species to normal species as functions of radius in the three accretion disk models at an age of t = 1.0 10 6 yr. The models are represented by the same lines as in the previous figure.]
Column densities
In the subsequent paragraphs, we discuss comparisons of our calculated column densities for assorted species at a particular radius with both single-dish and interferometric data. It must be recognized, however, that it is not really possible to derive reliable column densities at a particular radius from unresolved single-dish data, making much of the subsequent discussion less quantitative than would be desirable. Table 1 compares calculated molecular column densities obtained with three different accretion rates at a time of t = 1.0 10 6 yr and a radius of R = 373 AU with those estimated from the observations of DM Tau (Dutrey et al. 1997) and LkCa15 (Qi 2000, Thi et al. in preparation). In general, it is difficult to estimate the total H 2 column density in the disk, especially at the outer radius. Dust observations suffer from uncertainties in the dust opacity at millimeter wavelengths, which depends on the grain size distribution. Moreover, the dust continuum is very weak at the outer radii and difficult to detect. The H 2 column density cannot be directly estimated from molecular lines, because the molecular abundances relative to hydrogen are not known. Therefore Dutrey et al. (1997) estimated the H 2 column density and averaged molecular abundances for DM Tau in a different way, paying attention to the critical density for excitation of the molecular lines and their optical depth. For the DM Tau disk, interferometric observations of 12 CO (J = 2 − 1) show that the CO gas extends to ∼ 800 AU from the central star. Combining the different constraints, they obtained an H 2 density n(R, Z) = 5 10 5 (R/500 AU) −3 exp[−(Z/H) 2 ] cm −3 over this range, in which H is the scale height of the disk, with H ≈ 175 AU at R = 500 AU. From the line intensities obtained in single-dish telescopes, they subsequently derived molecular abundances with respect to hydrogen assuming that the abundances are constant over the entire disk. We obtained the molecular column densities of DM Tau in Table 1 by vertically integrating this disk model using the average abundances listed in Table 1 of Dutrey et al. (1997).
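The vertical integration behind the DM Tau entries is straightforward to reproduce. The sketch below assumes, for illustration only, that the scale height grows linearly with radius from the single quoted anchor H ≈ 175 AU at 500 AU; the paper itself does not state this scaling:

```python
import numpy as np

AU = 1.496e13  # cm

def n_h2(r_au, z_au, h500=175.0):
    """DM Tau H2 density of Dutrey et al. (1997), in cm^-3.
    H ∝ R is an assumption made here; only H ≈ 175 AU at 500 AU is quoted.
    """
    h = h500 * (r_au / 500.0)
    return 5e5 * (r_au / 500.0) ** -3 * np.exp(-((z_au / h) ** 2))

def column_h2(r_au):
    """Vertical H2 column density (both sides of the midplane), cm^-2."""
    z = np.linspace(-2000.0, 2000.0, 20001) * AU
    return np.trapz(n_h2(r_au, z / AU), z)

print(f"N(H2) at 373 AU ~ {column_h2(373.0):.1e} cm^-2")  # ~4e21 cm^-2
```

The same integral with the model abundances of Table 1 of Dutrey et al. (1997) gives the molecular columns quoted in our Table 1.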
Although the DM Tau disk extends to about 800 AU from the central star, and the outer radius would have a larger contribution to the line intensities because of the larger surface area, we list the calculated column density at R = 373 AU, since the D'Alessio et al. model extends only out to this radius (the H 2 column density at R = 800 AU is about 3 times less than that at R = 373 AU in the DM Tau disk). The model with an accretion rate Ṁ = 10 −9 M ⊙ yr −1 reproduces reasonably well the vertical H 2 column density of DM Tau, and agrees with other DM Tau observations to within a factor of 2, except for CO and C 2 H. The former is overestimated, while the latter is underestimated. It should be noted that we have assumed that the stellar UV is not hard enough to dissociate CO, so that our model may overestimate the CO column density. Inclusion of CO photodissociation by stellar UV might also increase the C 2 H and CN abundances, since more carbon is released from CO in the photodissociation region, in which C 2 H and CN are mainly formed (van Zadelhoff et al. in preparation). Qi (2000) performed interferometric observations of LkCa15 and estimated beam-averaged molecular column densities based on the velocity-integrated intensity measured over a much smaller beam than used to determine the DM Tau abundances. The vertical column densities are lower than the listed values by a small factor, which is less than 2 if the inclination is ≲ 60 degrees. Since the beam size is about 0.6 ′′ − 13 ′′ depending on the frequency of the line, the estimated values do not necessarily refer to 373 AU. If the emission is resolved and if we assume that the molecular column densities do not vary much with radius (as indicated by our disk models for R ≳ 100 AU), we can take the values listed by Qi (2000) to apply to 373 AU. The typical beam size of Qi (2000) is 3 ′′ − 4 ′′ (∼300 AU radius at the distance of LkCa15), so the majority of the results will be just resolved, making it reasonable, if not perfect, to compare the observations with our result at R = 373 AU.
In addition to the interferometric work on LkCa15, Thi et al. (in preparation) derived molecular column densities for this disk from high-frequency single dish observations of various species. Their beam-averaged values obtained with their assumption that the disk has a radius of 100 AU are included in Table 1 and differ from the interferometric column densities by up to an order of magnitude. Both values are much higher than the column densities found for DM Tau, so the agreement between our model and observations is worse for LkCa15 than for DM Tau. For CO and HCO + the difference is less than a factor of 2, but the column densities of other species are about 1-2 orders of magnitude higher than the model results for all three mass accretion rates. Methanol in LkCa15 appears to be a special case; its calculated column density is more than five orders of magnitude too low, presumably because it is produced by the hydrogenation of CO on grain surfaces, which is not included in our model, and then evaporated into the gas.
The use of 100 AU for LkCa15 as a reference point by Thi et al. is somewhat arbitrary. If the values obtained by Thi et al. are taken to refer to a 373 AU radius disk, their LkCa15 column densities need to be reduced by a factor (100/373) 2 , and become closer to those found for DM Tau. Thi et al. and Qi (2000) also present data for a few other disks (TW Hya, HD 163296, and MWC 480) and determine column densities and abundances in a consistent way. Indeed, the DM Tau column densities and abundances are generally lower than the values for other disks, whereas those for LkCa15 are among the highest. Thus, these two sources appear to bracket the range of observed values.
Line intensities and profiles
In addition to the comparison with estimated molecular column densities at a single radius, a more direct comparison via line intensities from the entire model disk has been made. Since the most complete single-dish and interferometer data set is available for LkCa15, we restrict our efforts to this source. In estimating the molecular column densities from the observations, Qi (2000) assumed local thermal equilibrium (LTE) with an excitation temperature of 40 K throughout the LkCa15 disk, thereby deriving a mean column over the beam. In this paper, however, we have shown that a temperature gradient is important for the characteristics of the gaseous molecular layer in the disk and that the abundances vary strongly with R and Z. Also, the excitation of the molecules and the line emission depend on the density structure; van Zadelhoff et al. (2001) have shown that the assumption of LTE is not always valid for high-frequency lines of molecules with high critical densities. Hence, it is better to compare simulated line emission directly with observations.
We calculate here the excitation of the molecules using the 2-dimensional (2D) NLTE molecular line radiative transfer code of Hogerheijde & van der Tak (2000). From the resulting level populations, the line profiles can be computed, taking into account the inclination of the source. NLTE molecular line radiative transfer in more than one dimension has been used in star-formation research only recently (e.g., Park & Hong 1995, Juvela 1997). The need for a full treatment of the radiative transfer in disks follows from the non-locality of the problem. The level populations depend both on the local parameters (temperature, density and radiation field) and on the global radiation field, which in turn depends heavily on the optical depth of the medium. The main problem is the slow convergence of the level populations. The code of Hogerheijde & van der Tak (2000) uses an accelerated Monte Carlo method to enhance convergence. A more elaborate discussion of the methods is given in van Zadelhoff et al. (2001).
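The non-LTE effect that motivates this full treatment can already be seen in a two-level system in statistical equilibrium. The sketch below neglects the radiation field entirely, and the rate coefficients are round numbers of the right order rather than actual molecular data:

```python
import numpy as np

def excitation_temperature(n_h2, t_kin, a_ul, c_ul0, t0, g_u=3.0, g_l=1.0):
    """Two-level excitation temperature, neglecting the radiation field.

    n_h2  : collider density [cm^-3]
    t_kin : kinetic temperature [K]
    a_ul  : Einstein A coefficient [s^-1]
    c_ul0 : downward collisional rate coefficient [cm^3 s^-1]
    t0    : transition energy E/k [K]
    """
    c_ul = c_ul0 * n_h2
    c_lu = c_ul * (g_u / g_l) * np.exp(-t0 / t_kin)   # detailed balance
    ratio = c_lu / (c_ul + a_ul)                      # n_u / n_l
    return t0 / np.log((g_u / g_l) / ratio)

# Illustrative numbers for a high-frequency line: A ~ 1e-4 s^-1 and
# <sigma v> ~ 1e-10 cm^3/s give a critical density n_crit ~ 1e6 cm^-3.
for n in (1e4, 1e6, 1e8):
    tex = excitation_temperature(n, 30.0, 1e-4, 1e-10, 25.0)
    print(f"n(H2) = {n:.0e}: T_ex ~ {tex:.1f} K (T_kin = 30 K)")
```

Below the critical density the excitation temperature drops well under the kinetic temperature, which is exactly why an LTE assumption fails for high-frequency lines in the tenuous upper layers.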
The main input parameters to the radiative transfer calculations, in addition to the molecular data (collision rates, Einstein A coefficients), are the 2D distributions of temperature, density and abundance of the molecule of interest at each position (R, Z). The latter is taken directly from the chemical models at a chosen age. Other important parameters are the turbulent width, taken to be 0.2 km s −1 (van Zadelhoff et al. 2001), the systematic Keplerian velocity field for an assumed stellar mass, the thermal line width, the size of the disk, and the inclination of the source. Qi (2000) estimates a disk inclination of ∼ 58±10 degrees and an outer radius of the CO disk of 435 AU for LkCa15. In the following, we assume a disk inclination of 60 degrees and a size (outer radius) of 400 AU. The dependence of line emission and profiles on disk radius and inclination can be roughly estimated (Omodaka et al. 1992); e.g., for optically thick species the peak intensities of molecular lines are proportional to the square of the disk radius. The dependence of the integrated intensities on disk inclination is also discussed in van Zadelhoff et al. (2001), and is found to be less than a factor of two when the inclination is varied from 0 to 60 degrees. In spite of the fact that the column densities of CO and HCO + in our model are slightly smaller than those estimated by Qi (2000), the calculated intensities of these species are higher than observed in LkCa15 by a factor of 2 − 3, which is caused by the higher disk temperatures in our model. The vertical temperature gradient lowers the optical depth of the disk, which further enhances emission intensities (van Zadelhoff et al. 2001). As opposed to CO and HCO + , the model intensities of CN and HCN are much lower than those observed in LkCa15. Those lines are optically thin in our model, and about 10 times more HCN and at least 50 times more CN are needed to fit the observed profiles, which is consistent with the comparison of column densities. The dotted lines in the CN and HCN panels (Fig. 5) show model profiles in which the molecular abundances are artificially enhanced. We conclude that CN and HCN in LkCa15 are much more abundant than in our model.
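For intuition about how the Keplerian field, disk size and inclination shape these profiles, the following crude, optically thin toy model (every annulus emits uniformly; none of the excitation physics above is included) produces the characteristic double-peaked line shape:

```python
import numpy as np

G = 6.674e-8           # cgs
M_SUN = 1.989e33
AU = 1.496e13

def keplerian_profile(m_star=0.5, r_in=20.0, r_out=400.0, incl_deg=60.0,
                      nbins=81):
    """Area-weighted histogram of projected Keplerian velocities."""
    # Sample radii uniformly in area, azimuth uniformly in angle.
    r = np.sqrt(np.random.uniform(r_in**2, r_out**2, 200_000))
    phi = np.random.uniform(0.0, 2.0 * np.pi, r.size)
    v_kep = np.sqrt(G * m_star * M_SUN / (r * AU)) / 1e5      # km/s
    v_los = v_kep * np.cos(phi) * np.sin(np.radians(incl_deg))
    hist, edges = np.histogram(v_los, bins=nbins, range=(-5, 5))
    return 0.5 * (edges[:-1] + edges[1:]), hist

v, prof = keplerian_profile()
# The two peaks sit near the projected velocity of the outer edge,
# which is why the assumed outer radius matters when fitting profiles.
print(f"peak near |v| ~ {abs(v[prof.argmax()]):.2f} km/s")
```

For the adopted 0.5 M ⊙ star and 400 AU outer radius, the peaks fall near ±1 km s −1 , of the order seen in T Tauri disk line profiles.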
Discussion
What are the causes of the disagreement between our model results and LkCa15, and of the difference between DM Tau and LkCa15? Let us first discuss the uncertainties in the estimate of the molecular column densities and the size of the emission region. Although there are some differences in observed intensities, the lines in LkCa15 are, in fact, not much stronger than those in DM Tau, at most a factor of 2-3 after scaling the data to the same beam. The main reason for the different column densities in DM Tau and LkCa15 then seems to be the adopted size of the emission region, because the column densities are inversely proportional to the square of the size (radius) of the emission region. For CO, where interferometric observations have been performed both for DM Tau and LkCa15, the DM Tau disk is found to be somewhat larger than the LkCa15 disk: DM Tau is about 800 AU in radius, while LkCa15 is 435 AU. But whether this larger emission region in DM Tau also holds for other species is not yet known with certainty, because the DM Tau data come mainly from single-dish observations, and the size of the region and average molecular abundances are estimated from a model with constant abundances throughout the disk. For LkCa15, in contrast, the sizes of the emission regions are directly derived from interferometric observations and are at most a few arcseconds. Assuming that the interferometer resolves the emission, these column densities should therefore not be affected by uncertainties in the size of the emission region. Although interferometry is more direct, and thus seems more reliable for obtaining the size of the emission region, it should be noted that the derived size depends on the signal-to-noise ratio and dynamic range achieved in the observations. In fact, the size of the emission region obtained via interferometry does seem to contain uncertainties; the size of the CO (J = 2 − 1) emission region around LkCa15 is estimated to be ∼ 600 AU by Duvert et al. (2000), which is larger than the 435 AU obtained by Qi (2000).
Since our 2D modeling procedure convolves the calculated molecular emission with the actual beam of the observations, a larger disk size cannot explain the discrepancy with the LkCa15 interferometer observations, but it does affect the calculated single-dish intensities. Such an increase would not be sufficient to explain the discrepancies for LkCa15, however. For example, if we assume an outer disk radius of 600 AU, the calculated HCN and CN line intensities from our model disk (Fig. 5) are increased by a factor of 2 at most, while the calculated CO and HCO + lines become even stronger compared with the observed profiles. There are a few possible explanations for these discrepancies. First, our model might overestimate the CO column density and underestimate the radical column densities because we do not consider dissociation of CO via stellar UV radiation, as mentioned above. Detailed consideration of stellar UV radiation, including CO and H 2 dissociation, will be reported in a forthcoming paper. Indeed, CN is enhanced by an order of magnitude depending on the treatment of the radiation field, although HCN is not much changed. The inclusion of X-rays might be another solution. X-rays cause ionization and dissociation, which enhance chemical activity, and hence increase the transformation of CO to other organic molecules such as CN and HCN (Aikawa & Herbst 1999a). Different X-ray fluxes might also account for the differing molecular column densities in DM Tau and LkCa15, if they are intrinsic. X-rays also cause non-thermal desorption, which might enhance the CN and HCN abundances in the gas phase (Najita, Bergin & Ullom 2001). Finally, of the disks around T Tauri stars surveyed so far, LkCa15 stands out as the disk with the strongest molecular lines and richest chemistry in the interferometer and single-dish data (see §4.1, Qi 2000, Thi et al. in preparation).
Conclusion and Discussion
We have investigated the molecular distributions in protoplanetary disks by combining the "new standard" chemical model with the physical model of D'Alessio et al., which has a temperature gradient in the vertical direction. The calculated molecular column densities are in reasonable agreement with those estimated from single-dish observations of the DM Tau disk, without the assumptions required in previous calculations of non-thermal desorption and/or an artificially low sticking probability. In the warmer intermediate layers of our disk models, there are large amounts of gaseous molecules owing to thermal desorption and to efficient UV shielding caused by the large gas densities at large heights from the midplane, a phenomenon known as flaring. Gaseous molecules are abundant only in regions with certain physical conditions. The volume of the layers with these conditions, and thus the column densities of gaseous molecules, are not proportional to the total (H 2 ) column density. Column densities of abundant molecules such as CN, HCN and HCO + do not vary by more than a factor of three during the period t ∼ 10 5 − 10 6 yr. Sulfur-bearing molecules and H 2 O show larger temporal variations. Comparison of our model results with those of Willacy & Langer (2000), who adopted the C-G disk model with a Gaussian density distribution, indicates that gaseous molecular abundances are sensitive to the vertical structure of the disk model; in their model, molecules in the super-heated upper layer are destroyed by the harsh ultraviolet radiation from the star, while sufficient UV shielding is available in the warm upper layers of the D'Alessio et al. model. Deuterated species are also included in our chemical model. The molecular D/H ratios we obtain are in reasonable agreement with those observed in protoplanetary disks.
Despite our agreement with observations of DM Tau, the molecular column densities obtained in our models are smaller than those observed around LkCa15, except for CO and HCO + . The estimated column densities of all observed molecules around LkCa15 are higher than those around DM Tau by roughly an order of magnitude. This difference in derived molecular column densities in the two objects seems to derive, at least partially, from different and/or uncertain sizes of the emission regions. Comparison with other sources shows that some of the difference is likely to be intrinsic, and other physical parameters or processes, such as X-ray ionization and dissociation, are needed to account for the high column densities in the LkCa15 disk.
In addition to the calculation of molecular abundances and column densities, we have solved the equation of radiation transfer to obtain line profiles from our model disks, which can be directly compared with observations. Such a comparison is a much more detailed test of theory than is a comparison of column densities, since line intensities depend not only on the molecular column densities but also on the density and temperature of the molecular layer and the variation of the abundance of the molecule with R and Z. The line intensities of HCN and CN obtained from the theoretical models are lower than the observed intensities in LkCa15, as expected from the comparison of column densities.
There are still several uncertainties in the vertical structure of protoplanetary disks which might affect our results. Firstly, Chiang & Goldreich (1997) argue that the gas temperature could be lower than the dust temperature in the upper layers; although an important heat source of the gas is collisions with super-heated dust particles, gas-dust collisions are not frequent enough to equilibrate the gas and grain temperatures because of the low density. With a lower gas temperature, the disk would be less flared than in the model of D'Alessio et al., which assumes equal temperatures of gas and dust. Glassgold & Najita (2001) have pointed out, however, that gas in the upper layer can be heated by X-rays, which were not considered by Chiang & Goldreich (1997). Since Glassgold & Najita (2001) list the temperature only at the inner radius (R = 1 AU), we made a rough estimate of the disk surface temperature for the outer radii R ∼ 100 − 300 AU based on the work of Maloney et al. (1996), which suggests that the surface temperature of the X-ray irradiated disk is indeed higher than the midplane (interior) temperature of the C-G and Kyoto models. Moreover, UV photons can heat the gas through the photoelectric effect on grains and PAHs, as in models of photon-dominated regions. Hence we can at least expect disk densities to fall off more slowly with increasing height than in a simple Gaussian distribution, although detailed studies of the heating and cooling balance between gas and dust are desirable in order to obtain an accurate vertical structure for the disk. Another uncertainty lies in the size and distribution of the dust particles. D'Alessio et al. (2001) and Chiang et al. (2001) have noted that their original models are geometrically too thick compared with observations of edge-on disks, which suggests dust sedimentation and/or growth in the disk. Because the molecular abundances in our model depend on the efficiency of UV shielding by "small" (i.e. interstellar) dust grains (Aikawa & Herbst 1999a), we might have overestimated these abundances. Although a more detailed approach with dust sedimentation is beyond the scope of this paper, we emphasize that molecular abundances can help to resolve uncertainties in dust evolution and disk structure.
"year": 2002,
"sha1": "bbcaa33a394005732eb7d2c7167028ee43cd81b7",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2002/17/aa1996.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "bbcaa33a394005732eb7d2c7167028ee43cd81b7",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
Photodynamic therapy of early stage oral cavity and oropharynx neoplasms: an outcome analysis of 170 patients
The indications of photodynamic therapy (PDT) of oral cavity and oropharynx neoplasms are not well defined, mainly because the success rates are not well established. The current paper analyzes our institutional experience with early stage oral cavity and oropharynx neoplasms (Tis-T2) to identify the success rates for each subgroup according to T stage, primary or non-primary presentation, and subsite. In total, 170 patients with 226 lesions were treated with PDT. Of these lesions, 95 were primary neoplasms and 131 were non-primary (recurrences and multiple primaries). The overall response rate is 90.7%, with a complete response rate of 70.8%. Subgroup analysis identified the oral tongue and floor of mouth as sites with more favorable outcomes. PDT has more favorable results at certain subsites and with previously untreated lesions. However, PDT can find its place for treating lesions in previously treated areas, with acceptable results.
Introduction
Photodynamic therapy (PDT) is a relatively new method for the management of head and neck neoplasms. Surgery and radiotherapy are long accepted as the standard treatment for head and neck neoplasms, with long and successful track records [1,2]. This standard approach has an established success rate approaching 95% for early stage neoplasms [3,4]. The upper aerodigestive tract has important functions, such as respiration, swallowing and phonation. Standard treatment of neoplasms of the head and neck region, although infrequently with superficial neoplasms, might compromise one or more of these functions. Especially with repeated treatments for recurrent and multiple primaries, these problems become more evident [5][6][7][8]. New reconstructive techniques, less invasive surgical modalities such as laser or robot-assisted endoscopic surgery, and more precise delivery of radiation with techniques such as intensity-modulated radiation treatment (IMRT) all aim to decrease the morbidity rate [9][10][11][12].
PDT is searching for its role in the era of conservation surgery. The advantage of PDT is local treatment without long-term systemic effects. The photosensitizing drug is activated by light delivered directly onto the neoplasm, sparing the surrounding normal mucosa. Protection of the surrounding tissue is further assured by shielding with wet sponges or special shielding waxes.
An additional advantage of PDT is that it acts via cytotoxicity rather than destructive effects. This means that when cancer cells are eliminated via apoptosis, the extracellular matrix remains, forming a scaffold for the surrounding mucosal tissue to advance over [13,14]. Scar formation is minimal, and the native tissue that replaces the cancer cells maintains its normal functions, limiting the functional loss significantly.
Perhaps the most important aspect of PDT is its repeatability. PDT can be applied to the same area without accumulating destructive effects [15][16][17]. It also does not negatively affect further treatments, such as radiation or chemotherapy [15], leaving further treatment options open.
Despite these advantages, PDT remains unknown to many head and neck oncologists [17][18][19][20]. Because PDT is still finding its place in the management cascade of head and neck oncology, we believe it is important to have a basic knowledge of what degree of success can be expected in various clinical scenarios. The current paper retrospectively analyzes a subgroup of patients with early stage oral cavity and oropharynx neoplasms to demonstrate what percentage of success to expect with PDT. This analysis includes neoplasms of different subsites of the oral cavity and oropharynx, and primary neoplasms versus recurrences or multiple primary neoplasms in previously conventionally treated fields.
Materials and methods
The registry of patients (294 patients between 1996 and 2008) treated with PDT was retrospectively screened to identify patients with early stage (Tis, T1, T2) oral cavity and oropharynx squamous cell cancers or carcinoma in situ. One hundred and seventy patients were included in the analysis.
All patients were subject to pretreatment evaluation according to the guidelines of our institute.
Pre-treatment work-up
Standard minimum evaluation consisted of biopsy to determine the histological diagnosis of squamous cell cancer or carcinoma in situ (Tis); X-ray of the thorax; and ultrasound (US) of the neck with fine-needle aspiration (FNA) to determine nodal status. Tumor thickness was measured either by US or magnetic resonance imaging (MRI). The whole surface of oral cavity tumors was screened with an oral US probe to determine the deepest point of infiltration. Only tumors with a depth of invasion of 5 mm or less were treated with surface PDT. Oropharyngeal tumors could not be reached by US and were therefore evaluated by the less accurate method of MRI. If the tumor could not be visualized on MR images, or showed surface contrast uptake without evidence of deep invasion, the patient received PDT. The cut-off depth of 5 mm was chosen to enable treatment of a 5 mm margin deeper than the tumor; the total of 10 mm is the average penetration depth of the 652 nm light used to activate the photosensitizer.
The work-up for regional lymphatic status was US of the neck and FNA of suspect lymph nodes. All patients who received PDT had N0 stage according to US criteria. The guidelines of our institute involve no prophylactic neck dissections for T1 and T2 oral cavity tumors if they are candidates for transoral excision or PDT. The neck node status was followed by US combined with FNA at 6-month intervals. Patients with suspect lymph nodes received modified radical neck dissection.
Patients were informed of the potential benefits and risks of the treatment, instructed about light avoidance measures, and supplied with a light meter to measure ambient light. mTHPC (temoporfin, Foscan®, Biolytec Pharma Ltd., Dublin, Ireland) at a dose of 0.15 mg/kg was injected into a deep vein by slow infusion. Patients were discharged home after the injection. Illumination took place 96 h after the mTHPC injection. Light was delivered by a diode laser at 652 nm. The delivered dose was 20 J/cm 2 at a fluence rate of 100 mW/cm 2 . The preferred method was to deliver the light as one spot via a microlens. If one-spot illumination was not applicable due to the shape or location of the neoplasm, multiple spots were used. The lesion plus a 5 mm margin of normal-appearing mucosa was illuminated. All patients received corticosteroids and pain management. Hospital stay depended on the functional limitations after PDT due to location and extent of the neoplasm, as well as edema due to PDT.
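The illumination time per spot follows directly from the quoted dose and fluence rate:

```latex
t \;=\; \frac{D}{\phi} \;=\; \frac{20\ \mathrm{J\,cm^{-2}}}{100\ \mathrm{mW\,cm^{-2}}} \;=\; 200\ \mathrm{s} \;\approx\; 3.3\ \mathrm{min}.
```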
Tumor response was analyzed according to the World Health Organization (WHO) criteria [20]:
Complete response (CR): the disappearance of all known disease.
Partial response (PR): 50% or more decrease in the dimensions of the tumor.
No response (NR): less than 50% decrease or less than 25% increase in tumor dimensions.
Progressive disease (PD): 25% or more increase in the size of one or more measurable lesions, or the appearance of new lesions.
All responses had to be confirmed by two observations not less than 4 weeks apart. Overall response (OR) is the sum of complete and partial responders.
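These categories map onto a simple decision rule. The helper below is a hypothetical illustration of the quoted criteria, not code used in the study, and it omits the required confirmation by two observations at least 4 weeks apart:

```python
def who_response(change_pct, new_lesions=False):
    """Classify tumor response under the WHO criteria quoted above.

    change_pct : percent change in tumor dimensions (negative = shrinkage)
    new_lesions: True if new lesions appeared
    """
    if new_lesions or change_pct >= 25:
        return "PD"    # progressive disease
    if change_pct <= -100:
        return "CR"    # complete response: all disease has disappeared
    if change_pct <= -50:
        return "PR"    # partial response
    return "NR"        # no response

assert who_response(-100) == "CR"
assert who_response(-60) == "PR"
assert who_response(-10) == "NR"
assert who_response(30) == "PD"
```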
Follow-up
The follow-up schedule was 1, 2, 4, 8, 16, 24, 36 and 52 weeks after PDT, followed by routine controls every 4 months for a total of 5 years.
Patients
Of the 170 patients evaluated, 80 were female and 90 were male. The average age at treatment was 60.5 years. A total of 226 neoplasms were treated with PDT; 35 patients were treated for 2 or more neoplasms. Of the evaluated neoplasms, 95 were primary neoplasms and 131 were non-primary neoplasms, consisting of 65 recurrences, 46 second, 9 third and 3 fourth primary neoplasms, and 8 residual neoplasms after initial treatment. Previous treatments within the non-primary neoplasm group (n = 131) included radiotherapy in 48.1%, chemoradiation in 22.7% and previous surgery in 75.6% of patients.
Sub-site analysis
Response rates by subsite are given in Table 1. The oral tongue has a significantly better response rate than the rest of the group (P < 0.05), and the alveolar process a significantly lower response rate than the rest of the group (P < 0.05).
The mean local disease-free interval for primary neoplasms with a CR is 117.8 months (95% CI 102.1-133.6 months). For non-primary neoplasms with a CR, the interval is 84.9 months (95% CI 67.2-102.7 months). Disease-free survival at 1, 2 and 5 years is 90, 85 and 74%, respectively, for patients with primary neoplasms, and 81, 64 and 48%, respectively, for non-primary neoplasms (Fig. 2). The difference in local disease-free survival between primary and non-primary neoplasms is statistically significant (P < 0.001).
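A minimal sketch of how such disease-free-survival curves and a two-group comparison can be produced, assuming a lifelines-style workflow; the synthetic follow-up times below are placeholders, since the real per-lesion data behind Fig. 2 are not reproduced here, and the paper does not state which test was used:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical months-to-recurrence and event flags for the two groups
# (95 primary, 131 non-primary lesions), for illustration only.
t_primary = rng.exponential(120.0, 95)
t_nonprim = rng.exponential(85.0, 131)
e_primary = rng.random(95) < 0.4    # True = recurrence observed
e_nonprim = rng.random(131) < 0.6

kmf = KaplanMeierFitter()
kmf.fit(t_primary, event_observed=e_primary, label="primary")
print(kmf.survival_function_.tail(1))

result = logrank_test(t_primary, t_nonprim,
                      event_observed_A=e_primary,
                      event_observed_B=e_nonprim)
print(f"log-rank p = {result.p_value:.3g}")
```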
Adverse events
Three patients had second-degree burn wounds as a result of failure to comply with the light protection guidelines. All of the burn wounds healed with conservative measures. Nine patients had permanent discoloration at the mTHPC injection site. Five patients had scar formation in the buccal mucosa leading to mild/moderate trismus; these patients were managed with conservative measures involving stretching exercises. Twelve patients required temporary nasogastric tube feeding for an average of 5 days because of inability to swallow. The healing process after PDT takes 2-5 weeks, and the wound heals with minimal scarring after this period. There were no systemic or serious adverse events.
Discussion
Photodynamic therapy has been used in head and neck oncology for some time with reported success [17][18][19][21][22][23][24][25][26][27][28][29][30][31]. Unfortunately, there are no studies comparing PDT with standard treatment methods, such as surgical resection and radiation therapy. To date, all published data concern the development of the PDT technique itself rather than actual comparisons with other methods. The overall clinical response rate after PDT as determined by the present analysis is below the reported success rates of surgery or radiation treatment [1][2][3][4]. The real reason is very difficult to determine. The patient groups are not comparable and, furthermore, additional treatments, such as re-excision for positive resection margins and postoperative radiation, are often included in disease-free survival analyses. The current analysis focuses on finding favorable conditions for PDT and therefore excludes additional treatments, such as surgical resection, from the disease-free survival analysis (these patients are counted as failures). The ideal setup for such a comparison would be a randomised study comparing surgical resection with PDT. For this purpose, however, favorable situations where PDT can be preferred over surgery have to be identified. The PDT literature fails to identify these favored areas because of the small numbers of treated patients and the heterogeneity of the treated groups, making it impossible to give a clear answer to the question of what to treat and, maybe more importantly, what not to treat. These are all preliminary studies that were done to establish treatment protocols for PDT and subsequently treat patients to reach larger series. The results of such larger series are recently becoming available to us.
The longest experience is with photofrin-mediated PDT, with the largest series published by Biel with 276 patients [28]. This series includes laryngeal and oral cavity neoplasms with T1-3 N0 cancers. The CR rates are impressive: 91% for laryngeal neoplasms and 94% for oral cavity tumors. However, there is no subgroup analysis to show whether certain subgroups of neoplasms react better. It is possible to extend the limits of PDT with mTHPC, which is activated with 652 nm light, enabling deeper tissue penetration and treatment of deeper neoplasms [32]. A prospective multinational multi-institutional study with mTHPC-PDT was carried out with the participation of our institute [29]. The target group was patients with small neoplasms of the oral cavity, <25 mm in diameter and <5 mm depth of invasion. This group of well selected oral cavity tumors showed CR to mTHPC-PDT at a rate of 85%, which is surprisingly lower than photofrin PDT [28].
The 5 mm cut-off limit is arbitrary. The international multicentre study carried out by D'Cruz et al. applied PDT to tumors deeper than 10 mm depth of invasion [30]. Treatment of deeper tumors (up to 50 mm) was justified by the aim of the study, which was to provide palliation for head and neck cancers incurable by conventional methods. This great range of thickness enabled the authors to compare tumors with <10 mm depth to tumors with >10 mm depth. Among completely illuminated lesions, those with <10 mm depth had a much better response than deeper tumors, with CR of 60 versus 26% and OR of 75 versus 40%. It is a pity that they did not choose 5 mm depth as their comparison parameter, but we still get the idea that the depth of the lesion is very important.
Our method of choice for determining the depth of the tumor is ultrasound. Endoscopic US is used to stage several digestive system cancers, such as anal cancers, with proven accuracy [33]. Shintani et al. [34] showed that intraoral US measurements of tumor depth correlate well with histologic specimens. The whole surface of the tumor should be screened with US so as not to miss any spot deeper than 5 mm. The inaccuracy of US (if there is any) is compensated by PDT reaching 10 mm depth. The real problem is with oropharyngeal cancers, where the US probe cannot reach or cannot scan the entire surface. MRI can be used to determine depth. If the tumor cannot be seen on MR images, or appears only as a region of contrast uptake, it can be assumed (although there is no evidence to support it) that the tumor is <5 mm deep. This method is by no means accurate and has to be replaced by another method. Introduction of newer methods, such as optical coherence tomography (OCT), might help in the future.
Location of the neoplasm
The oral cavity and oropharynx are not homogeneous structures with constant tissue characteristics. The tissue composition of the alveolar process is clearly different from that of the tongue or soft palate. Furthermore, some areas in the oral cavity and pharynx are flat while others have a complex geography. Multiple spots might have to be used to illuminate all the extensions of the neoplasm, theoretically making the chance of a geographic miss greater.
It is, therefore, important to designate the favorable subsites of the oral cavity and oropharynx. The only subsite that reacts significantly better is the oral tongue. The reason is probably the relative homogeneity of the tissue, the absence of nearby bony structures, and the relative ease of delivering the light in a homogeneous manner onto the flat surface.
Floor of mouth and soft palate have a more complex anatomy than the oral tongue, with proximal bony structures (i.e. mandible and hard palate) probably causing lower CR rates. Buccal mucosa has a relatively flat surface providing ease of illumination and a rather homogeneous structure, with no apparent reason for a poorer outcome than the oral tongue. Although a statistical comparison was not carried out, the buccal mucosa was also observed to form more scar tissue causing mild trismus (5/23 patients).
As can be expected, neoplasms of the alveolar process have a lower CR rate than the overall mean. The alveolar process has a more complex anatomy, with the underlying mandible and three surfaces, making homogeneous illumination harder to achieve. However, it can be argued that the OR rate of 73.7% helps reduce the size of the neoplasm and enables a smaller subsequent excision.
The study by Hopper et al. [29] is the only publication reporting success rates for subsites, with CR rates of 89% for floor of mouth, 83% for lip, 93% for anterior tongue and 83% for buccal mucosa. There are publications that report failures at certain sites, but the numbers are too low to draw any conclusions [27].
The numbers of retromolar trigone, hard palate and nasal cavity neoplasms treated with PDT are not sufficient for statistical analysis (9, 8 and 8 patients, respectively). However, the CR rates of the retromolar trigone (66.7%) and hard palate (62.5%) are near the overall mean (70.8%).
Tumors of the oral tongue, which responds best to PDT, are also usually easily resectable with minimal morbidity, making PDT an unlikely candidate for initial therapy there. Sites that would have functional problems after surgical resection, such as the alveolar process and soft palate, might instead be treated with PDT as initial treatment despite their lower response rates, in order to avoid these functional problems, reserving surgery and/or radiation treatment for failures.
T stage
A lower response rate for larger neoplasms might be expected, because as the neoplasm area gets larger, delivering light evenly gets harder and the risk of a geographic miss gets greater. In our series, we do not observe such a difference. Although T1 tumors have a better CR rate than T2 tumors, the difference is not statistically significant (Table 1). Therefore, we can say that the size of the tumor does not make a difference as long as all of the tumor can be fully illuminated and the depth of invasion is <5 mm.
In our series, Tis shows a higher CR rate than T1 and T2 tumors, at 79.5%. Tis recurs much earlier than T1 and T2 tumors (mean disease-free interval of 65.7 months for dysplasia versus 109.1 months for T1 and 113.4 months for T2 tumors) (Fig. 1). It is well known that patients with Tis of the upper aerodigestive tract are prone to develop new leukoplakia a number of times [35]. It should be kept in mind that PDT can be repeated a number of times with minimal morbidity to treat leukoplakias as they recur. The success of repeated treatments as lesions recur becomes evident from the similar survival rates of patients with Tis and T1 neoplasms. The difference in disease-free interval is made up for either by successful re-treatments or by the relatively less lethal nature of Tis.
Recurrences and multiple primary neoplasms
Recurrences and multiple primary neoplasms pose a challenge to the head and neck oncologist. Most of these lesions occur in previously irradiated or operated fields [36,37]. The study by D'Cruz et al. focused on such patients, who had refractory neoplasms of the head and neck area that were unsuccessfully treated by, or unsuitable for, conventional treatments [30]. They report 38% OR and 16% CR. It should be noted that this study was a multicentre, multinational study with vague inclusion criteria, resulting in treatment of neoplasms that may not be suitable for surface PDT. When lesions that were not fully illuminated or were deeper than 10 mm were excluded, they report an OR rate of 54% with a 30% CR rate.
In our series, a total of 131 recurrent, second, or multiple primary neoplasms were treated with PDT. Even though non-primary neoplasms respond less favorably to PDT than primary neoplasms (65.9 vs. 77.9% CR, respectively, P < 0.05), a CR of 65.9% is a considerable success if we take into account that the treated area had received radiation in 48.1%, chemoradiation in 22.7% and previous surgery in 75.6% of the patients. Furthermore, there is an 86.3% OR, which means that an additional group of lesions decreases in size, making subsequent surgical resection smaller. The difference from the study by D'Cruz et al. can be attributed to our group's more conservative selection criteria.
Management of neck nodes
Elective treatment of the N0 neck in early oral cavity cancers is a point of debate. Studies comparing elective neck dissection with a wait-and-see policy show no difference in overall survival [38]. In experienced hands, US of the neck combined with FNA can reach a sensitivity of 73% and a specificity of 100% [39]. We adopted the wait-and-see policy for patients staged as N0 by US of the neck. Neck recurrences can be detected by regular US and FNA controls, and the patients can receive subsequent neck dissections. This approach resulted in 24.7% of patients receiving neck dissections. All of these patients had resectable neck nodes, which were adequately removed by neck dissection. The number is quite high if one takes into consideration that the tumors in question are thinner than 5 mm. This phenomenon can be explained by the association of the neck recurrence with second primary or recurrent tumors in almost half of the cases. It can be speculated that only 13% of the patients had neck recurrences associated with the tumor treated by PDT. Depending on the preference and policies of the center, elective neck dissection can be combined with PDT. The neck dissection has to be done either before PDT or 2-3 weeks after PDT. There is no evidence to prefer one timing over the other.
Conclusion
Our institutional experience supports the value of temoporfin-PDT in carefully selected patients. The success rate of PDT is independent of T stage as long as the depth of invasion does not exceed 5 mm. Although tongue tumors respond best to PDT, areas that would have functional problems after resection, such as the soft palate (palatum molle), alveolar process and retromolar trigone, can be treated first with PDT to avoid morbidity, reserving surgery and/or radiation for failures/recurrences. Difficult-to-treat lesions such as recurrent neoplasms and multiple primary neoplasms located in previously irradiated or operated fields have a very acceptable CR rate with minimum morbidity, provided that they are carefully selected for eligibility. | 2014-10-01T00:00:00.000Z | 2010-08-13T00:00:00.000 | {
"year": 2010,
"sha1": "2c5f981b623d879dbd3452fe924cd1ba629dfd0e",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00405-010-1361-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "dbdde5174b3551369d7adaf34aecb226ffc4f591",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118357239 | pes2o/s2orc | v3-fos-license | Impurity effects in a vortex core in a chiral p-wave superconductor within the t-matrix approximation
We numerically study the effects of non-magnetic impurity scattering on the Andreev bound states (ABS) in an isolated vortex in a two-dimensional chiral p-wave superconductor. We incorporate the impurity scattering effects into the quasiclassical Eilenberger formulation through the self-consistent $t$-matrix approximation. Within this scheme, we calculate the local density of states (LDOS) around two types of vortices: the "parallel" ("anti-parallel") vortex, where the phase winding of the pair-potential coming from vorticity and that coming from chirality have the same (opposite) sign. When the scattering phase-shift $\delta_0$ of each impurity is small, we find that impurities affect the low energy quasiparticle spectra around the two types of vortex differently, in a way similar to that in the Born limit ($\delta_0\rightarrow 0$). For a larger $\delta_0(\leq \pi/2)$, however, we find that the ABS in the vortex are strongly suppressed by impurities for both types of vortex. We also find a correlation between the suppression of the ABS near vortex cores and the low energy density of states due to impurity bands in the bulk.
Introduction
Quantized magnetic fluxes in type II superconductors are one of the most important phenomena in superconductivity. They dominate various properties of superconductors under high fields, and understanding their behavior is therefore significant for the industrial use of superconducting materials. Recently, the importance of topological phases of condensed matter has been recognized, and such materials have attracted much attention for their novel features, both for theoretical and applicational interests. In superconductors, the topological phase exhibits exact zero energy states localized on the surface or within vortices. Because these states are topologically protected, they can be tolerant against perturbations.
It is well known that within a vortex in clean superconductors there are discrete low energy bound states known as the Caroli-de Gennes-Matricon (CdGM) mode 1 . The energy spacing between CdGM states is of order $\Delta_b^2/E_F$, where $\Delta_b$ is the bulk order-parameter amplitude and $E_F$ the Fermi energy; in topologically trivial superconductors the lowest level of the CdGM mode is about $(1/2)\Delta_b^2/E_F$, in contrast to topological superconductors, whose lowest energy level is exactly zero 2,3,4 . In ordinary superconductors the energy spectrum of the CdGM mode is almost continuous, since $E_F$ is usually much larger than $\Delta_b$. The CdGM mode can also be regarded as a kind of Andreev bound state, and the continuous energy spectrum is well described by this picture 5 . Because $E_F \gg \Delta_b$, there are many nearly zero energy states in the vortex, and these states are not protected by the topology even in topological superconductors. It is therefore not clear how tolerant the zero energy state is toward perturbations, even if it is itself protected.
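To give a feel for the scales involved, the following back-of-the-envelope script evaluates the CdGM level spacing for illustrative numbers; the values of $\Delta_b$ and $E_F$ are hypothetical, chosen only to be loosely Sr2RuO4-like, and are not taken from the text.

```python
# Order-of-magnitude estimate of the CdGM level spacing Delta_b**2 / E_F
# and the lowest level (1/2) * Delta_b**2 / E_F in a topologically trivial case.
# Input values are hypothetical (chosen only to be loosely Sr2RuO4-like).
K_B_meV_per_K = 8.617e-2  # Boltzmann constant in meV/K

delta_b = 0.3    # bulk order-parameter amplitude, meV (assumed)
e_f = 300.0      # Fermi energy, meV (assumed)

spacing = delta_b**2 / e_f   # CdGM inter-level spacing, meV
lowest = 0.5 * spacing       # lowest level in the trivial case, meV

print(f"level spacing ~ {spacing:.2e} meV "
      f"(~ {spacing / K_B_meV_per_K * 1e3:.1f} mK)")
print(f"lowest trivial level ~ {lowest:.2e} meV")
# Since E_F >> Delta_b, the spacing is tiny and the spectrum looks continuous.
```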
Two-dimensional chiral p-wave superconductivity, which is believed to be realized in Sr 2 RuO 4 6,7,8,9 and in the thin-film superfluid 3 He-A phase 10 , is one of the topological superconducting states and can host an exact zero energy state in its vortex 3,4 . This order spontaneously breaks time reversal symmetry and has two degenerate ground states, each of which corresponds to an internal angular momentum of its Cooper pairs. Two types of isolated single vortices perpendicular to the plane are considered in this superconductor 11 . One is the vortex whose winding of vorticity is the same as the winding of chirality of the Cooper pair (the "parallel vortex"), and the other is the opposite case (the "anti-parallel vortex") 8,12,13,16 . The total angular momentum of the system l z is ±2 for the former and 0 for the latter.
For an isolated vortex in this system, it has been theoretically predicted that in anti-parallel vortices the effect of non-magnetic, spatially averaged impurities is dramatically suppressed compared to parallel vortices 13,14,15,16 . This effect was also reported in a similar system 17 , and some authors are trying to treat this robustness more generally in terms of "odd-frequency pairing" 16,18 .
However, these studies have been done only in the Born limit, which corresponds to the situation where many very weak scatterers exist randomly and the scattering phase-shift δ 0 of a single impurity potential is 0 in the t-matrix formulation. Recently it has been reported that the core-shrinkage effect 19 of a vortex in two-dimensional s-wave or chiral p-wave superconductors differs between the Born and unitary limits (δ 0 → π/2) 20,21 , and there may be further differences between them besides this effect. Because the phase-shift generally depends on the superconducting material and the species of impurities (see, for example, Ref. 22 ), the Born limit alone is not sufficient for adapting these theories to real materials. In addition, because some authors have reported a different result 23 , we consider that the impurity effect on this system is still unclear.
To clarify the impurity effects on the ABS in the vortex, in this paper we study both the Born and unitary limits and the intermediate regime between them in a fully self-consistent way. We use Eilenberger's quasiclassical theory with the t-matrix formulation 24 . The quasiclassical formulation is a coarse-grained version of the Gor'kov theory (or of the Bogoliubov-de Gennes theory in the pure case) over scales of the order of the Fermi wavelength; for this reason, the quasiclassical theory is well suited to studying real systems such as 3 He and Sr 2 RuO 4 . We think that it is important to search for novel phenomena that survive even in the quasiclassical regime, in order to find new phenomena related to topological superconductors that are accessible in experiments.
Model and Method
We consider two-dimensional chiral p-wave superconductors with an isotropic circular Fermi surface in the type II limit, i.e., the ratio of the magnetic penetration depth to the coherence length is taken to be infinity.
In the quasiclassical theory of superconductivity, the electronic structure of quasiparticles is described in terms of the quasiclassical Green function $\check g$, Eq. (1), which is defined as the Gor'kov Green function integrated over the magnitude of the quasiparticle energy. Throughout this paper we use dimensionless parameters: $\bar r$ denotes position in units of the coherence length at zero temperature without impurities, $k_F$ is the Fermi wave number, $T_{c0}$ is the critical temperature without impurities, and $\Delta_0$ is the amplitude of the zero-temperature order parameter in the bulk without impurities. The quasiclassical Green function (1) satisfies the Eilenberger equation (2) and the normalization condition $\check g^2 = -\pi^2\check 1$. Here $\check\tau_i$ ($i = 1, 2, 3$) denote the Pauli matrices in particle-hole space. The symbols $\check\Sigma$ and $\check\Delta$ denote, respectively, the impurity self-energy and the pair-potential. Within the t-matrix approximation, the character of the impurities is parametrized by the scattering rate in the normal state $\bar\Gamma_n = \hbar(2\tau_n\Delta_0)^{-1}$ and the phase-shift $\delta_0$ of a single impurity. The impurity self-energy is expressed in terms of $\check g$, $\bar\Gamma_n$ and $\delta_0$ as in Eq. (4) 13 .
The notation $\langle A\rangle$ denotes the average of $A$ over the Fermi surface; in this case it can be expressed as $\langle A\rangle = \int_0^{2\pi} d\alpha\, A(\alpha)/(2\pi)$, where $\hat k = (\cos\alpha, \sin\alpha)$. The pair-potential has the matrix form $\check\Delta(\bar r,\hat k) = \begin{pmatrix} 0 & \bar\Delta(\bar r,\hat k) \\ -\bar\Delta^*(\bar r,\hat k) & 0 \end{pmatrix}$, where $\bar\Delta(\bar r,\hat k)$ satisfies the gap equation (6). Here $\bar\omega_n = (2n+1)\pi k_B T/\Delta_0 = (2n+1)e^{\gamma}\bar T$ is the dimensionless Matsubara frequency, $\lambda$ is the (dimensionless) coupling constant 13 , and $\gamma \approx 0.5772$ is the Euler constant, which comes from the relation $k_B T_{c0} = e^{\gamma}\Delta_0/\pi$. The symbol $\bar\omega_c$ is a cut-off frequency.
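Numerically, the Fermi-surface average above is just a periodic angular integral, which converges rapidly on a uniform grid. The sketch below is a minimal illustration, not the authors' code; the test function is arbitrary.

```python
import numpy as np

# Fermi-surface average <A> = (1/2pi) * integral_0^{2pi} A(alpha) d(alpha),
# evaluated on a uniform angular grid (the trapezoid rule is spectrally
# accurate for smooth periodic integrands). Minimal illustration only.
def fs_average(A, n_alpha=64):
    alpha = np.linspace(0.0, 2.0 * np.pi, n_alpha, endpoint=False)
    return np.mean(A(alpha))  # uniform grid: mean == (1/2pi) * integral

# Arbitrary test: <cos^2(alpha)> should be 1/2.
print(fs_average(lambda a: np.cos(a) ** 2))  # -> 0.5
```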
In the absence of external magnetic fields, chiral p-wave superconductors have two-fold degenerate thermodynamic states with the pair-potential $\bar\Delta(\hat k) \propto \exp(\pm i\alpha)$; each state has Cooper pairs with internal angular momentum $\pm\hbar$.
In a chiral p-wave state with a single vortex of positive vorticity at $\bar r = 0$, the pair-potential $\bar\Delta(\bar r,\hat k)$ has the asymptotic form $e^{i(\phi\pm\alpha)}$ far away from the vortex center. In the intermediate regime at finite $\bar r$ around the single vortex, Cooper pairs of both chiralities $\pm\hbar$ coexist. Taking account of the two-dimensional rotational symmetry around $\bar r = 0$, we can write the pair-potentials 13 generally in the forms $\bar\Delta^{(p)}(\bar r,\hat k)$ for the parallel vortex (total angular momentum $l_z = \pm 2$) and $\bar\Delta^{(a)}(\bar r,\hat k)$ for the anti-parallel vortex ($l_z = 0$). The subscripts + and − denote the dominant and induced components of the pair-potential; the latter vanishes far away from the vortex center. We numerically calculate the quasiclassical Green functions $\check g$ around the isolated vortex in a self-consistent way through successive iteration of the Eilenberger equation (2), the Dyson equation (4) and the gap equation (6).
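The self-consistent cycle just described has the generic structure sketched below. This is a schematic skeleton, not the authors' implementation: `solve_eilenberger`, `t_matrix_self_energy` and `gap_equation` stand in for the solvers of Eqs. (2), (4) and (6), and the tolerance and mixing parameter are arbitrary choices.

```python
import numpy as np

# Schematic self-consistency loop: Eilenberger eq. (2) -> t-matrix/Dyson
# eq. (4) -> gap eq. (6), iterated until the pair-potential stops changing.
# The three callables are placeholders for the actual solvers (not shown).
def self_consistent_vortex(delta0, sigma0, solve_eilenberger,
                           t_matrix_self_energy, gap_equation,
                           tol=1e-6, mix=0.5, max_iter=500):
    delta, sigma = delta0, sigma0
    for it in range(max_iter):
        g = solve_eilenberger(delta, sigma)          # Eq. (2) on all trajectories
        sigma = t_matrix_self_energy(g)              # Eq. (4)
        delta_new = gap_equation(g)                  # Eq. (6)
        err = np.max(np.abs(delta_new - delta))
        delta = mix * delta_new + (1 - mix) * delta  # linear mixing for stability
        if err < tol:
            return delta, sigma, g
    raise RuntimeError("self-consistency not reached")
```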
We solve Eq. (2) using the Riccati transformation 26,27 . Equation (2) is solved on a line (a so-called "quasiclassical trajectory") with constant $\bar b = \bar{\boldsymbol r}\cdot(\hat z\times\hat k)$. Note that $\bar b$ can be regarded as the impact parameter, which is the quasiparticle angular momentum divided by the Fermi momentum.
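As described in the next paragraph, the trajectory ODE can be integrated with the classical fourth-order Runge-Kutta scheme. The sketch below integrates a Riccati-type equation of the generic form $\partial_x a = \Delta(x) - \Delta^*(x)a^2 - 2\omega_n a$ along one trajectory; this form and its sign conventions are an assumption for illustration (conventions differ between papers), and the gap profile used is a toy model, not the chiral p-wave pair-potential of the text.

```python
import numpy as np

# RK4 integration of a Riccati-type amplitude a(x) along one trajectory,
#   da/dx = Delta(x) - conj(Delta(x)) * a**2 - 2 * w_n * a,
# starting from the bulk value at x = -x_c. The equation's form/signs are an
# assumed convention for illustration; Delta(x) below is a toy gap profile.
def rk4_riccati(delta, w_n, x_c=100.0, h=0.01):
    f = lambda x, a: delta(x) - np.conj(delta(x)) * a**2 - 2.0 * w_n * a
    x = -x_c
    # bulk initial value: stable fixed point of f(x, a) = 0 deep in the bulk
    a = delta(x) / (w_n + np.sqrt(w_n**2 + abs(delta(x))**2))
    while x < x_c:
        k1 = f(x, a)
        k2 = f(x + h / 2, a + h * k1 / 2)
        k3 = f(x + h / 2, a + h * k2 / 2)
        k4 = f(x + h, a + h * k3)
        a += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return a

# Toy gap profile: amplitude suppressed near the core, |Delta| -> 1 in the bulk.
toy_delta = lambda x: np.tanh(np.hypot(x, 0.5))  # impact parameter b = 0.5
print(rk4_riccati(toy_delta, w_n=0.1))
```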
We adopt the classical fourth-order Runge-Kutta method to solve (2). We denote by $x = \bar{\boldsymbol r}\cdot\hat k$ the spatial coordinate along the trajectory. For the initial value at the cut-off $x = \pm x_c$ we use the bulk solution. We set $x_c = 100$ when calculating the pair-potential in the gap equation (6). We set the cut-off frequency $\bar\omega_c = 10$, the same value as in earlier studies 13,16 .
Results
Figures 1 and 2 show the LDOS at the vortex core at $\bar T = 0.1$, $\bar\Gamma_n = 0.3$ in the Born limit (δ 0 → 0) and the unitary limit (δ 0 → π/2), respectively. In the Born limit, there is a sharp peak at zero energy at the center of the anti-parallel vortex (l z = 0), but the peak is suppressed for the parallel vortex (l z = 2). This result implies that the low energy bound states (vortex ABS) in the anti-parallel vortex (l z = 0) are more robust against impurities than those in the parallel vortex (l z = 2). This behavior is consistent with earlier results 13,14,15,16 .
On the other hand, in the unitary limit the zero energy peak is severely suppressed in the anti-parallel vortex (l z = 0) as well as in the parallel vortex (l z = 2).
We quantify the effects of impurities on the vortex ABS by the peak value of the LDOS at the vortex center, N(r = 0, ε = 0)/N 0 , which is shown in fig. 3. The peak is suppressed more severely as δ 0 moves from the Born limit toward the unitary limit.
In fig. 3-(d), however, the peak of the anti-parallel vortex (l z = 0) is suppressed even in the Born limit. We will discuss this behavior later.
In the following, we discuss the results shown in figs. 1-3, considering the impurity effects on the quasiparticle density of states in bulk chiral p-wave superconductors. Both temperature and non-magnetic impurities cause pair-breaking in chiral p-wave superconductors. As a result, the modulus of the bulk pair-potential ∆ b is suppressed, regardless of the type of impurities, when T is high and Γ n is large. The reduction of ∆ b makes the vortex ABS more extended spatially and lowers the peak of the LDOS near the vortex core, regardless of the value of δ 0 .
Even when ∆ b is not so small (i.e., T is sufficiently low and Γ n is sufficiently small), there exist quasiparticles with energy smaller than ∆ b , which stem from impurity bands. Following the standard calculation of impurity effects in spatially uniform unconventional weak-coupling superconductors 28 , we can obtain the band edge of the impurity band. As δ 0 increases toward π/2, the energy of the band edge decreases, as shown in fig. 4, and it becomes zero (i.e., the impurity band has a finite density of states at the Fermi level) when δ 0 exceeds a critical value δ c . The value of δ c is given as the solution of the corresponding band-edge equation, under the assumption that ∆ b is monotonically decreasing in δ 0 . At energies where the impurity band has a finite density of states, there is a resonance between the wave functions localized near the vortex core and the wave functions spatially extended outside the vortex, and thus we can naturally understand why the spectra of the vortex ABS are heavily broadened. The suppression in the Born limit in fig. 3-(d) is also understood from this point of view. For this parameter set, the band edge of the impurity band is finite but very small, as seen in fig. 4. The narrow gap is insufficient to inhibit the resonance between the inner and outer states, and the peak is broadened even in the Born limit.
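Because δ c is defined as the phase shift at which a monotonically decreasing band-edge energy crosses zero, it can be located numerically with any bracketing root finder. The sketch below demonstrates this with `scipy.optimize.brentq` on a purely illustrative, monotone toy band-edge function; the actual band-edge equation of Ref. 28 is not reproduced here.

```python
from scipy.optimize import brentq
import numpy as np

# Locate the critical phase shift delta_c where the impurity-band edge
# reaches zero energy. band_edge() below is a toy monotone function used
# only to illustrate the bracketing root search; it is NOT the band-edge
# equation of the t-matrix calculation.
def band_edge(delta0, gamma_n=0.3):
    # toy model: edge is positive near delta0 = 0 and negative near pi/2
    return np.cos(delta0) - 2.0 * gamma_n * np.sin(delta0)

delta_c = brentq(band_edge, 1e-6, np.pi / 2 - 1e-6)
print(f"delta_c = {delta_c:.4f} rad")
```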
When ∆ b is not so small and δ 0 is smaller than, but not too close to, δ c , quasiparticles with energy lower than the band edge of the impurity band dominate the vortex ABS. Considering impurity scattering between the vortex ABS only 13,14 , we see that the impurity effects on the vortex ABS strongly depend on the type of vortex.
We indicate δ c by arrows in fig. 3 (a)-(d). We can see in fig. 3 (a)-(c) that δ c matches moderately well the crossover phase-shift from the regime (with smaller δ 0 ) where the impurity effects depend strongly on the type of vortex to the regime (with larger δ 0 ) where the vortex ABS of both types of vortices are heavily suppressed. This result is consistent with our argument above. | 2013-10-15T08:20:50.000Z | 2013-05-22T00:00:00.000 | {
"year": 2013,
"sha1": "7b91cd625fec414b1f26b30b15a9452014332132",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1305.5171",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7b91cd625fec414b1f26b30b15a9452014332132",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
251540587 | pes2o/s2orc | v3-fos-license | Associations of Duration of Preadoption Out-of-home Care, Genetic Risk for Schizophrenia Spectrum Disorders and Adoptive Family Functioning with Later Psychiatric Disorders of Adoptees
The objective was to examine the impacts of duration of preadoption out-of-home care and adoptive family functioning on later psychiatric morbidity of adoptees with high (HR) and low (LR) genetic risk for schizophrenia spectrum disorders. The study uses nationwide data from the Finnish Adoptive Family Study of Schizophrenia. The study population in this substudy consisted of 43 HR adoptees and 128 LR adoptees. Of these adoptees, 90 had spent 0–6 months and 81 over 6 months in preadoption out-of-home care. The family functioning of adoptive families was assessed based on Global Family Ratings and psychiatric disorders on DSM-III-R criteria. The results showed that among the adoptees with over 6 months in preadoption out-of-home care, the likelihood for psychiatric disorders was significantly increased in HR adoptees compared to LR adoptees. In adoptees with 6 months or less in preadoption out-of-home care, an increased likelihood for psychiatric disorders was found among those living in adoptive families with dysfunctional processes. These findings indicate that especially for HR children, a well-functioning early caregiving environment is crucial in terms of subsequent mental wellbeing. The results emphasize that when adoption is necessary, early placement and a well-functioning adoptive family environment are beneficial to children.
Introduction
It has been estimated that approximately 3.2 million children are placed in out-of-home care worldwide yearly [1]. Studies have reported that in the U.S., about 660,000 children [2] and in the EU, nearly one million children [3] experience out-of-home care placement every year. In Finland, approximately 1.1% of all children in the population are placed or live in out-of-home care every year, and of infants aged 0-2 years, 0.2% were in out-of-home care in the year 2019 [4]. Heino et al. [5] reported that in Finland, the main reasons for placement in out-of-home care are parents' mental health problems (33%) and substance abuse (26%). Likewise, parental mental health problems and substance use, especially those of biological mothers, have been reported to increase the risk for out-of-home care for offspring [6].
Previous research has documented that children of mothers with schizophrenia are at 12.6-23.75 times higher risk (incidence rate ratio, IRR) of being placed in out-of-home care during their infancy and childhood compared to children of mothers from the general population [7,8]. The study of Ranning et al. [7] showed that the risk of being placed in out-of-home care is particularly high (IRR = 80.2) during the child's first year for children of mothers with schizophrenia. It has been reported that around half of the mothers with schizophrenia lose custody of their children either temporarily or permanently [9], which predisposes children to out-of-home care and also adoption.
Preadoption out-of-home care, such as institutional or foster care, which lasts over 6 or 12 months [10-14] has been found to cause instability and disruptions in early caregiving [14] and to increase the risk for psychiatric care in adulthood [15]. Institutional rearing has been considered to be a suboptimal caregiving environment for young children [16-19] due to features such as low levels of caregiver-child interaction and shifting caregivers [18,19]. The nature of institutional rearing is also proposed to weaken children's opportunities to form attachment relationships with caregivers [20].
According to attachment theory [21,22], by the end of the first year of life, most infants develop an attachment towards the early caregiver, which is shown to be dependent upon the quality of the received care [12,23]. Early attachment relationships have been documented to be important not only for the development of stress and affect regulation capacities [24] but also for the infant's neurobiological development [25-28].
Genome-wide association studies (GWAS) have shown that many of the genetic variants that associate with the development of schizophrenia are related to early pre- and perinatal neurodevelopment [29,30]. Studies have reported that early perinatal adversities contribute to the development of schizophrenia [29]. The neurodevelopmental models of schizophrenia have proposed that early developmental insults interact with genetic factors in aberrant brain development that may mediate the risk for the development of schizophrenia [31].
After placement in adoptive families with improved caregiving, adoptees show notable developmental catch-up and recovery from the possible adverse effects of early institutionalization [12,32-34]. However, although the negative effects of early adverse experiences are shown to attenuate over time [35], not all adopted children show equal developmental catch-up [10] regardless of the time spent in adoptive families [11,17]. Van IJzendoorn et al. [32] have suggested that the degree of recovery following institutionalization may depend on the characteristics of the adoptive family, such as parental sensitivity or the socioeconomic status of the adoptive family. On the other hand, the study of Finet et al. [36] did not find adoptive parenting to moderate the associations between preadoption experiences and children's behavioral adjustment.
Generally, the research evidence on the role of family functioning in mitigating the possible maladjustments caused by preadoption experiences has so far remained sparse and inconclusive. Although adopted children have been shown to benefit from improved caregiving in adoptive families [32,33], earlier studies have rarely been able to assess the quality of adoptive families' functioning. Furthermore, in studies that have focused on the impacts of institutionalization, the genetic background of the adopted children and its impact on later development have remained unknown, although their importance has been widely acknowledged [12,27,37]. The earlier findings from the Finnish Adoptive Family Study of Schizophrenia have shown that the quality of adoptive family functioning, assessed with the Global Family Ratings (GFRs), associates with adoptees' later psychiatric morbidity, especially in adoptees with high genetic risk for schizophrenia spectrum disorders [38-40]. However, in these studies the time in preadoption out-of-home care has not been considered.
In this study, the impacts of genetic risk for schizophrenia spectrum disorders and adoptive family functioning on the later psychiatric morbidity of the adoptees were assessed separately for those exposed to short (≤ 6 months) and longer (> 6 months) preadoption out-of-home care. The preadoption out-of-home care was provided by municipal social services. In Finland, institutional care has typically been the most common alternative for out-of-home care and there have been notable differences in the quality of care [41], but unfortunately detailed information was not available. The study used national data from the Finnish Adoptive Family Study of Schizophrenia, which allowed the analysis of genetic liability for schizophrenia spectrum disorders and environmental factors separately in the development of psychiatric disorders.
Subjects
The current study utilizes the nationwide data from the Finnish Adoptive Family Study of Schizophrenia. The study design has been described in detail elsewhere [42-44] and is addressed here briefly. In the study design, adoptees who were not reared by their biological parents are examined to identify the genetic (schizophrenia liability) and environmental (rearing environment) contributions, as well as their interaction, in the development of psychiatric disorders. Initially, the population of the study was based on hospital records that covered all the women (n = 19,447) who were admitted to psychiatric care in Finnish hospitals during the years 1960-1979. After that, the women who had been diagnosed at least once with schizophrenia or paranoid psychosis were identified from this study population. These women were further scrutinized through census and parish registers with the aim of identifying those who had given up a child or children (high-risk adoptees) for adoption. Adoptees who were adopted after the age of four, adopted abroad or adopted by relatives were excluded from the study population. No diagnostic exclusion criteria were applied to the adoptive parents, and they therefore represent an epidemiological sample of adoptive parents [42,44].
The high-risk adoptees (HR) and their adoptive families were demographically matched with low-risk (LR) control adoptees and their adoptive parents. The control adoptees, to be scrutinized further, were selected among those who were given up for adoption by biological mothers who in the primary phase of the study either had no psychiatric diagnosis or were diagnosed with a psychiatric disorder other than a schizophrenia spectrum disorder. The matching criteria included the sex and age of the adoptee, the age of the adoptee at the time of the placement, and the socioeconomic status of the adoptive family [42,43]. The adoptive families of the study were assessed by experienced psychiatrists with broad research procedures which included family observations, interviews and psychological tests, both individually and with different family combinations [42].
The diagnoses of all biological mothers were verified. Broad schizophrenia spectrum disorders (DSM-III-R) [45] comprised the following disorders according to Kendler et al. [46]: schizophrenia, the odd-cluster personality disorders (schizotypal, schizoid and paranoid personality disorders plus avoidant personality disorder), non-schizophrenic non-affective psychoses (schizoaffective, schizophreniform, and delusional disorders and psychotic disorder not otherwise specified) and affective psychoses (bipolar and depressive disorders with psychotic features) [43].
The adoptees were defined to have HR for schizophrenia spectrum disorders if they were given up for adoption by a mother with a verified diagnosis of a broad schizophrenia spectrum disorder. The LR adoptees had a biological mother who had either a non-spectrum diagnosis or no psychiatric diagnosis [42]. The final study population (n = 382) of the Finnish Adoptive Family Study of Schizophrenia comprised 190 adoptees at HR and 192 adoptees at LR for schizophrenia spectrum disorders [42,44].
The current study sample consists of 171 adoptees (43 HR adoptees, 128 LR adoptees) and their adoptive families, for whom the information on time in preadoption out-of-home care, family functioning and time with the biological mother was available for statistical analyses.
An attrition analysis (Table S1, available online) was conducted to examine whether the current study sample (n = 171) differed from those not included in the analyses. The current study sample differed statistically significantly from the not-included adoptees with regard to their genetic status (HR adoptees: 25.1% vs. 69.7%, p < 0.001) and time spent with the biological mother (one month or more, 49.7% vs. 30.6%, p = 0.010).
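Group differences of this kind are typically assessed with Pearson's chi-squared test on a 2x2 contingency table (the test named in the statistical-analyses section). The sketch below is illustrative: the not-included cell counts are reconstructed approximately from the percentages quoted above and from n = 382 - 171 = 211, so they are an assumption rather than the study's exact table.

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table for genetic risk status vs. study inclusion.
# Included counts (43 HR / 128 LR) are given in the text; the not-included
# counts are reconstructed from n = 382 - 171 = 211 and 69.7% HR, so they
# are an approximation, not the study's published table.
table = [[43, 128],    # included:     HR, LR
         [147, 64]]    # not included: HR, LR

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p < 0.001, as reported
```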
Psychiatric disorders of the adoptees
The adoptees' psychiatric disorders were based on the DSM-III-R [45] diagnostic classification and its criteria, which were in use when the research diagnoses were verified. DSM-III-R is a descriptive classification system, compatible with the current DSM-5 version [47], and their criteria are mainly similar. The category of schizophrenia spectrum disorders included diagnoses of schizophrenia, the odd-cluster personality disorders (schizotypal, schizoid and paranoid personality disorders plus avoidant personality disorder), non-schizophrenic non-affective psychoses (schizoaffective, schizophreniform, and delusional disorders and psychotic disorder not otherwise specified), and affective psychoses (bipolar and depressive disorders with psychotic features). The psychiatric disorders other than schizophrenia spectrum disorders included all the non-schizophrenia spectrum disorders [43].
The DSM-III-R diagnoses of the adoptees were made through personal interviews and from hospital records and other diagnostically significant sources available [43]. At the time of the initial assessment, the adoptees' median age was 23 years (IQR 17-33 years). The final diagnostic evaluation of the adoptees took place 21 years after the initial assessment, when the adoptees' median age was 44 years (IQR 38-52). The psychiatrists who made the diagnostic evaluations were blinded to the adoptees' genetic risk status and prior psychiatric evaluations. The psychiatric status of the adoptees was defined as the hierarchically most severe lifetime diagnosis [40,43,46]. The kappa coefficient for interrater reliability of the adoptees' diagnoses was 0.71-0.80 [42].
Each adoptive family was evaluated on the following factors: (1) anxiety, (2) basic trust, (3) boundaries, (4) conflicts, (5) empathy, (6) flexibility of homeostasis, (7) interaction and its quality, (8) parental coalition, (9) power relations, (10) reality testing, and (11) transactional defenses. Initially, the GFRs were used as a basis to classify the families into five categories on a scale from (1) healthy families to (5) severely disturbed and chaotic families [40]. This categorization is based conceptually on the hypothetical continuum of the five-level Global Assessment of Relational Functioning (GARF), published originally in DSM-IV [52]. Contemporary family research literature was used in the formulation of the GFRs categories [53].
Due to significant similarities between GFRs categories 1 and 2, and categories 4 and 5 [38,40], the five GFRs categories of family functioning were regrouped into three groups: (1) families with functional processes (GFRs categories 1 and 2), in which, for example, the levels of anxiety and the quality of interaction were considered healthy; (2) families with mildly dysfunctional processes (GFRs category 3), in which, for example, the levels of anxiety and quality of interaction were estimated as moderately dysfunctional; and (3) families with dysfunctional processes (GFRs categories 4 and 5), in which, for instance, the levels of anxiety and patterns of interaction were deemed to be detrimental to the family members [38,40].
Time with biological mother
The time with the biological mother was also explored. It was categorized into two groups: (1) less than one month with the biological mother, (2) one month or more. The aim of this categorization was to explore whether immediate out-of-home placement after birth impacted children differently compared to later placement away from the biological mother. The first group of adoptees may also indicate a group of children that were in urgent need of an out-of-home placement, because they spent less than a month with their biological mothers.
For the adoptees who were in preadoption out-of-home care for 0-6 months, the median time with the biological mother was 3.3 months (sd = 8.8 months; IQR 0-9.3 months). Among the adoptees who were in preadoption out-of-home care for over 6 months, the median time with the biological mother was 0 months (sd = 4.6 months; IQR 0-4.5 months).
In this study, the adoptees' psychiatric disorders were classified into two categories: (1) any diagnosed psychiatric disorder (both schizophrenia spectrum disorders and other than schizophrenia spectrum disorders) and (2) no psychiatric diagnosis. The classification was based on the accumulating evidence that genetic liability for schizophrenia increases the risk not only for schizophrenia but also for other psychiatric disorders in the offspring of a parent with schizophrenia [48,49]. Also, many adverse childhood stressors, such as aberrant mother-child interaction and dysfunctional family relationships, are documented to be common precursors for a wide range of psychiatric disorders [50,51].
Preadoption out-of-home care
The time spent in preadoption out-of-home care refers to the time span which the adoptees spent neither with their biological mothers nor in adoptive families. The time in preadoption out-of-home care before placement into the adoptive family was categorized into two groups: (1) 0-6 months and (2) over 6 months. This categorization was based on previous findings showing that, compared to non-institutionalized children, institutionalization which lasts over six months associates with multiple developmental deficits and attachment-related problems [10,11].
Among the adoptees who were in preadoption out-of-home care for 0-6 months, the median age at the time of placement in the adoptive families was 6 months (sd = 8.3 months; IQR 3.9-10 months). For the adoptees who were in preadoption out-of-home care for over 6 months, the median age at the time of placement in the adoptive families was 20 months (sd = 12 months; IQR 14-29 months).
Family functioning
This study used Global Family Ratings (GFRs) to assess the broad level of functioning of the adoptive families. The method has been described in detail elsewhere [38,40] and is discussed here briefly. GFRs were assessed multi-methodically on the basis of interviews, observations and tests which measure the functioning of families comprehensively from different perspectives. The experienced researchers conducted the evaluations during home visits to the adoptive families and were blinded to the genetic status of the adoptees. The initial assessments were later re-evaluated, and a sample of 40 recorded interviews was rated by three research interviewers to define the reliability of the ratings. The interrater reliability was regarded as reasonable (0.72) on a scale from 0 = poor to 1 = high concordance [40].
Results
Table 1 presents the characteristics of the adoptees stratified into two groups according to the preadoption out-of-home care time. In both study groups, over 70% of the adoptees belonged to the low risk for schizophrenia spectrum disorders (LR) group. The adoptees with 6 months or less in preadoption out-of-home care spent more time with their biological mother, whereas the adoptees with over 6 months in preadoption out-of-home care spent a shorter time with the biological mother (p = 0.011).
Tables 2a and 2b show the characteristics of the adoptees in relation to psychiatric disorders, stratified by the preadoption out-of-home care time. Among the adoptees with over 6 months of preadoption out-of-home care (Table 2b), the likelihood for any psychiatric disorder was significantly increased in HR adoptees (adj. OR 3.12, 95% CI 1.06-9.20) compared to LR adoptees. For the adoptees with 6 months or less of preadoption time (Table 2a), an increased likelihood for any psychiatric disorder was found among those living in an adoptive family with dysfunctional processes (adj. OR 5.09, 95% CI 1.60-16.18).
In an additional exploratory analysis (Table S2, available online), the bivariate association between adoptive family functioning and psychiatric morbidity of the adoptees was further explored in the data stratified both by the length of preadoption out-of-home care and genetic risk for schizophrenia spectrum disorders. The only statistically significant associations were found between adoptive family functioning and psychiatric morbidity in both HR (p = 0.037) and LR adoptees (p = 0.028) in the subgroups with 6 months or less in preadoption out-of-home care (Table S2, available online). In the subgroups of HR (n = 22) adoptees and LR (n = 68) adoptees with 6 months or less in preadoption out-of-home care (Table S2, available online), the prevalence of psychiatric morbidity was significantly (p < 0.05) low in the adoptees raised in adoptive families with functional processes (HR 25%; LR 31%). Among the early placed adoptees who were exposed to dysfunctional processes in the adoptive families, the prevalence of psychiatric morbidity was particularly high (HR 80%; LR 44%). Corresponding results for the adoptees with over 6 months in preadoption out-of-home care were non-significant.
The results of the sensitivity analysis are presented in Table S3 (available online). When the preadoption out-of-home care time was re-categorized using 12 months as the cut-off time (Table S3, available online), in adoptees with 12 months or
Statistical analyses
Statistical significance of group differences in categorical variables was assessed with Pearson's chi-square test or Fisher's exact test. A logistic regression model was used to examine the association of genetic risk, family functioning (GFRs), gender and time with the biological mother with the follow-up diagnosis of any psychiatric disorder of the adoptees, separately for the two preadoption out-of-home care groups (0-6 months, over 6 months). An additional exploratory analysis (Table S2, available online) was conducted to explore the bivariate association between adoptive family functioning and psychiatric morbidity of the adoptees in the data stratified both by the length of preadoption out-of-home care and genetic risk for schizophrenia spectrum disorders.
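For readers who want to reproduce this kind of model, the sketch below fits a logistic regression and converts coefficients to adjusted odds ratios with 95% confidence intervals, as in Table 2. It is a generic illustration with simulated data, not the study's dataset; the variable names merely mirror the covariates named above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Logistic regression yielding adjusted ORs with 95% CIs, mirroring the
# covariates named in the text. The data frame here is simulated for
# illustration; the study's individual-level data are not public.
rng = np.random.default_rng(0)
n = 171
df = pd.DataFrame({
    "disorder": rng.integers(0, 2, n),       # any psychiatric disorder (0/1)
    "hr": rng.integers(0, 2, n),             # high genetic risk (0/1)
    "dysfunctional": rng.integers(0, 2, n),  # dysfunctional family processes (0/1)
    "male": rng.integers(0, 2, n),           # gender
    "with_mother": rng.integers(0, 2, n),    # >= 1 month with biological mother
})

model = smf.logit("disorder ~ hr + dysfunctional + male + with_mother", df).fit(disp=0)
ors = np.exp(model.params)    # adjusted odds ratios
ci = np.exp(model.conf_int()) # 95% CI on the OR scale
print(pd.concat([ors.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```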
A sensitivity analysis was performed to assess the robustness of the findings based on the categorization of the adoptees into two groups according to the preadoption out-of-home care time (≤ 6 months, > 6 months). In the sensitivity analysis (Table S3, available online), the cut-off value of 12 months for preadoption out-of-home care time was used to stratify the adoptees into two groups (≤ 12 months, > 12 months). The choice of 12 months as a cut-off time was based on earlier studies which have suggested that institutionalization may require a longer time to have detrimental impacts on children's development [11,54]. All tests were two-tailed and the limit for statistical significance was set at p = 0.05. The statistical software used in the analyses was IBM SPSS Statistics Version 26.
Discussion
The aim of this study was to examine the impacts of the duration of preadoption out-of-home care, and of the time spent with the biological mother, on the associations of high (HR) and low (LR) genetic risk for schizophrenia spectrum disorders and adoptive family functioning with any later psychiatric disorder of the adoptees. This information will facilitate the development of more secure out-of-home care for children of mothers with a schizophrenia spectrum disorder who are not able to foster their children.
This study has two main findings. The first one is that HR for schizophrenia spectrum disorders was found to associate with an increased risk for any later psychiatric disorder in the adoptees with over 6 months in preadoption out-of-home care. This may indicate that, compared to adoptees with LR for schizophrenia spectrum disorders, HR adoptees are especially vulnerable to deficiencies and instability in early caregiving. Indeed, many of the genetic variants that associate with the development of schizophrenia are related to early neurodevelopment [29,30]. Furthermore, in the neurodevelopmental models of schizophrenia, early developmental insults have been suggested to interact with genetic factors to produce deviant brain development, which enhances the risk for the development of schizophrenia [31].
This finding supports the earlier studies that have discussed the role of genetic risk and gene-environment interaction as explaining factors for the outcomes of institutional rearing and inadequate early caregiving [12,27,37]. Furthermore, it is also possible that the LR adoptees with prolonged stays in out-of-home care were more resilient than HR adoptees towards early unstable and possibly deficient caregiving. Unfortunately, our data lacked more detailed information to confirm this plausible explanation. Earlier studies have suggested that some adoptees show extensive resilience in early adversities [55].
Furthermore, among these adoptees with over 6 months in preadoption out-of-home care, the subsequent psychiatric morbidity did not associate with adoptive family functioning. In our study, the adoptees who were in preadoption out-of-home care for 0-6 months were placed in adoptive families at the median age of 6 months, whereas the adoptees who were in preadoption out-of-home care for over 6 months came into adoptive families at the median age of 20 months. Tottenham [19] has suggested that unstable early caregiving, such as institutional care, may preclude children from forming an early attachment to any specific caregiver during the sensitive phase between 6 and 12 months of age. Also, later age at adoption is argued to complicate the attachment processes between the adopted child and adoptive parents [14,56], and to impair children's ability to respond to new, changing caregiving environments when adopted [18-20]. Although positive characteristics of the adoptive family, such as sensitive parenting, may enhance adoptees' later development [32], it has been noted that improved caregiving in adoptive families may not be sufficient to reduce some of the deviant behaviors that the adoptees may have adapted to during the long preadoption period [16].
Consequently, the second main finding of this study is that when the duration of preadoption out-of-home care was 6 months or less, adoptees' subsequent psychiatric morbidity was associated with the functioning of the adoptive families. Our results showed that among the adoptees with 6 months or less in preadoption out-of-home care, the dysfunctional family processes in the adoptive families, but not genetic risk per se, were associated with an increased likelihood of any later psychiatric disorder of the adoptees. The results of the additional exploratory bivariate analysis (Table S2, available online) showed significant associations between adoptive family functioning and psychiatric morbidity in both HR and LR adoptees in the subgroups with 6 months or less in preadoption out-of-home care. These findings further emphasize the role of the early caregiving environment in modifying the trajectory of children's development, which is prominent especially for the adoptees with high genetic risk for schizophrenia spectrum disorders. Also, this finding may indicate that an early well-functioning caregiving environment can be protective against later psychiatric morbidity for both HR and LR adoptees.
Thus, it may be possible that for adoptees who spent only 6 months or less in out-of-home care before permanent placement, the functioning of the adoptive families at least partially attenuated the negative effects of the adoptees' genetic background. Knudsen et al. [26,21,22] have suggested that humans are most sensitive to environmental influences during early infancy. This could explain why the high risk for schizophrenia spectrum disorders, per se, did not associate with psychiatric morbidity among the adoptees with 6 months or less in preadoption out-of-home care, since their development was influenced significantly by the functioning of the adoptive family. The neurobiological development of humans is shown to be both genetically driven and experience (environment) dependent [26,28], the quality and stability of early caregiving being of great importance [25].
This study used national data from the Finnish Adoptive Family Study of Schizophrenia, which enabled the examination of genetic and rearing environment factors separately [42-44]. This is to be considered a major strength of this study, as the data offer a unique opportunity to examine the impacts of genetic and environmental causes in the development of psychiatric disorders. Although it is plausible that the adopted children also had an impact on the functioning of the adoptive families, the earlier studies from the Finnish Adoptive Family Study of Schizophrenia have demonstrated that the HR adoptees are not the cause of dysfunctional processes in the adoptive families [57]. Also, no diagnostic exclusion criteria were applied to the adoptive parents. This is to be considered a strength of our study, as the adoptive parents represent an epidemiological, diagnostically normal demographic sample.
Furthermore, the attrition analysis (Table S1, available online) showed that the current sample differed significantly from the non-included adoptees with regard to genetic risk status for schizophrenia spectrum disorders (p < 0.001) and time with the biological mother (p = 0.010). In the current study, HR adoptees represented only 25.1% of the adoptees, compared to 50% in the total data. It may be possible that the dysfunctional adoptive families with HR adoptees were less willing to participate in the study, since not only are there fewer HR adoptees than LR adoptees in the study sample, but there are also fewer dysfunctional than functional adoptive families in the study sample. Therefore, these circumstances could have affected our results, and particularly for the HR adoptees, conclusions must be made with caution. However, it is also possible that our results would be more pronounced if more HR adoptees and their adoptive families had participated in the study.
Although the size of the current study sample in the analyses was moderate, a lack of statistical power in the subgroup analyses (type 2 error) may have occurred. Due to lack of data, we are not able to confirm whether the adoptees had multiple placement breakdowns before they were placed permanently in the adoptive families. It has been shown that early placement breakdowns can be detrimental to children's attachment security [56]. Finally, there is a possibility that the HR children who expressed more abnormal traits and behaviors may have been institutionalized for longer periods [59], which may have impacted our results. It may be possible that the HR children in this study who experienced more extensive out-of-home care expressed some deviant behaviors and, because of those, were adopted later.
It is important that future research with larger study populations aims to confirm our findings. Especially the finding regarding the impacts of genetic liability for schizophrenia spectrum disorders needs to be confirmed by other studies. Future studies that can elaborate the quality of preadoption out-of-home care and also consider adoptees' genetic background are needed to explain this matter more precisely. However, it is important to emphasize that collecting nationwide data similar to ours would be challenging and also very expensive, which enhances the value of our findings.
Children of mothers with schizophrenia are shown to be at increased risk of being placed in out-of-home care during their infancy and childhood [7,8]. Therefore, it is critical to develop practices and policies that secure a safe caregiving environment for these genetically vulnerable children. The results can be utilized in developing out-of-home care, foster and adoption practices for children, particularly in high-risk populations. In addition, the results can help to target early interventions during sensitive periods in child development. Furthermore, the results can be utilized in planning family-centered psychosocial support for adoptive families in which the adopted child has experienced preadoption adversities.
Furthermore, with the fine-grained assessment of family functioning (GFRs) [38,40], we were able to clarify the role of family functioning and its associations with the adoptees' psychiatric status when the duration of preadoption out-of-home care was considered. The adoptive families and adopted children were met and interviewed to examine family functioning and diagnostic status, which earlier studies have not done this thoroughly. The GFRs are comprehensive evaluations of adoptive family functioning and may therefore represent a clustered risk score, which some studies have preferred to utilize when examining the impacts of environmental adversities [58]. It is possible that in the families in which there were more dysfunctional family processes, there was also more substance abuse and other adversities. Also, it is probable that the adoptive parents' possible psychiatric disorders contributed to the ratings of family functioning.
A significant limitation of this study is that we cannot elucidate the quality of the preadoption out-of-home care that was organized by the social services, and this has to be considered in the interpretation of the results. In Finland, institutional care has been the most common option for out-of-home care, although there have been notable municipal differences in child protection services [41]. Furthermore, it has been stated that in so-called globally depriving institutions, detrimental effects on children's development may occur in less time compared to more adequate institutions [17]. Thus, it is possible that the chosen cut-off point for the outcome variable (≤ 6 months and > 6 months in out-of-home care) is not optimal. To assess the used cut-off point and our findings, we performed a sensitivity analysis (Table S3, available online) with a different cut-off value (≤ 12 months, > 12 months). The sensitivity analysis showed that the extended duration of preadoption out-of-home care (because of the later cut-off point) had an effect on the associations of genetic risk for schizophrenia spectrum disorders in the groups with a longer preadoption time (> 6 months vs. > 12 months, OR 3.12 vs. OR 3.93) and of adoptive family functioning in the groups with a shorter preadoption time (≤ 6 months vs. ≤ 12 months, OR 5.09 vs. OR 4.15) with the adoptees' later psychiatric morbidity. The analysis therefore indicates that the impact of the adoptees' genetic background on the risk for any later psychiatric disorder becomes more pronounced as the time in preadoption out-of-home care increases. Moreover, extended time in preadoption out-of-home care seems to weaken the impact of the adoptive family's functioning on the adoptees' later psychiatric morbidity. Thus, the sensitivity analysis supports our initial findings.
Ethical Approval The study was approved on 15 October 1991 by the Ethics Committee of Oulu University Hospital. The study design was evaluated to have followed the ethical practices of the time. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.
Informed Consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Summary
In summary, this study examined the impacts of the duration of preadoption out-of-home care and adoptive family functioning on the later psychiatric morbidity of adoptees with high (HR) and low (LR) genetic risk for schizophrenia spectrum disorders. The study used national data from the Finnish Adoptive Family Study of Schizophrenia. The study population in this substudy consisted of 43 HR adoptees and 128 LR adoptees, of whom 90 spent 0-6 months and 81 over 6 months in preadoption out-of-home care. The study used the Global Family Ratings to assess the functioning of the adoptive families and DSM-III-R criteria to assess the psychiatric disorders. The results showed that in the group of adoptees with over 6 months in preadoption out-of-home care, the likelihood for any psychiatric disorder was significantly increased in HR adoptees (adj. OR 3.12, 95% CI 1.06-9.20) compared to LR adoptees. Among the adoptees with 6 months or less in preadoption out-of-home care, the likelihood for psychiatric disorders was increased in those living in adoptive families with dysfunctional processes (adj. OR 5.09, 95% CI 1.60-16.18). The results of this study indicate that in terms of later mental wellbeing, it is important for children, and especially for children with high genetic risk for schizophrenia spectrum disorders, to have a secure and stable early rearing environment. Particularly when adoption is needed, the importance of early placement and a well-functioning family environment is emphasized.
Table 1
Characteristics of the adoptees in relation to preadoption out-of-home care time
Table 2
Association of the characteristics of the adoptees with the likelihood for psychiatric disorders, by the length of preadoption out-of-home care time. * Odds ratios (ORs) and 95% CI of the OR are based on the logistic regression analysis assessing the likelihood for psychiatric disorder of the adoptees after adjusting for genetic risk, family functioning, gender and time spent with the biological mother. ** p < 0.05. *** p < 0.1 | 2022-08-14T06:17:35.784Z | 2022-08-13T00:00:00.000 | {
"year": 2022,
"sha1": "1c55d32938fade4ae3957f268d898567508bfe2d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10578-022-01411-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae22ea28a98eb7876bc0b0ddd46fb7720c0a322c",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
92966160 | pes2o/s2orc | v3-fos-license | Synthesis and characterization of copper nanoparticles/polyvinyl chloride (Cu NPs/PVC) nanocomposites
In the present work, two synthesis methods for copper nanoparticles (Cu NPs) in polyvinyl chloride (PVC) have been developed: adding previously synthesized NPs to the resin, and an in situ synthesis of the NPs. The resin was mixed with additives to obtain a pasty mixture called plastisol. The obtained plastisol is liquid at room temperature and has viscoelastic and pseudoplastic properties. The plastisol showed changes in its mechanical properties under the action of heat, such as a remarkable increase of viscosity at high temperatures, passing from the initial liquid state to the solid state at the curing temperature (180-200 ºC). Cu NPs were previously synthesized by the microwave-assisted polyol method, using copper acetate (CuAc2) as precursor, ascorbic acid (AA) as reducing agent, polyvinylpyrrolidone (PVP) as stabilizer and ethylene glycol (EG), a diol widely used in the polyol method, as solvent. Microwaves provided thermal heating, an alternative to conventional heating by convection. Under microwave irradiation, the conversion of electromagnetic microwave energy into heat energy occurs preferentially in polyols such as ethylene glycol, which have a high dielectric constant. Different concentrations of plasticizer and several temperatures were evaluated for curing the nanocomposites (Cu NPs/PVC). To obtain films of Cu NPs/PVC, a coating technique over a substrate (paper or fabric transfer) was used. Films were prepared in order to produce a material with further bactericidal activity. The characterization was performed by X-ray diffraction and ultraviolet-visible spectrophotometry.
Introduction
Thermoplastics functionalized with copper nanoparticles are of great interest because of their many applications, such as medical items, cleaning, antiseptics, textiles, paintings, intrahospital coatings, coatings of handrails on public transportation, etc. The antibacterial effect of copper nanoparticles has been reported by Yoon et al. (2007) and Cioffi et al. (2005). In the preparation of nanoparticles, meeting the size requirement alone is not enough; a procedure is necessary that assures a tight particle size distribution, a controlled morphology and composition, and an identical crystal structure. Chemical methods provide better control to achieve these characteristics. One of them is the polyol method [Brayner et al. (2013), Shankar and Rhim (2014), Gurav et al. (2014), Fievet et al. (1993), Park et al. (2007a) and Park et al. (2007b)]. This method is characterized by using alcohols as solvents, generally diols such as ethylene glycol, propanediol or diethylene glycol. They can dissolve several inorganic precursors and, in addition, act as reducing and chelating agents. Therefore, fast nucleation and slow particle growth are accomplished at a specific temperature. By this method, it has been possible to obtain metal nanoparticles and metal oxides with a narrow size distribution and controlled morphology, although in recent years some variants have been reported, such as the microwave-assisted polyol method [Zhao et al. (2004), Kawasaki et al. (2011), Nishioka et al. (2013), Nikkam et al. (2014), Valodkar et al. (2014) and Hong et al. (2002)].
Recently the synthesis of several nanocomposites has been reported, such as Cu NPs/PVC [Becerra et al. (2013)], copper nanoparticles in high density polyethylene (Cu NPs/HDPE) [Molefi et al. (2010)] and copper nanoparticles in polypropylene (Cu NPs/PP) [Pinochet (2012)], synthesized by the molten state method with incorporation of the copper nanoparticles. The "in situ" synthesis of copper nanoparticles in polystyrene (Cu NPs/PS) from the polymerization of styrene monomers has also been reported [Konghu et al. (2012)]. Unlike the above polymers, PVC is a thermoplastic (it is softened by heating, can be easily molded and retains the new shape, recovering its initial consistency when it is cooled), so it is possible to obtain it in rigid or flexible forms depending on its formulation and transformation. This is the reason why polyvinyl chloride is one of the most important synthetic materials on the world market after polyethylene.
The aim of this study is to evaluate the "ex-situ" and "in-situ" synthesis of Cu NPs/PVC nanocomposites for the potential development of films with antibacterial properties.
Preparation of plastisol PVC
The composition of the mixture is generally expressed in units of "per hundred resin" (phr), where a certain amount of additive is indicated per 100 parts of resin (UNE 53-46290). The plastisol is obtained by mixing the PVC resin with 80 phr of DOP and 2.5 phr of the stabilizer in a glass reactor of 500 ml capacity. With the immersion mixer incorporated in the reactor, the plastisol was stirred at 400 rpm for 10 minutes or until a homogeneous paste was obtained. After the mixing step, the procedure continues with the removal of air bubbles from the PVC mass. Vacuum is applied through a connector to eliminate the air trapped in the paste, produced by the movement of the mixer blades and by the swelling of the PVC resin upon plasticizer addition. The equipment used was a Schlenk line connected to a Vacuubrand RZ 2.5 vacuum pump with a maximum vacuum of 2×10-3 mbar.
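To make the phr convention concrete, the short Python sketch below (not part of the original paper) converts the 80 phr DOP / 2.5 phr stabilizer recipe into masses; the 200 g resin batch size is a hypothetical example.

```python
# Minimal sketch (not from the paper): converting a phr-based recipe into
# actual masses for an arbitrary resin batch. The 80 phr DOP and 2.5 phr
# stabilizer values come from the text; the batch size is hypothetical.

def phr_to_masses(resin_mass_g: float, recipe_phr: dict) -> dict:
    """phr = grams of additive per 100 g of resin."""
    return {name: resin_mass_g * phr / 100.0 for name, phr in recipe_phr.items()}

recipe = {"DOP (plasticizer)": 80.0, "stabilizer": 2.5}

masses = phr_to_masses(resin_mass_g=200.0, recipe_phr=recipe)  # 200 g batch, assumed
for additive, grams in masses.items():
    print(f"{additive}: {grams:.1f} g")
# DOP (plasticizer): 160.0 g
# stabilizer: 5.0 g
```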
Synthesis of Cu NPs by polyol method via microwave
AA was dissolved in 5 mL of EG, then PVP and a further 10 mL of EG were added; the resulting solution was stirred at no more than 500 rpm for approximately 15 minutes and heated under constant reflux. This was accomplished using microwave equipment adapted for chemical synthesis; microwave heating was carried out for 20 min. At this stage, a solution of CuAc2 in ethylene glycol was added through a Schlenk funnel three or six minutes after starting the microwave heating.
With this procedure the reducing agent is in excess in the initial solution when the precursor solution is added to it, so the Cu+2 ions are reduced directly. The Cu NPs in solution are centrifuged at 10,000 rpm in a high-speed centrifuge, rinsed with acetone and centrifuged again, and a reddish powder is obtained.
Ex-situ synthesis of Cu NPs/PVC nanocomposites
The Cu NPs powder obtained was used to prepare Cu NPs/PVC nanocomposites by the sol-gel method. In this method, a PVC plastisol is first formed, then Cu NPs at 1% by weight of plastisol are dispersed using a magnetic stirrer for 30 minutes or until a uniform distribution is obtained. This mixture was poured onto a transfer paper using an application die of defined thickness (15 in.). Curing or gelling of the plastisol is performed in a stove at a temperature of 180 °C with a residence time of 3 minutes. The laminates of the Cu NPs/PVC nanocomposites come off the paper and are left to cool.
In-situ synthesis of Cu NPs/PVC nanocomposites
In situ impregnation of copper nanoparticles in the PVC polymer matrix is performed by the sol-gel method. In a four-necked flask, 3.4 g of PVP and 10 g of DOP are added, and mixing is carried out using an ultrasonic tip for 15 minutes. Then a solution containing 0.47 g of CuAc2 in 10 g of DOP is added and mixed for 1 hour. Next, a solution containing 0.92 g of AA in 10 g of DOP is added and mixed for 30 minutes. The molar ratios were [CuAc2/PVP] = 7.6 and [AA/CuAc2] = 2. Finally, PVC resin with 2.5 phr of stabilizer is added and the reaction time of 2 hours is completed. The air extraction procedure for the plastisol is then performed. Curing of the resulting material is carried out under the same conditions as above.
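As a sanity check on the quoted molar ratios, the sketch below (ours, not the authors') recomputes them from the stated masses. It assumes anhydrous copper(II) acetate (181.63 g/mol) and ascorbic acid (176.12 g/mol); the PVP grade is not stated in the paper, but a nominal average molar mass of 10,000 g/mol reproduces the reported [CuAc2/PVP] = 7.6.

```python
# Sanity check (not from the paper) of the reported molar ratios
# [CuAc2/PVP] = 7.6 and [AA/CuAc2] = 2 from the quoted masses.
# Assumptions: anhydrous Cu(CH3COO)2 and a PVP grade with a nominal
# average molar mass of 10,000 g/mol (not stated in the paper).

M_CUAC2 = 181.63   # g/mol, copper(II) acetate, anhydrous
M_AA = 176.12      # g/mol, ascorbic acid
M_PVP = 10_000.0   # g/mol, assumed nominal polymer molar mass

n_cuac2 = 0.47 / M_CUAC2   # mol
n_pvp = 3.4 / M_PVP        # mol
n_aa = 0.92 / M_AA         # mol

print(f"[CuAc2/PVP] = {n_cuac2 / n_pvp:.1f}")   # -> 7.6
print(f"[AA/CuAc2]  = {n_aa / n_cuac2:.1f}")    # -> 2.0
```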
Characterization
The UV-Vis spectra were obtained with a single-beam Genesys S10 (Thermo) UV-Vis spectrophotometer using a xenon flash lamp light source with a range of 190-1100 nm. Crystallinity and phase composition of the copper nanoparticles were investigated by X-ray diffraction (XRD) using a D8-Focus (Bruker) diffractometer operated at 40 kV and 40 mA with θ-2θ geometry and Cu Kα1 radiation (0.154018 nm), a 0.1 mm receiving slit, a 1 mm divergence slit, 2.5° Soller slits and a scintillation detector. The XRD patterns were recorded in
Time influence microwave radiation on synthesis of Cu NPs
Because the reducing agent is initially under microwave irradiation, at a higher power cycle the solution absorbs more radiation and the reducing agent reacts with the environment, decreasing its reducing power, so the reduction reaction leading to nanoparticle formation is not favored. The preferred addition time is therefore three minutes; at a longer addition time, the reducing agent is exposed to microwave radiation for longer, producing the same effect already mentioned. Fig. 1 shows that the plasmon band of the Cu NPs is sharper in samples 5 and 3, followed by sample 1; this indicates indirectly that these nanoparticles have a smaller particle size than in the other samples. The optimal molar ratio of reducing agent to precursor is thus 5: when the molar ratio is 8 this effect does not improve, while when it is 2.5 the effect decreases. This result agrees with previous reports, since in the polyol synthesis of copper nanoparticles with conventional heating the optimal molar ratio of AA (reducing agent) to precursor is 8, and when the heating is by microwaves this value decreases.
Characterization of Cu NPs by XRD
The XRD pattern is shown in Fig. 2 and compared with standard pattern number 4-836, confirming the face-centered cubic (fcc) structure of Cu0. The two intense peaks of the spectrum match two of the three signals of the characteristic diffraction pattern of Cu0 at 2θ = 43.37°, 50.53° and 74.12°, corresponding to the (111), (200) and (220) crystal planes, respectively. This confirms the formation of pure copper nanoparticles with a face-centered cubic structure (fcc system, Fm-3m space group No. 225, a = 3.6150 Å, D = 8.950 Å). Fig. 4(a) shows a common plastisol, which became a gel at 180 °C with a residence time of 3 minutes. During the processing of flexible PVC, by any route, the initial mixture of PVC particles in the liquid plasticizer passes through two processes: gelation and fusion. During the gelation process, the plasticizer is absorbed by the PVC particles and diffuses through them, swelling them. Inside the reactor a complex process takes place: the shear rate developed by the mixer, as well as the order in which the ingredients are added, affects the rheological behavior of the plastisol. The interactions that take place between the plasticizer and the resin govern the gelation and fusion behavior, which is why knowledge of the nature of these interactions is fundamental. Fusion is the process in which, as a result of heating (typically around 160 °C), the PVC particles and the plasticizer fuse and mix thoroughly to form a homogeneous material. This material is able to develop its mechanical properties completely. Fig. 4(b) corresponds to plastisol with 1% of CuAc2, and the turquoise-green color characteristic of the copper salt is observed. Fig. 4(c) corresponds to plastisol prepared following the methodology proposed in this research. On finishing the preparation of the plastisol with the CuAc2, PVP and AA additives, it presents a green color that changes to yellow and turns red after the PVC resin is added. After 12 hours of rest of the plastisol, a very fine reddish precipitate is observed, which could indicate the presence of metallic copper, as seen in Fig. 4(d). The rest period after adding the PVC resin is necessary, since on addition the resin swells as it absorbs the DOP, so the reduction rate of the copper ions increases.
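The consistency of the quoted peak positions with the quoted lattice parameter can be verified with Bragg's law; the sketch below is our own check, using the Cu Kα1 wavelength given in the Characterization section.

```python
# Quick check (not from the paper): Bragg's law applied to the reported
# Cu0 reflections. For each 2-theta we compute the d-spacing and back out
# the fcc lattice parameter a = d * sqrt(h^2 + k^2 + l^2), which should be
# close to the quoted a = 3.6150 angstroms.

import math

WAVELENGTH = 1.54018  # angstroms, Cu K-alpha1 (from the Characterization section)

peaks = {  # 2-theta in degrees -> (h, k, l)
    43.37: (1, 1, 1),
    50.53: (2, 0, 0),
    74.12: (2, 2, 0),
}

for two_theta, (h, k, l) in peaks.items():
    theta = math.radians(two_theta / 2.0)
    d = WAVELENGTH / (2.0 * math.sin(theta))   # Bragg's law, n = 1
    a = d * math.sqrt(h * h + k * k + l * l)   # cubic lattice parameter
    print(f"2theta = {two_theta:5.2f}  ({h}{k}{l})  d = {d:.4f} A  a = {a:.4f} A")
# All three reflections give a close to 3.615 angstroms.
```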
Cu NPs/PVC XRD characterization
For the formation of Cu NPs the AA plays an important role as reducing agent of the copper salt, and its excess is essential to avoid oxidation of the copper nanoparticles [Kawasaki et al. (2011)]. The antioxidant property of AA comes from its capacity to capture free radicals by electron donation, as outlined in Fig. 5. The PVP is used as a dispersing (capping) agent that stabilizes the copper nanoparticles, and the size and shape of the nanoparticles depend strongly on its concentration. PVP has the structure of a polyvinyl skeleton with polar oxygen and nitrogen groups that have free electron pairs available to donate and form a coordinated interaction with copper ions, thereby creating the PVP-Cu+2 complex, as outlined in Fig. 6, reaction 2. This indicates that the PVP-Cu+2 complex is reduced first to PVP-Cu+1 and then to Cu+1, which is why the sky-blue color of the reaction system changes to yellow and then to red after further reduction of Cu+1 to Cu0, as observed in Fig. 4(d); moreover, with the temperature of the curing process, not only the reddish color is observed but also the metallic luster characteristic of copper. Fig. 7 shows the XRD patterns of the PVC (M1) and Cu NPs/PVC (M4) films. A broad peak centered at 12.92° is identified, which corresponds to the amorphous PVC matrix; the percentage of crystallinity of the PVC is at most 12%. Two peaks are also identified at 2θ = 43.13° and 50.28°, corresponding to the (111) and (200) Bragg reflections of face-centered cubic Cu0.
Conclusion
In the development of the ex-situ synthesis of Cu NPs/PVC composites it is indispensable to control the size and dispersion of the Cu NPs, which is best achieved when the precursor is added at three minutes. The optimal power for the microwave synthesis is a 20% duty cycle. The optimal molar ratio between reducing agent and precursor is 5.
For the in-situ synthesis of Cu NPs/PVC composites, the reduction of Cu+2 to Cu0 was achieved with the addition of AA, as corroborated by XRD. This method has technological advantages over the ex-situ route because the existing PVC transformation process is retained, which would facilitate a possible technology transfer. Nevertheless, a morphological study of the Cu NPs formed is still pending. | 2019-04-03T13:16:30.453Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "725779a04098826166274c28043d631b8a3ae95f",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.mspro.2015.04.038",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e5cc0551b06b0bd643ae5505ad73f315d38f1b8e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
3080066 | pes2o/s2orc | v3-fos-license | Lymphocytic sialadenitis of Sjögren's syndrome associated with chronic hepatitis C virus liver disease
Viral infection has often been suggested as a possible cause of Sjögren's syndrome or chronic lymphocytic sialadenitis, and Epstein-Barr virus has been found in the salivary glands of patients with this condition. After we had noted Sjögren's syndrome in several patients infected with hepatitis C virus (HCV), a virus also excreted in saliva, we set up a prospective study to investigate the association of chronic lymphocytic sialadenitis, with or without symptoms, with chronic HCV liver disease. The histological appearances of labial salivary glands in patients with proven HCV hepatitis or cirrhosis were compared with those in necropsy controls. Histological changes characteristic of Sjögren's syndrome were significantly more common in HCV-infected patients (16 of 28, 57%) than in controls (1 of 20, 5%). Focal lymphocytic sialadenitis characteristic of Sjögren's syndrome (though only 10 patients had xerostomia and none complained of xerophthalmia) therefore appears to be common in patients with chronic HCV liver disease; if this association is confirmed, identification of the underlying mechanism may improve our understanding of both disorders.
Introduction
Sjögren's syndrome is generally thought to be an autoimmune disease because of an associated chronic lymphocytic infiltration of salivary and lacrimal glands1 and the autoantibodies sometimes detected in serum. The factors that might trigger such a focal immune reaction remain unknown, but viral infections have repeatedly been suggested as a possible cause.2 In rats, a coronavirus infection of salivary and lacrimal glands can lead to a chronic sialodacryoadenitis akin to Sjögren's syndrome.3 Epstein-Barr virus (EBV) has recently been found in the salivary glands of patients with Sjögren's syndrome,4,5 and may be responsible for many cases of this condition.2 Like EBV, hepatitis C virus (HCV) is excreted in saliva, where it may be detected in most, if not all, patients with chronic HCV hepatitis or cirrhosis,6 and HCV infection has been transmitted by saliva in chimpanzees7 and man.8 After observing Sjögren's syndrome in patients with chronic HCV infection, we wondered whether there might be a link between HCV infection and salivary gland inflammation, and set up a prospective study of labial salivary glands in patients with chronic HCV hepatitis or cirrhosis.
Patients and methods
From April 1, 1990, to March 30, 1991, 29 patients with non-A, non-B hepatitis or cirrhosis were referred for hepatic biopsy and entered into the study. 27 were anti-HCV positive by a first-generation enzyme-linked immunosorbent assay (ELISA) (Chiron/Ortho Diagnostics, Raritan, New Jersey, USA). Serum samples were retested with a second-generation anti-HCV ELISA and by recombinant immunoblot assay (RIBA, Ortho) to measure antibodies against different viral antigens. 1 patient remained seronegative and was excluded. In the other 28 patients (9 men, 19 women; mean age 60, range 32-80 years), HCV infection was considered to be the cause of the chronic liver disease on the basis of epidemiological and histological data (table I), and other causes of chronic hepatitis, particularly autoimmune hepatitis, were excluded. No patient had antimitochondrial or liver-kidney microsomal antibodies, and smooth-muscle antibodies were found in only 3 patients at a titre below 1:100; antinuclear antibodies were found by indirect immunofluorescence on rat liver and kidney sections in 12 patients, but at a low titre (table I), and always with an irregular immunofluorescence pattern. 20 patients subsequently received 3 million units α-interferon thrice weekly; transaminase concentrations at 3 months had returned to normal in 8 patients, had improved but were still raised in 8, and were unchanged in 4. The likely cause of HCV infection was blood transfusion in 13 (mean delay 13 [SE 8] years between transfusion and biopsy), acupuncture in 1, and intravenous drug injection in 1; no clear risk factor could be identified in 13. Table I shows the activity of liver disease characterised by Knodell's score;9 24 patients also had histological evidence of cirrhosis, with striking steatosis (>10% hepatocytes) in 14 and large lymphocytic nodules in 9.
22 consecutive patients who underwent necropsy in our institution during the same period were used as controls for histological examination of the salivary glands. 2 were secondarily excluded, 1 because of HCV cirrhosis and 1 with autoimmune cirrhosis. Of the 20 remaining controls studied, 7 were men and 13 women with a mean age of 70 years (range 32-92). The main cause of death was stroke in 4, cancer in 5, alcoholic cirrhosis in 3, cardiac failure in 2, pulmonary embolism in 2, bacterial infection in 3, and multiple myeloma in 1; the main associated diseases were diabetes mellitus in 2, hypertension in 5, renal failure in 3, arteritis in 2, and hypothyroidism in 1.
Patients with HCV infection were asked directly about symptoms of xerostomia and xerophthalmia. As no patient complained of ocular symptoms, 6 were randomly selected to undergo ophthalmological examination, including Schirmer's tear test and the rose-bengal dye test. Labial salivary glands were sampled from normal mucosa according to the same protocol in patients and controls; all samples were of similar size and were fixed in Bouin's fluid, embedded in paraffin, and stained with haematoxylin-eosin-safranin. All sections were examined by the same pathologist, who was not aware of the origin of the samples, and were graded according to Chisholm and Mason's classification.11 Sections of
Results
10 of the 28 patients with HCV complained of xerostomia, in 5 of whom symptoms were severe. No patient complained of xerophthalmia, but of the 6 patients randomly assigned to ophthalmological examination, 3 had a positive Schirmer test, 1 of whom also had a positive rose-bengal test. Antinuclear antibody titres and the results of histological examination of labial salivary-gland biopsies are shown in table I. 16 of 28 (57%) patients with chronic HCV liver disease had histological evidence of Sjögren's syndrome compared with 1 of the 20 (5%) controls (p < 0.001). (Biopsy results for controls were: grade 0, 3; grade 1, 7; grade 2, 9; grade 3, 0; and grade 4, 1.) When the 12 patients with labial biopsy grades 1 or 2 were compared with the 16 who had labial biopsy grades 3 or 4, there were no statistically significant differences with regard to sex, age, mode of contamination, γ-globulin concentration, Knodell's score, or response to interferon (table II). 6 of the 10 patients with
Discussion
In Sjögren's syndrome, symptoms of xerostomia and xerophthalmia are caused by lymphocytic infiltration and destruction of lacrimal and salivary glands. The diagnostic criteria and even the definition of this condition have been the subject of much debate. Although the diagnosis was made on clinical grounds alone for many years,12 it is now widely accepted that the histological appearances of salivary glands can be a useful guide, and that a focus of more than 50 lymphocytes per 4 mm2 of a salivary gland section is diagnostic of the condition if the biopsy specimen has been taken from normal mucosa.11 The sensitivity of this technique is reduced when the biopsy sample contains fewer than 5 salivary glands, as may occur in advanced stages of sialadenitis because of extensive atrophy and fibrosis of the labial salivary glands. Some authors have suggested that patients with a grade 2 lymphocytic infiltrate and extensive fibrosis could be considered to have Sjögren's syndrome,5 by which criteria all but 8 of our 28 patients with chronic HCV liver disease would have qualified for this diagnosis.
We used the stricter criteria for statistical analysis because of their good specificity: Chisholm and Mason11 did not find any grade 3 or 4 changes among 60 controls. Nevertheless, because Scott13 found minor inflammatory changes to be common in older people, especially women, with occasional lymphocytic foci, and Greenspan et al10 found 6 specimens with grade 3 changes (though none with grade 4) in 53 unselected necropsy specimens, and in view of the age and sex distribution of our patients with chronic HCV liver disease, we compared them with controls who had a similar sex ratio and a slightly higher mean age. Only 1 of 20 controls had grade 3 or 4 sialadenitis (5%), compared with 16 of 28 patients with HCV infection and chronic liver disease, which therefore seems to predispose to the focal lymphocytic sialadenitis characteristic of Sjögren's syndrome. However, as only 10 of the 28 patients had xerostomia (mild in 5) and none complained of xerophthalmia (although 3 of 6 patients examined had abnormal Schirmer or rose-bengal tests), it may be more appropriate to use the terms sicca complex or chronic lymphocytic sialadenitis instead of Sjögren's syndrome. Whatever the label, such a condition is well known in chronic autoimmune liver diseases such as primary biliary cirrhosis, autoimmune chronic active hepatitis, and cryptogenic cirrhosis,14 and has even been used as an additional argument to support an autoimmune pathogenesis in these diseases. Although there is a link between Sjögren's syndrome and autoimmune liver disease in the proven absence of HCV infection, it is clear that until recently some patients with chronic HCV liver disease have been thought to have autoimmune liver disease,15,16 and the occasional coexistence of Sjögren's syndrome in such patients may have been misleading. Although false-positive results for anti-HCV antibodies have been reported in autoimmune liver disease,17 in our patients the confirmation of anti-HCV antibodies by RIBA, the low concentrations or absence of circulating autoantibodies, and the response to interferon18 all support the diagnosis of chronic HCV liver disease.
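The paper does not name the statistical test behind the p value reported in the Results; as an illustration only, a Fisher's exact test on the 2×2 table implied by the reported counts (16/28 patients vs 1/20 controls with grade 3-4 sialadenitis) reproduces a p value well below 0.001.

```python
# Reproducibility sketch (not from the paper): the authors report that
# grade 3-4 sialadenitis was significantly more common in HCV patients
# (16/28) than in controls (1/20) but do not name the test used. A
# Fisher's exact test on the 2x2 table is one plausible check.

from scipy.stats import fisher_exact

#                 grade 3-4   grade 0-2
table = [[16, 28 - 16],   # HCV patients
         [1,  20 - 1]]    # necropsy controls

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.5f}")
# p is well below 0.001, consistent with the significance reported.
```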
We have found a striking association between HCV infection and sialadenitis, but our findings do not prove a direct link. However, there are several ways in which HCV infection might cause sialadenitis. Smooth-muscle or liver-kidney microsomal antibodies have been reported during HCV infection and interpreted as secondary immune phenomena,15,16 and antibodies against host-derived epitopes may also be detected early in HCV hepatitis.19 Thus an autoimmune reaction may explain lymphocytic infiltration, even in organs not infected by HCV, if they contain a target epitope. HCV genomic sequences may also be found in mononuclear cells in the blood of infected patients (C. Brechot, personal communication), and might also lead to abnormal immune responses. Another possibility is suggested by the detection of EBV in the salivary glands of patients with Sjögren's syndrome. HCV has been found in the saliva of infected individuals,6 and there is a strikingly similar nodular pattern of lymphocytic infiltrate in salivary glands and in liver. Could HCV infection of salivary glands account for the chronic lymphocytic sialadenitis that we observed? Identification of the association between nodular chronic lymphocytic sialadenitis and chronic HCV liver disease may offer new insights into our understanding of both conditions. | 2018-04-03T03:23:59.680Z | 1992-02-08T00:00:00.000 | {
"year": 1992,
"sha1": "c88d41dce1a2bbe77b39a3fb6c70ec4a9cadc7c8",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc7134660?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f764e7df0f9ecd8662a9fe392222f7a8fef1644",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219094424 | pes2o/s2orc | v3-fos-license | Comparative analysis of strength and deformation of reinforced concrete and steel fiber concrete slabs
The results of experimental studies of the influence of steel fiber on the bearing capacity, deformability and crack resistance of reinforced concrete multi-hollow slabs are given. We investigated a serial floor slab and a similar one with the addition of steel fiber; both slabs are factory-made. For the tests, a testing apparatus was designed and manufactured that made it possible to study full-size floor slabs under laboratory conditions. The tests were carried out according to a single-span scheme with a replacing equivalent load. Loading was carried out by applying two concentrated strip vertical loads across the slab width. The load was applied in steps of (0.04-0.05) of the breaking load. Each step ended with a hold of up to 10 minutes, during which all the necessary parameters were recorded. Deformations were measured using dial gauges. From the moment the first crack appeared in the tension zone of the concrete, the process of crack formation and opening was monitored: at each step, the crack opening width and height were measured using a Brinell tube. The moment of cracking in both slabs began at the same relative strain. It was established that the bearing capacity and crack resistance of the slab with combined reinforcement using steel fiber are, respectively, 50 and 44% higher than those of the similar reinforced concrete slab. The maximum deflection of the slab with combined reinforcement is 37.5% lower than that of the conventional reinforced concrete one. The failure of both slabs occurred under loads at which the relative deformations in the compression zone of the concrete reached 0.80×10-3 and 1.10×10-3 for the reinforced concrete and steel fiber concrete slabs, respectively; the difference is 37.5%.
Introduction
Hollow core slabs are usually used as floor slabs in the construction of buildings and structures. Of various shapes and sizes, and with different bearing patterns, they are all widely used in construction. Their production accounts for a significant share of the total reinforced concrete consumption during the construction of a facility. This type of product can be called universal, because its use is not limited to a particular type of structure. The main distinguishing feature of such floor slabs is the presence of voids located along the slab, which almost always have a circular cross section. Recessed grooves along the side faces are also characteristic. Such slabs are manufactured, pre-stressed or non-stressed, by pouring into molds with subsequent vibration compaction and final heat treatment.
The improvement of such in-demand reinforced concrete structures, increasing their bearing capacity, crack resistance and durability, is a relevant problem.
Recent researches analysis
It is known that the use of steel fiber leads to an improvement in the physicomechanical characteristics of concrete, namely strength, deformability, crack resistance, water permeability, impact strength, frost resistance, etc. [1-3].
Most of these characteristics are usually determined in the laboratory. In this case, the main objects of research are samples in the form of cubes or prisms and, less often, models in the form of beams or slabs of reduced size.
Over the past five years, the authors have carried out large-scale studies to determine the effect of steel fiber on the strength and deformation properties of fiber concrete [4,5]. It was found that the strength and crack resistance of steel fiber concrete are higher than those of ordinary concrete by, on average, 40 and 30%, respectively. Creep, on the contrary, is (20-22)% lower. The long-term strength of steel fiber concrete beams exposed to operational loads for more than 400 days is on average 37% higher than that of similar beams made of ordinary concrete. All these results were obtained, again, under laboratory conditions and, as is known, are far from always confirmed by the behavior of real structures.
Studies to expand the scope of application of steel fiber concrete have been carried out by many authors [6-8]. Thus, in [9] the use of fiber-reinforced concrete slabs for floors is considered; these are more economically and technically advantageous than conventional reinforced concrete slabs. The author substantiates this by the increase in impact strength and ductility and the higher crack resistance and bearing capacity. An interesting comparison of the properties of concrete slabs with two types of fibers, and without fiber reinforcement, was presented in [10]. Four specimens contained steel and polypropylene fibers added at volume fractions of 0.5% and 1.0%. The slabs had dimensions of 820 × 820 × 80 mm and were supported on four displacement-controlling rollers along the edges, with a concentrated load applied at the center of the slab. The results of the experimental studies were compared with theoretical predictions. Based on the data obtained, it was concluded that 1% of steel fibers by volume has the best effect on the performance of the slabs. This conclusion coincides with our results regarding the effectiveness of fiber reinforcement.
A. Blanco [11] believes that combining fibers with traditional reinforcement can be a very interesting design decision for creating more durable and economical structures. His work is devoted to the analysis of the bearing capacity and limit states of slabs. For this purpose, eighteen concrete slabs (3 × 1 × 0.2 m) with different reinforcement, fiber types (steel and plastic) and volumetric contents (0.25 and 0.50%) were investigated. These slabs were tested in bending with data monitored at four points.
The works of other authors can also be noted [13,14]; nevertheless, the influence of steel fiber on the behavior of flexural concrete elements has not been fully studied, and many aspects of practical interest remain.
The purpose of work
The aim of this work is an experimental study of the influence of steel fiber on the bearing capacity, deformability and crack resistance of serial reinforced concrete multi-hollow slabs manufactured in the factory.
Materials and methods
The objects of the study are PK 30.12-8 floor slabs manufactured by Velikodolinsky ZhBK Plant, LLC in accordance with regulatory documents [15,16] and the working drawings of series 1.141-1 [17], using conventional technology and with the addition of steel fiber with curved ends.
For the tests, a testing apparatus was designed and manufactured that made it possible to study full-size floor slabs under laboratory conditions (Fig. 1). In order to comply with safety regulations and prevent brittle collapse of the reinforced concrete slabs under load, steel pipes were freely threaded into the outermost voids; these did not impede the deformation of the structure. This made it possible to detect the appearance of cracks in good time, measure their parameters safely and trace them on the underside of the slab. To determine the strength properties of the concrete, cube samples with an edge size of 10 cm were made at the factory from the same mixture as the slab and tested in compression under laboratory conditions. The obtained cube strength showed that the concrete corresponds to grade C16/20. The concrete strength under short-term loading was determined in accordance with the requirements of the current standards [18,19].
The tests were carried out according to a single-span design with a substitute equivalent load (Fig. 2). The loading was carried out by applying two concentrated strip vertical loads along the slab width.
Research results
Two multi-hollow floor slabs were tested: one of ordinary reinforced concrete (series PK 30.12-8), and a second, similar one with the addition of 1% steel fiber. The slabs have plan dimensions of 1190×2980 mm and a height of 220 mm (Fig. 3); concrete consumption is 0.43 m3.
During the tests, the load applied to the element and the corresponding deformations were recorded; tests were carried out in accordance with [20].
The load was applied in steps of (0.04-0.05) of the breaking load. Each step ended with a hold of up to 10 minutes, during which all the necessary parameters were recorded. Deformations were measured using dial gauges with a division value of 0.001 mm and a base of 25 cm. Five gauges (3, 4, 5, 6 and 7) were installed on the upper surface of the slabs, in the central part (Fig. 3). Gauges 1, 2 and 8, 9 were fixed to the side surfaces (faces) of the slabs. The first two gauges were located at mid-span in the pure bending zone, and the pair of gauges 8, 9 in the load transfer zone (Fig. 3). Gauges 1 and 9 were in the tension zone of the concrete, gauges 2 and 8 in the compression zone.
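For reference, the conversion from dial-gauge divisions to relative strain implied by these instrument parameters is sketched below (our illustration; the readings are hypothetical): each 0.001 mm division over the 25 cm base corresponds to a strain of 4×10-6.

```python
# Minimal sketch (not from the paper) of how dial-gauge readings convert to
# relative strain: with a 0.001 mm division value and a 25 cm (250 mm) gauge
# base, each division corresponds to a strain of 4e-6. The readings below
# are hypothetical.

DIVISION_MM = 0.001   # dial gauge division value
BASE_MM = 250.0       # gauge base (25 cm)

def divisions_to_strain(divisions: float) -> float:
    """Relative strain = elongation / gauge base."""
    return divisions * DIVISION_MM / BASE_MM

for reading in (25, 100, 275):  # hypothetical dial readings, in divisions
    print(f"{reading:4d} divisions -> strain = {divisions_to_strain(reading):.2e}")
# 25 divisions -> 1.00e-04; 275 divisions -> 1.10e-03, the order of the
# strains reported at failure of the steel fiber concrete slab.
```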
From the results shown in Figs. 4 and 5 it follows that the readings of all five gauges located on the upper surfaces of the slabs change synchronously, and by almost the same value, from the beginning of loading up to failure. This indicates that loading the reinforced concrete slabs through a two-level cross-beam system ensures uniform loading of the upper surface.
From the moment the first crack appeared in the tension zone of the concrete, the process of crack formation and opening was monitored; at each step, the crack opening width and height were measured using a Brinell tube. In section II, at loading steps VIII-X, when the load varies in the range from 44.41 kN to 59.21 kN, a sharp change in the strain growth rate occurs (the angle of inclination of the curves changes significantly). Deformations in the compression and tension zones of the concrete increase almost 3 times. Such a significant increase in deformation is explained by the avalanche-like process of cracking (12 cracks with an opening width of up to 0.005 mm).
In section III, at a load of more than 59.21 kN, the relative deformations in the compression zone of the concrete again change almost linearly up to the breaking load (108.55 kN) and amount to 0.75×10-3. In the tension zone of the concrete, the strain growth rate is significantly higher: deformations increased from 0.2×10-3 to 1.4×10-3, 2 times higher than the deformation of the compression zone. This is explained by the fact that at this stage of loading, along with the formation of new cracks, the opening of previously formed cracks intensifies; the width of their opening increases 3-4 times. Fig. 4. Deformation of a hollow-core reinforced concrete slab according to the gauges. Figure 5 shows the deformation of the concrete fibers in the steel fiber concrete slab.
The character of the curves shown in Fig. 5 is similar to that of the strain curves of the concrete fibers in the conventional reinforced concrete slab (Fig. 4): a linear relationship is observed up to the load level corresponding to the onset of crack formation (64.14 kN). The relative deformations corresponding to this load do not exceed 0.1×10-3.
The latter indicates that the moment of cracking in both slabs begins at the same relative strain, equal to 0.1×10-3.
At the second stage, in the load range from 64.14 kN to 78.95 kN, the relative deformations in the compression zone of the concrete increase to 0.2×10-3, which is two times lower than in the slab of ordinary reinforced concrete. This is explained by the fact that 12 cracks formed in the ordinary slab at this stage, versus 7 in the steel fiber concrete slab; moreover, not only the number of cracks but also the width of their opening is 1.7 times smaller.
In the third section, at loads greater than 78.95 kN, the relative deformations in the compressed zone of concrete again change linearly up to a breaking load of 162.83 kN.
Comparing the results shown in Figs. 4 and 5, it is easy to verify that the failure of the investigated slabs occurred when the relative deformations in the compression zone of the concrete reached 0.80×10-3 and 1.10×10-3 for the reinforced concrete and steel fiber concrete slabs, respectively; the difference is 37.5%.
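These percentage differences, and the 50% load difference quoted in the next paragraph, can be verified directly; the sketch below is our own arithmetic check, not code from the study.

```python
# Quick check (not from the paper) of the percentage differences quoted here
# and in the following paragraph, computed relative to the plain reinforced
# concrete (RC) slab.

def pct_increase(rc_value: float, sfrc_value: float) -> float:
    """Relative increase of the steel fiber concrete value over the RC value."""
    return (sfrc_value - rc_value) / rc_value * 100.0

# Compressive strain at failure: 0.80e-3 (RC) vs 1.10e-3 (SFRC)
print(f"strain difference: {pct_increase(0.80e-3, 1.10e-3):.1f}%")   # 37.5%

# Breaking load: 108.55 kN (RC) vs 162.83 kN (SFRC)
print(f"load difference:   {pct_increase(108.55, 162.83):.1f}%")     # 50.0%
```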
In the reinforced concrete slab this deformation occurs at a load of 108.55 kN, and in the steel fiber concrete slab at 162.83 kN; these values differ by 50%. Figure 6 shows the change in the deflection of the reinforced concrete slab during loading. Deflections were measured using Maximov deflection gauges with a division value of 0.01 mm. The results presented in Fig. 6 are identical to those shown in Fig. 4 in the sense that the previously described stages of structural deformation are clearly traced on the curves. Stage I, up to the load level of 44.41 kN (41% of the breaking load), is linear; the maximum deflection at the end of the stage is 1.7 mm, i.e., 7% of its maximum value at the moment of failure.
At stage II, the linearity is substantially violated, and by the end of the stage the deflections increase to 5.5 mm, i.e., more than 3 times, with an increase in load of only 10%. This is again explained by the 12 cracks formed in the slab at this stage.
At stage III, the load doubled compared to the first two stages, and the deflections increased five times, to a value of 2.5 cm.
Figure 7 shows the change in the deflections of the steel fiber concrete slab during loading. In this figure, as in the previous ones, three sections can be distinguished. The first is linear up to the load level corresponding to the moment of crack formation (64.14 kN).
In the second section, in the load interval from 64.14 kN to 123.35 kN, the linearity is broken, because 19 cracks with an opening width not exceeding 0.005 mm are formed.
In the third section, the load varies from 123.35 kN to 162.83 kN. The formation of new cracks slows down significantly (5 new cracks), and in parallel an intensive opening of the existing cracks begins; the opening width of five cracks increased 10 times (to 0.05 mm). Figure 8 compares the deflections of the reinforced concrete and steel fiber concrete slabs. The presented results show that the maximum deflection of the steel fiber concrete slab is 37.5% less than that of the similar reinforced concrete slab. This is explained by the fact that at the moment of failure the conventional reinforced concrete slab had 8 through cracks with an opening width of up to 0.1 mm, while the steel fiber concrete slab had only 4, with an opening width not exceeding 0.06 mm. In addition, the total opening of all cracks in the ordinary slab is 1.57 mm, versus only 0.52 mm in the steel fiber concrete slab, i.e., almost 3 times less. Along with the dial gauges, strain gauges with a 50 mm measurement base were glued to the upper and lateral surfaces of the reinforced concrete and steel fiber concrete slabs (Fig. 9). The results presented in Fig. 10 indicate that the two completely independent strain measurement systems show very close values (the difference does not exceed 5%).
Conclusions
An analysis of the experimental studies showed that the main parameters determining the physicomechanical characteristics of concrete and fiber-reinforced concrete, namely bearing capacity, deformability and crack resistance, are interconnected throughout all stages of loading. 1. The bearing capacity and crack resistance of the slab with combined reinforcement using steel fiber are, respectively, 50 and 44% higher than those of the similar reinforced concrete slab. 2. The maximum deflection of the slab with combined reinforcement is 37.5% lower than that of the ordinary reinforced concrete slab. 3. The failure of both slabs occurred under loads at which the relative deformations in the compression zone of the concrete reached 0.80×10-3 and 1.10×10-3 for the reinforced concrete and steel fiber concrete slabs, respectively; the difference is 37.5%. | 2020-04-23T09:09:59.770Z | 2020-05-12T00:00:00.000 | {
"year": 2020,
"sha1": "6e8a45c91c9f956f4cf75857e6a4a05db8e9e3e5",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/26/e3sconf_icsf2020_06003.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f0f966851b166d59835e6ae1ab01bf62b1d6eeae",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
207957638 | pes2o/s2orc | v3-fos-license | Injury incidence, characteristics and burden among female sub-elite futsal players: a prospective study with three-year follow-up
The main purpose of the current study was to analyze the injury incidence, characteristics and burden among sub-elite female futsal players. Individual exposure to match play and training, injury incidence and injury characteristics (player position, injury mechanism, type of injury, severity, recurrent vs. new injuries, seasonal variation of the injury pattern) in a female futsal team were prospectively recorded for three consecutive seasons (2015-2018). Incidences were calculated per 1,000 h of exposure. A total of 30 injuries were reported during the three seasons within a total exposure of 4,446.1 h. The overall, match and training incidences were 6.7, 6.4 and 6.8 injuries/1,000 h of exposure, respectively. Most injuries had a non-contact mechanism (93%), with the lower extremity being the most frequently injured anatomical region (5.62 injuries/1,000 h of exposure). The most common type of injury was muscle/tendon (4.9 injuries/1,000 h of exposure), followed by joint (non-bone) and ligament (1.3 injuries/1,000 h of exposure). The injuries with the highest burden were those occurring at the knee (31.9 days lost/1,000 h of exposure), followed by quadriceps (15.3 days lost/1,000 h) and hamstring (14.4 days lost/1,000 h) strains. The first few weeks of competition after the pre-season and the period soon after the Christmas break were the times when most injuries occurred. These data indicate that sub-elite female futsal players are exposed to a substantial risk of sustaining an injury. To reduce the overall injury burden, efforts should be directed toward the design, implementation and assessment of preventative measures that target the most common diagnoses, namely muscle/tendon and ligament injuries.
INTRODUCTION
Futsal requires players to perform, on a reduced (usually indoor) pitch (40 × 20 m) and during 2 × 20 min periods (with the clock stopping at every dead ball and unlimited substitutions), a high number of repeated high-intensity multiplanar movements such as sudden accelerations and decelerations, rapid changes of direction, tackling and kicking (Castagna et al., 2009; Beato, Coratella & Schena, 2016; Naser, Ali & Macadam, 2017). At top levels, the combination of these repeated high-intensity movements performed during training and match play, alongside the current congested training and competitive calendars and exposure to contacts, might place futsal players at high risk of injury. However, prior to implementing injury prevention programs in everyday futsal training routines, it is essential to establish the extent of the problem in terms of the incidence and characteristics of injuries (Van Mechelen, Hlobil & Kemper, 1992; Finch, 2006; Van Tiggelen et al., 2008).
Despite futsal being one of the most played sports in several countries, only a limited number of prospective epidemiological studies have been published investigating injuries sustained by elite futsal players (mainly during match play) (Ribeiro, Oliveira & Costa, 2006; Junge & Dvorak, 2010; Angoorani et al., 2014; Hamid, Jaafar & Ali, 2014; Álvarez Medina et al., 2016). These studies have reported incidence rates for male players ranging from 3.5 to 89.9 injuries per 1,000 h of match play, most of them affecting the lower extremity, with contusions of the lower leg and ankle sprains the most frequently diagnosed types of injury (Ribeiro, Oliveira & Costa, 2006; Junge & Dvorak, 2010; Angoorani et al., 2014; Hamid, Jaafar & Ali, 2014; Álvarez Medina et al., 2016; Larruskain et al., 2018). However, it should be noted that among these epidemiological studies, only two (Angoorani et al., 2014; Hamid, Jaafar & Ali, 2014) have reported incidence data for female futsal players. Angoorani et al. (2014) showed an incidence rate in female players of 10.7 injuries per 1,000 h of match play during camps with the Iran national team (18 months of follow-up), whereas Hamid, Jaafar & Ali (2014) found an incidence rate of 19.7 injuries per 1,000 h of match play during the Malaysian national futsal league. In both studies, ankle sprains and ligament ruptures were the most frequently observed injuries, similar to what has been observed in other team sports such as football (Hägglund, Waldén & Ekstrand, 2009; Asker et al., 2018), handball (Asker et al., 2018) and rugby (Peck et al., 2013). It is likely that anatomical, hormonal and neuromuscular sex-related differences (among other factors) contribute to sex-specific differences in injury incidence. Furthermore, only Angoorani et al. (2014) provided injury incidence rates during training for male and female futsal players, reporting incidences of 1.8 and 3.1 injuries per 1,000 h of exposure, respectively. As training volume (Almeida et al., 1999) and the number of hours of high-intensity training (Brooks et al., 2008) have been significantly correlated with an increased risk of sustaining non-contact injuries in team sports (mainly attributed to an acute and/or cumulative fatigue state), knowing the injury incidence rates during futsal training may help coaches and physical trainers to identify whether the training load and content allow players to recover fully from match demands. None of the studies providing epidemiological data on futsal-related injuries in male and female players have calculated the injury burden (the product of severity (consequences) and incidence (likelihood)) or built a risk matrix. A risk matrix is a graph of injury severity plotted against injury incidence, with criteria incorporated into the graph for evaluating the level of risk, usually by dividing the graph into risk areas using descriptive or quantified incidence, severity and risk evaluation categories (Fuller, 2018).
Consequently, there is a clear need for more prospective epidemiological studies that inform about injury incidence and burden in female futsal players. Identifying the most common and burdensome futsal-related injuries, as well as how (traumatic or overuse) and when (matches or training sessions) they usually occur would lead coaches, physical trainers and physiotherapists to prioritize the application of specific measures to prevent or reduce the risk of sustaining such injuries. Therefore, the main purpose of the current study was to analyze the injury incidence, characteristics and burden among sub-elite female futsal players during three consecutive seasons.
METHOD
Participants
All female sub-elite futsal players from the same team, playing in the Spanish second division, were prospectively followed during three consecutive seasons (2015/16, 2016/17 and 2017/18), covering the period between September and May. Twenty-two different female futsal players participated in this study. However, as some players remained in the team for more than one season, the total number of player-seasons was 39 (2015/16: 14 players followed; 2016/17: 13 players followed; 2017/18: 12 players followed). All players had more than 5 years of futsal experience. The team finished all three seasons in the top 10 of the league (4th, 6th and 9th). All players were verbally informed about the study procedures and provided written informed consent. For players younger than 18 years old (n = 3), written informed consent was also obtained from a parent or legal guardian. Players who left the team during the season (e.g., due to transfer) were included in the analysis according to their time on the team. The experimental procedures used in this study were in accordance with the Declaration of Helsinki and were approved by the University Office for Research Ethics (Órgano evaluador de proyectos, Universidad Miguel Hernández de Elche) (DPS.FAR.02.14).
Data collection
The study design and data collection followed both the consensus on definitions and data collection procedures for studies of football injuries outlined by the Union of European Football Associations (Hägglund et al., 2005) and the consensus document for football injury surveillance studies (Fuller et al., 2006). An injury was defined as any physical complaint sustained by a player that resulted from a futsal match or futsal training and where the player was unable to participate in a match or training sessions on the day after the injury (time-loss injury) (Fuller et al., 2006). The day on which an injury occurred was day 0 and was not counted when determining the severity of an injury. If a player had to stop training or participating in a match because of injury on 1 day but could participate the next day, the time loss was recorded as 0 days.
The club's medical staff (which remained the same for all three seasons) diagnosed, treated and recorded all time-loss injuries on a standardized injury report form that was sent to the study group each month. Specifically, the team was supported by one certified medical doctor, one physical trainer and one physiotherapist. The doctor was the member of the medical staff who assessed and diagnosed injured players using clinical judgement (e.g., physical examination, posture and gait inspection, inspection and palpation of muscle bellies, etc.). Diagnostic imaging techniques (e.g., echography, magnetic resonance imaging and ultrasound imaging) were also applied when needed. Although early treatment was delivered as soon as possible when a player sustained an injury during training or competition, the initial assessment and diagnosis were often carried out within 12 h to 4 days post-injury, as some signs of injury may arise a few hours or days later (Askling et al., 2007). The physiotherapist administered the therapeutic exercises during the first stages of the rehabilitation process. The physical trainer was responsible for reintroducing injured players to the drills and skills required for full participation in training and availability for match selection. A futsal player was considered injured until the medical staff (upon agreement) allowed full participation in training and eligibility for match play.
For all injuries that satisfied the inclusion criteria (time-loss injury), the team medical staff provided the following details to the investigators: date of injury, moment (training or competition), player position (goalkeeper or field player (lastwoman, wing or pivot)), injury mechanism (traumatic (contact or non-contact) or overuse), injury location, type of injury (the specific injury diagnosis was also recorded), extremity injured (dominant/non-dominant), injury severity based on lay-off time (0 days (when a player could not participate fully on the day of injury but was available for full participation the next day), minimal (1-3 days), mild (4-7 days), moderate (8-28 days), severe (>28 days) and career-ending injury), whether it was a recurrence or a new injury, and the total time taken to resume full training and competition. Illnesses and any physical or mental complaints that did not result from a futsal match or training were excluded. Individual player exposure time in training and matches (friendly and competitive) was recorded daily, in minutes, by the physical trainer.
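The lay-off-time severity bands above map naturally onto a simple classification rule; the sketch below is our illustration of those bands, not code from the study.

```python
# Minimal sketch (not from the paper's methods) of the lay-off-time severity
# categories listed above. The function name is ours; the day ranges are the
# ones quoted in the text.

def severity_category(days_lost: int, career_ending: bool = False) -> str:
    if career_ending:
        return "career ending"
    if days_lost == 0:
        return "0 days"        # unavailable only on the day of injury
    if days_lost <= 3:
        return "minimal"       # 1-3 days
    if days_lost <= 7:
        return "mild"          # 4-7 days
    if days_lost <= 28:
        return "moderate"      # 8-28 days
    return "severe"            # >28 days

assert severity_category(5) == "mild"
assert severity_category(30) == "severe"
```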
Those players who were already injured when the follow up process started (September 2015) were included in this study once medical staff agreed return to training and availability for match selection. Those individuals who were still injured at the end of the study period were included in the statistical analyses, and the estimated duration of the recovery period was established after discussion with the respective medical staff. As a medical history based on information from the player may be confounded by recall bias, previous injuries of those players who were recruited to the team after the study started were not included unless an accurate and detailed description of them were provided in the form of a report or standard form and signed by either a certified medical doctor or a former physiotherapist.
Demographic information such as stature, body mass and age were collected during the last week of the preseason period (which was before the start of the season).
Data analysis
Descriptive data are presented as means with the corresponding standard deviations, proportions (%), incidence rates and 95% confidence intervals (CI). The overall, match and training injury incidences were the number of injuries per 1,000 player-hours of total, match and training exposure, respectively. For incidence rates, 95% CIs were calculated as the incidence ±1.96 times the square root of the number of injuries, divided by the number of participants. The injury burden was calculated as the number of lay-off days per 1,000 h (Bahr, Clarsen & Ekstrand, 2017). Overall player hours were calculated by adding match and training hours. Player match hours were calculated as the total number of matches in the season × 5 players × match duration (40 min with a stopped clock)/60, and player training hours were calculated by adding individual training hours (the warm-up before matches was not included). All analyses were performed using the PASW statistical package, version 18.0 (SPSS Inc., Chicago, IL, USA), with p < 0.05 considered statistically significant. A post hoc power analysis was conducted using the software package G*Power 3.1.2 (Faul et al., 2007; Faul et al., 2009). The sample size of 39 was used for the statistical power analyses, with an alpha level of p < 0.05. The post hoc analysis revealed a statistical power of 0.74 for this study, so it could be concluded that the given sample size was large enough to detect significant effects.
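The exposure and rate arithmetic described above can be reproduced from figures reported later in the Results; the sketch below is our own reconstruction, not the authors' analysis code.

```python
# Reproduction sketch (not from the paper) of the exposure, incidence and
# burden arithmetic described above, using figures quoted in the Results.

matches_per_season = 31
seasons = 3
match_minutes = 40          # stopped clock
players_on_court = 5

match_hours = seasons * matches_per_season * players_on_court * match_minutes / 60
training_hours = 4136.1     # reported training exposure
total_hours = match_hours + training_hours

injuries_total, injuries_match, injuries_training = 30, 2, 28
days_lost = 429

print(f"match exposure:     {match_hours:.0f} h")                                        # 310 h
print(f"overall incidence:  {1000 * injuries_total / total_hours:.2f} /1,000 h")         # 6.75
print(f"match incidence:    {1000 * injuries_match / match_hours:.2f} /1,000 h")         # 6.45
print(f"training incidence: {1000 * injuries_training / training_hours:.2f} /1,000 h")   # 6.77
print(f"overall burden:     {1000 * days_lost / total_hours:.1f} days lost /1,000 h")    # 96.5
```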
The spreadsheet designed by Hopkins (2007) for combining effect statistics was used to make clinical (qualitative) inferences for paired comparisons between incidence rates. In particular, the incidence rate ratio (and its associated confidence limits) was assessed against predetermined thresholds: an incidence rate ratio of 0.91 represented a substantially lower injury risk, while an incidence rate ratio of 1.10 indicated a substantially higher injury risk (Hopkins, 2010). An effect was considered unclear if its CI overlapped both thresholds just mentioned, in other words, if the effect could be substantial in both a positive and a negative sense. Otherwise the effect was clear and was deemed to have the magnitude of the largest observed likelihood value. The following scale was used to qualify the magnitude of the observed effect with a probabilistic term: <0.5%, most unlikely; 0.5-5%, very unlikely; 5-25%, unlikely; 25-75%, possible; 75-95%, likely; 95-99.5%, very likely; >99.5%, most likely (Hopkins, 2007).
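A simplified version of the CI-versus-thresholds decision rule is sketched below; this is our illustration, not the Hopkins spreadsheet itself, and the full method additionally attaches the probabilistic terms listed above.

```python
# Illustrative sketch (not the Hopkins spreadsheet) of the decision rule
# described above: an effect is "unclear" if its confidence interval spans
# both the lower (0.91) and upper (1.10) substantiality thresholds;
# otherwise it is classed by the side on which it falls.

LOWER, UPPER = 0.91, 1.10  # substantially lower / higher injury-risk thresholds

def classify_rate_ratio(ci_low: float, ci_high: float) -> str:
    if ci_low < LOWER and ci_high > UPPER:
        return "unclear"
    if ci_high < LOWER:
        return "substantially lower risk"
    if ci_low > UPPER:
        return "substantially higher risk"
    return "trivial / at most small"

print(classify_rate_ratio(0.80, 1.25))  # unclear (hypothetical CI)
print(classify_rate_ratio(1.15, 1.60))  # substantially higher risk
```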
Study quality assessment
The reporting quality of the study was assessed using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement (Von Elm et al., 2014), and the risk of bias and external validity using an adapted version of the Newcastle-Ottawa Scale (NOS) (Saragiotto et al., 2014; Videbaek et al., 2015). The study fulfils all the criteria of the STROBE scale except items 9 and 10 (Appendix A2). Regarding the adapted NOS scale, only item 6 was not fulfilled (Appendix A3). Thus, the reporting and external validity quality of the present study can be considered high according to the qualitative descriptors proposed by Von Elm et al. (2014) and Wells et al. (2013), respectively.
RESULTS
During the three seasons, four players dropped out (transferred to another club or released by the club), but their injury data for their time at the club were included. The average duration of each season was 34.3 ± 2.1 weeks, with 31 ± 2.7 matches per season and 3.3 ± 1.3 training sessions per week. Player and team characteristics are presented in Table 1.
Overall, match and training incidence
A total of 30 injuries were reported in 15 different players during the three seasons (two match injuries and 28 training injuries) within a total exposure time of 4,446.1 h (310 h of match exposure and 4,136.1 h of training exposure), which is equivalent to an overall incidence rate of 6.75 injuries per 1,000 h of exposure (95% CI [6.47-7.02]). One of the injuries was not taken into account because the player had to retire from the sport as a result of it. The match injury rate was similar to the training injury rate (6.45, 95% CI [6.38-6.52] vs. 6.77, 95% CI [6.50-7.04] injuries/1,000 h), with no statistically significant (p > 0.05) and only clinically irrelevant (very likely trivial) differences, and 38% (15/39) of players sustained at least one injury during the three seasons. Players sustained 0.77 injuries per season on average, which is equivalent to 10 injuries per season for a squad of 13 players.
The injury incidence and characteristics of the injuries during the three seasons are shown in Table 2. Wings had a very likely higher incidence rate (96.6% likelihood) than goalkeepers and most likely higher (100% likelihood) than pivots. Finally, goalkeepers had a likely higher incidence rate (76.6% likelihood) than pivots.
Injury mechanism
Two out of three injuries were due to trauma and one out of three was due to overuse. The incidence rate of traumatic injuries was most likely higher (100% likelihood) than that of overuse injuries. No foot/toe injuries were reported. In terms of paired comparisons, thigh injuries occurred more frequently (100% likelihood) than injuries in other lower extremity regions. Ankle injury rates were most likely higher (100% likelihood) than knee, hip/groin and lower leg/Achilles tendon injuries. There were no meaningful differences between the remaining paired combinations.
Type of injuries
The mean incidence of each injury type grouping is presented per 1,000 h of exposure with 95% CIs. Most injuries were diagnosed as muscle/tendon injuries. Likewise, joint (non-bone) and ligament incidence rates were most likely higher (100% likelihood) than those of fractures, bone stress and contusions. Comparisons between severity levels showed that moderate injury incidence rates were most likely higher (100% likelihood) than other severities, and minimal and mild injury incidence rates were most likely higher (100% likelihood) than severe and career-ending injuries.
Severity of injuries
The recorded overall time loss from injuries was 429 days, so the overall injury burden during the three seasons was 96.5 days lost/1,000 h of exposure (58.1 in matches and 99.4 in training). Figure 1 shows a quantitative risk matrix illustrating the relationship between the severity and incidence of the most common reported injuries. For each injury type, severity is shown as the average number of days lost (log scale), while incidence is shown as the number of injuries per 1,000 h of total exposure for that injury type. The shading illustrates the relative importance of each injury type: the darker the color, the greater the injury burden, and the greater the priority that should be given to prevention. Furthermore, last women and pivots showed the highest injury burden (40.9 and 33.3 days lost/1,000 h exposure) compared to goalkeepers and wings (7.9 and 14.4 days lost/1,000 h exposure). On the other hand, muscle/tendon injuries and joint (non-bone) and ligament injuries showed similar injury burdens (44.98 and 49.48 days lost/1,000 h exposure) although their overall incidences were significantly different.
Figure 1: Quantitative risk matrix of injuries, illustrating the relationship between the severity (consequence) and incidence (likelihood) of the most common injuries.
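The risk matrix in Figure 1 is straightforward to reproduce: severity is the mean days lost per injury, incidence is the rate per 1,000 h, and their product is the burden. The sketch below uses the study's total exposure but hypothetical per-type counts and lay-off days, since the per-type raw data are not reproduced here.

import matplotlib.pyplot as plt

exposure_h = 4446.1  # total exposure in this study

# Hypothetical (n_injuries, total_days_lost) per injury type, for illustration only
types = {"muscle/tendon": (12, 200), "joint/ligament": (6, 220), "contusion": (4, 9)}

for name, (n, days) in types.items():
    incidence = n / exposure_h * 1000  # injuries/1,000 h (likelihood)
    severity = days / n                # mean days lost per injury (consequence)
    burden = days / exposure_h * 1000  # days lost/1,000 h; equals incidence x severity
    plt.scatter(incidence, severity)
    plt.annotate(f"{name} ({burden:.1f} d/1,000 h)", (incidence, severity))

plt.yscale("log")
plt.xlabel("Incidence (injuries/1,000 h)")
plt.ylabel("Mean days lost per injury (log scale)")
plt.show()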
DISCUSSION
The overall, training and match incidence rates reported in the current study were comparable to those found in the only study (to the authors' knowledge) that has provided the three incidence rates separately in a cohort of 17 female futsal players (Angoorani et al., 2014) (4.7, 3.1 and 10.7 injuries per 1,000 h of overall, training and match-play exposure, respectively). Conversely, the match injury incidence reported in the current study (6.45 injuries per 1,000 h of match play) is lower than that reported by Hamid, Jaafar & Ali (2014) in the Malaysian female futsal league (29.6 injuries per 1,000 h of match-play exposure). This discrepancy may be attributed to the more congested competitive calendar in the study carried out by Hamid, Jaafar & Ali (2014) compared to ours. While the Malaysian league lasted approximately 22 weeks (1st July until 28th November), with a break in August (because of fasting during Ramadan) and one or two matches per week, the three seasons (2015-2018) of the Spanish second division analyzed in the current study lasted 8 months (average of 34.3 ± 2.1 weeks) with two break periods of 2-3 weeks (at Christmas and Easter) and one match played per week (usually at the weekend). This hypothesis may be supported by evidence from prospective epidemiological studies carried out in elite male futsal players (Ribeiro, Oliveira & Costa, 2006; Junge & Dvorak, 2010) and football players (Dvorak et al., 2011; Junge & Dvořák, 2015) during international tournaments (i.e., World Cups), which have shown higher incidence rates in comparison with those conducted during national league futsal (Hamid, Jaafar & Ali, 2014; Álvarez Medina et al., 2016) and football (Noya Salces et al., 2014; Stubbe et al., 2015). This is likely due to the higher match demands during international tournaments, with relatively shorter recovery times between matches. These tournaments also tend to occur at the end of long competitive league seasons, where accumulated fatigue may also contribute to the higher incidence rates. Unlike data from other team sports (regardless of the sex of the players) (i.e., football (Giza et al., 2005; Waldén, Hägglund & Ekstrand, 2007), basketball (Borowski et al., 2008), netball (Best, 2017)), where match injury incidence is always notably higher (almost 10 times) than the injury rate obtained for training sessions, in our study both incidence rates were similar. The latest trends in strength and conditioning for team sports suggest that training session design (i.e., workload, intensity, duration) should, when possible, mimic match demands so that players are better prepared for what they face during matches (Gabbett, 2016). Perhaps the training sessions designed by the team staff included a large number of repeated high-intensity actions (e.g., accelerations and decelerations, changes of direction) in order to replicate the evolving nature of the futsal game. However, an excessive training load and/or insufficient recovery from previous efforts might have forced players to perform some of these highly demanding training sessions in suboptimal states of readiness, which could have increased the risk of injuries (mainly muscle-tendon and ligament injuries) (Gabbett, 2004).
To determine whether or not futsal players are in an optimal state of readiness for the stress that training will elicit, it is advisable to monitor daily training load (internal and external) and strain, wellbeing and recovery status from previous efforts, and also to include regular physical performance tests as a component of the training program (Bouaziz et al., 2016; Elloumi et al., 2012). This information might help coaches and physical trainers to constantly re-adjust the design of the training sessions throughout the season so that the physical and psychological demands imposed on the players do not negatively affect their optimal readiness to perform.
When exploring differences in incidence rates by playing position, our data on goalkeepers and outfield players differed from the findings previously reported by Hamid, Jaafar & Ali (2014). Their study, also in female futsal players, showed a higher incidence rate in goalkeepers, whereas we found that outfield players showed a higher incidence and a higher number of days off per injury than goalkeepers. Our findings are similar to what has been reported in other team sports such as handball (Tsigilis & Hatzimanouil, 2005) and football (Mallo et al., 2011; Falese, Della Valle & Federico, 2016). It is difficult to ascribe a reason for the discrepancy between the findings of Hamid, Jaafar & Ali (2014) and our current study. However, it might be due to the fact that outfield players need to perform a larger number of repeated high-intensity multiplanar movements every few seconds (Doğramacı & Watsford, 2006), which may place them at a higher risk of injury than goalkeepers.
Previous studies have indicated that a large percentage of injuries in male futsal players (Ribeiro, Oliveira & Costa, 2006; Junge & Dvorak, 2010) are caused by contact trauma; however, the current study demonstrates that most injuries sustained by female players are non-contact (>90%). Our results are in agreement with the study of Angoorani et al. (2014) and might be partly attributed to the fact that both studies included training injury incidence data, something that other studies have failed to do. Furthermore, the higher number of high-intensity phases observed in elite male players during the course of futsal play (Carling et al., 2015; Naser, Ali & Macadam, 2017) might generate more tackling situations and partially explain the fact that males suffer more contact injuries than females.
With respect to the location of futsal-related injuries, and similar to previous studies in male (Ribeiro, Oliveira & Costa, 2006; Junge & Dvorak, 2010; Álvarez Medina et al., 2016) and female futsal players (Angoorani et al., 2014; Hamid, Jaafar & Ali, 2014), lower extremity injuries were by far the most frequent (83.3% of all the injuries recorded). The thigh (50% of all the injuries recorded) was the anatomical region of the lower extremity where injuries occurred most frequently, followed by the knee (6.7%) and ankle (6.7%). Furthermore, the most common type of injury grouping was muscle/tendon injuries followed by joint (non-bone) and ligament injuries. As futsal is a fast-paced game relying mostly on the lower extremity for ball control, involving sprinting and frequent changes of direction, such observations were anticipated. In football, it has been demonstrated that player match availability has a strong correlation (r > 0.85) with team success (i.e., ranking position, games won, goals scored, total points) (Eirale et al., 2013; Hägglund et al., 2013; Carling et al., 2015). If this also holds for futsal, then injury prevention measures should focus not just on reducing the incidence of the most frequent injuries but also on reducing the injuries with the highest burden (i.e., those that keep players out of training and match play the longest) (Bahr, Clarsen & Ekstrand, 2017). According to the results of this study, knee and thigh injuries are those with the highest injury burden, with 31.9 and 29.7 days of absence per 1,000 player-hours, respectively. In particular, medical and fitness staff should implement measures mainly (but not solely) aimed at reducing the number and severity of ACL ruptures and hamstring and quadriceps muscle injuries. It should be noted that one player from the team had to retire from futsal due to an ACL rupture, which was not included in the injury burden calculation as the number of days lost was not defined. This reinforces the need to deliver targeted interventions aimed at reducing this devastating and relatively frequent (two cases in the three seasons recorded in our study for a single team) type of injury in female athletes. It should also be highlighted that the overall (31.7 days) and training (30 days) injury burdens of the last season analyzed (2017/18) were significantly lower than those obtained for the two previous seasons (overall = 155.3 (2015/16) and 108.4 (2016/17) days; training = 165.6 (2015/16) and 108.0 (2016/17) days). The fact that the club kept the same medical staff and head coach during the three seasons under study may partly explain this circumstance. In this sense, and similar to what was found in previous studies (Ekstrand et al., 2018; Lausic et al., 2009), a potential gradual improvement in the quality of internal communication, not only among the members of the medical staff but also between the medical staff and the coach, over the three consecutive seasons may have had a positive impact on the players' availability for futsal play in the last season. In fact, according to Ekstrand et al. (2018), the measures designed to reduce the injury burden in elite teams should not only address the traditionally proposed modifiable injury risk factors, for example, eccentric strength deficits (Croisier et al., 2008; Petersen et al., 2011; Van Dyk et al., 2016), poor neuromuscular control (Lees & Nolan, 1998; Hewett et al., 2005), altered muscle architecture (Lees & Nolan, 1998; Arnason et al., 2004; Timmins et al., 2016), and player load and match frequency (Rahnama et al., 2003; Miloski, Freitas & Barra-Filho, 2012), but also newer external factors such as job security, club stability, and player adherence and coach compliance with the injury prevention programs applied. The inclusion of updated and evidence-based advancements in injury management (including diagnostic techniques, treatment approaches and monitoring tools) might also have a positive impact on the injury burden.
As expected, new injury rates were higher than recurrent injury rates (5.6 vs. 1.1 injuries per 1,000 h). However, the recurrence rate identified in the present study may be considered high: 20% of recurrent injuries (mainly lower extremity muscle and tendon injuries) occurred within 2 months after return to play. This may be regarded as a sign of premature return to training/play and incomplete or inadequate rehabilitation. The lack of evidence-based criteria for a safe return to training/play may have resulted in injured players returning sooner than recommended. The desire to have players available for important matches, or allowing them to play with ongoing minor symptoms, might be the two primary reasons behind the high recurrent injury incidence rate. Future studies should extend our current knowledge of the decision-making process for a safe return to training/play, for example by developing learning algorithms or artificial intelligence-based models that identify when a player is successfully rehabilitated. Furthermore, medical and fitness staff should allow players enough time for rehabilitation before return to training/play.
Regarding when most injuries took place, the findings indicate two periods when they are more likely to occur: October and January-February. The higher number of injuries during October may be explained by the fact that training loads during the preseason period are much higher than during the competitive period (Miloski, Freitas & Barra-Filho, 2012), and accumulated fatigue may increase the injury risk during the first weeks of competition. Petersen et al. (2010) reported a higher incidence in the 2 months after the winter break (January-February), which is consistent with the results of the present study.
Limitations
Despite being one of the first prospective studies to analyze the incidence rates and characteristics of futsal-related injuries in female players, some limitations must be considered. The sample size of players and injuries is small, and the results should be interpreted cautiously (especially the incidence rates reported for specific and less frequent injuries). The analysis of only one team limits the external validity of the results. Consequently, it is unknown whether female players from other teams, in which there could be a higher (or lower) medical staff-to-player ratio or access to other staff (such as strength and conditioning coaches, psychologists and nutritionists), would show injury incidence rates and characteristics similar to those reported in the current study. Even though all female players had sub-elite status, most of them had jobs besides futsal that could alter their risk of injury and recovery time, for example, by preventing them from training or taking full advantage of medical treatment. Therefore, future studies are needed to analyze whether elite female futsal players on full-time (professional) contracts show different injury incidence rates, characteristics and burden.
CONCLUSIONS
Sub-elite female futsal players (particularly outfield players) are exposed to a substantial risk of sustaining injuries. Most injuries had a non-contact mechanism, with the lower extremity being the most frequently injured anatomical region. Knee (ACL tears) and thigh (hamstring and quadriceps muscle strains) injuries carry the highest injury burden. Special attention should be given to the first weeks of competition after preseason and soon after the Christmas break, as incidence rates peak during these periods in female futsal players. Medical and fitness staff should focus on designing, implementing and then evaluating preventive measures that target the most common diagnoses highlighted in this study, namely ligament and muscle/tendon injuries, as well as making sure that return-to-training/play criteria are in place, in order to reduce the injury burden in female sub-elite futsal players.
ADDITIONAL INFORMATION AND DECLARATIONS Funding
Iñaki Ruiz-Pérez was supported by a pre-doctoral grant from the Ministerio de Economía y Competitividad (FPI BES-2015-07200), Spain. Francisco Ayala was supported by a postdoctoral grant from the Seneca Foundation (postdoctoral fellowships funded by the regional sub-program focused on postdoctoral development, 20366/PD/17), Spain. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. | 2019-11-07T15:30:13.069Z | 2019-11-05T00:00:00.000 | {
"year": 2019,
"sha1": "b684e2515dadf188fb82cdb17da1e73b0618cbc4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.7989",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7371be66162c310c50b4a09f81e98f65e4d36ffb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231803232 | pes2o/s2orc | v3-fos-license | Our experience of total pericardiectomy for constrictive pericarditis: a comprehensive analysis over a period of 5 years
Introduction Constrictive pericarditis (CP) usually presents as a result of chronic fibrous pericardial thickening and calcification of the pericardium which causes reduced cardiac output. Despite the lack of prospective studies comparing the different therapeutic strategies, surgical pericardiectomy is a valuable treatment under most circumstances. Aim We analyzed our records to highlight the predictors of morbidity and mortality of pericardiectomy and also short-term surgical outcome of the same procedure in a single center. Material and methods We carried out a comprehensive retrospective analysis of the records of patients who underwent surgery for CP at our institute between 2013 and 2018. 30 patients underwent isolated pericardiectomy. All patients underwent median sternotomy and total pericardiectomy without the use of cardiopulmonary bypass. Pre-operative, intra-operative and post-operative characteristics were noted. Results Fifteen patients had a history of pulmonary tuberculosis. The majority of the patients presented with NYHA grade III or IV. 60% of the patients were male. The preoperative mean central venous pressure was 24 ±9 mm Hg and decreased to 9 ±5 mm Hg after surgery. The 30-day mortality was 6.66% (2/30). Morbidity was mainly due to low-cardiac output syndrome (n = 4). A total of 26 patients had significant improvement in their NYHA status. Conclusions Although pericardiectomy for CP remains associated with some operative mortality, the short-term outcome is favorable, and surgical treatment is able to improve the functional class in the majority of survivors.
Introduction
Constrictive pericarditis (CP) usually presents as a result of chronic fibrous pericardial thickening and calcification of the pericardium [1]. The thickened and fibrotic pericardium causes reduced cardiac output as a consequence of impaired diastolic filling of the cardiac chambers [2]. Despite the lack of prospective studies comparing the different therapeutic strategies, surgical pericardiectomy is a valuable treatment under most circumstances [3-8]. The main benefits of this operation are the increase in survival, the relief of symptoms, and the prevention of disease progression [9-12].
Aim
The outcomes of pericardiectomy would benefit greatly from appropriate surgical strategies and perioperative medical management based on the identification of perioperative risk factors [5, 13-15]. We analyzed our records to highlight the predictors of morbidity and mortality of pericardiectomy, as well as the short-term surgical outcome of the procedure, in a single center.
Material and methods
We carried out a comprehensive retrospective analysis of the records of patients who underwent surgery for CP at our institute between 2013 and 2018. We identified 35 patients who underwent surgery for CP. Out of these 35 patients, 30 patients underwent isolated pericardiectomy and 5 patients had concomitant surgery for associated cardiac conditions along with pericardiectomy. We analyzed the records of the patients who underwent isolated pericardiectomy only.
The diagnosis of CP was made on the basis of clinical, echocardiographic, radiological, surgical, and pathological criteria. Patients with tuberculosis and CP were started on anti-tuberculosis medicines as per the guidelines and were operated on only after the intensive phase of the medicines was over. Anti-tuberculosis medicines were continued after surgery for the designated time period in the continuation phase.
All patients underwent median sternotomy. In all cases of pericardiectomy, a circuit for extracorporeal circulation was ready to use, and the perfusionist was present in the operating room. External defibrillation pads were always installed before the operation. All patients underwent total pericardiectomy, which was defined as phrenic-to-phrenic excision of the pericardium, from the great arteries superiorly to the diaphragmatic surface inferiorly [16]. After median sternotomy, the pericardium was palpated to identify a relatively soft and uncalcified area, and the thymus was removed laterally. An incision was made over the pericardium. Dissection was started at the base of the aorta, extended downwards to the lateral and posterior walls of the left ventricle and the pulmonary veins, followed by the diaphragmatic pericardium. The pericardium over the right atrium and venae cavae was resected last. The myocardium was then exposed to achieve mobilization of the heart down to the phrenic nerves. Dissection and pericardiectomy beyond the phrenic nerves was not needed in any of the patients. In the presence of dense adhesions that were impossible to separate, we incised the area in a grid pattern, taking care not to cause any injury. If calcified plaques penetrating the epicardium were present, we had to leave small islands of calcified pericardium. Cardiopulmonary bypass was not needed in patients who underwent isolated pericardiectomy (Figures 1 and 2).
Pre-operatively, the causes of CP were assessed and the baseline clinical details and investigations of the patients were recorded. Intra-operative findings, central venous pressure before and after surgery, and causes of perioperative morbidity and mortality were noted. One-year survival and improvement in functional class were assessed.
Statistical analysis
Results are presented as mean ± standard deviation or percentages. For risk analyses, a multiple logistic regression model was developed using a forward stepwise variable selection method. The Fisher exact test or chi-square test was used for categorical variables, and Student's t-test was employed for continuous covariates. P-values ≤ 0.05 were considered significant. Analyses were undertaken using SPSS version 23 (IBM).
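The workflow described above can be sketched in Python (the original analyses were run in SPSS): univariate screening with Fisher's exact test and Student's t-test, then a multiple logistic regression whose exponentiated coefficients give odds ratios of the kind reported in Table IV. The data frame below is entirely hypothetical and only mirrors the variable structure.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

np.random.seed(0)
# Hypothetical patient-level data mirroring the variables analyzed here
df = pd.DataFrame({
    "nyha": np.random.randint(2, 5, 30),        # preoperative NYHA class
    "age": np.random.randint(15, 60, 30),
    "tb": np.random.randint(0, 2, 30),          # tuberculosis etiology
    "renal_fail": np.random.randint(0, 2, 30),  # postoperative renal failure
    "event": np.random.randint(0, 2, 30),       # mortality or major morbidity
})

# Fisher's exact test for a 2x2 categorical contrast
_, p_fisher = stats.fisher_exact(pd.crosstab(df["tb"], df["event"]))

# Student's t-test for a continuous covariate
_, p_t = stats.ttest_ind(df.loc[df.event == 1, "age"], df.loc[df.event == 0, "age"])

# Multiple logistic regression; exp(coefficients) are the odds ratios
X = sm.add_constant(df[["nyha", "age", "tb", "renal_fail"]])
fit = sm.Logit(df["event"], X).fit(disp=0)
print(np.exp(fit.params))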
Results
Table I shows the causes of CP and their distribution in our study. Fifteen patients had a history of pulmonary tuberculosis. The cause was not determined in 11 patients, and in 4 patients some other infection (viral in 3 patients and bacterial in 1) was the cause. Table II summarizes the baseline clinical details of the patients. The majority of the patients presented with NYHA grade III or IV, and 60% of the patients were male.
Echocardiography revealed a thickened pericardium in all patients. Preoperative computed tomography (CT) was done in all patients, and part of the pericardium was demonstrated to be thickened or calcified.
Surgical findings included a markedly thickened pericardium with some calcification in 20/30 patients, caseous necrosis in the pericardial cavity in 18/30 patients, and a calcified pericardium penetrating the myocardium in 6/30 patients. The preoperative mean central venous pressure was 24 ±9 mm Hg and decreased to 9 ±5 mm Hg after surgery (p < 0.001). Surgical pathology specimens showed granulomas (10/30) as well as inflammation and fibrin deposition (17/30). All bacterial cultures were negative.
There were 2 deaths, and both were cardiac-related. They occurred in the perioperative period as a result of low cardiac output syndrome due to right-heart failure. The 30-day mortality was 6.66% (2/30). Morbidity was due to low cardiac output syndrome (n = 4), acute renal failure (n = 1), respiratory insufficiency (n = 2), mediastinitis (n = 1) and re-exploration for bleeding (n = 1) (Table III). A total of 26 patients had significant improvement in their NYHA status, with almost all discharged patients remaining in NYHA class I/II during follow-up. One-year survival in 28 patients was 92.86%. The multivariate analysis of composite surgical mortality or major morbidity is summarized in Table IV. Multivariate analysis showed that preoperative NYHA class (odds ratio (OR) = 2.54; p < 0.001), age (OR = 3.78; p = 0.01), tuberculosis (OR = 20.25; p < 0.001) and postoperative renal failure (OR = 2.73; p = 0.02) were each positively associated with increased mortality or major morbidity.
Discussion
The clinical outcome of pericardiectomy for CP remains constrained by high peri-operative mortality. Our study highlights that some pre- and peri-operative factors, mainly related to the clinical condition of the patients, can adversely affect the short-term outcome. Idiopathic or viral pericarditis is the predominant cause in the Western world, but tuberculosis is still a common cause of CP in developing and underdeveloped countries (especially in Asia) [2, 5, 14, 17]. Patients with traumatic CP, post-surgery CP or radiation-induced heart disease were not found in our study; on the other hand, 50% of our patients had tuberculosis. The mean age in our study was 32 years, which is about 10 years younger than that reported in earlier studies [13-15]. This finding reflects the difference in the etiology of CP.
In our study, CP was more common in males (60% vs. 40%), which is consistent with other series [5, 13-15, 18]. No single approach should be used to diagnose all cases of CP [19]; the diagnostic approach should be individualized for each patient. The prominent signs are pleural effusions, ascites, leg edema, and increased jugular venous pressure. Sometimes, however, patients may not present with sufficient signs or symptoms for a definitive diagnosis to be made. In such cases, additional imaging is required and recommended.
Echocardiography should be the initial noninvasive imaging employed [5,17,[19][20][21]. CT and MRI give more information than echocardiography and should be utilized wherever available. We followed the same approach and CT was done in all the patients. CT helped us to identify pericardial calcification more accurately and we were able to plan our intra-operative strategy with more ease.
Pericardiectomy is the most effective curative treatment for chronic CP [5, 13-15, 17]. In the current study, preoperative NYHA grade was positively associated with mortality and major morbidity; hence, in our experience, patients with CP should be operated on as soon as the diagnosis is made and before they develop NYHA grade III heart failure. In our institute, pericardiectomy is carried out within 1 week after the diagnosis, before clinical manifestations become worse. Various approaches and methods (left anterolateral thoracotomy; median sternotomy; U incision with the base of the U lying at the left sternal border; bilateral thoracotomy) have been described since Rehn and Sauerbruch conducted a successful pericardial resection for chronic CP [22]. The most commonly used approach is median sternotomy, which is also the approach of choice at our institute. Median sternotomy provides more radical clearance of the pericardium over the right atrium and venae cavae, allows extensive pericardial removal, and also facilitates the use of cardiopulmonary bypass if needed [13, 20]. Chowdhury et al. suggested that delayed improvement and persistent symptoms are most commonly the result of incomplete decortication [5]. However, Schwefer et al. suggested that the long-term outcome is related not only to the extent of the surgery but also to the etiology of pericardial disease and preoperative NYHA status [19]. In the current study, despite total pericardiectomy, 6.66% of patients died early postoperatively due to low-output syndrome.
A correlation between NYHA grade and overall or early survival has been observed in our study and in other studies; hence, early pericardiectomy is advocated [23]. In the current study, low-output syndrome and respiratory insufficiency were the most common postoperative complications. Improved perioperative management and medical therapy are important to avoid low cardiac output and to restore right-heart function. Diuretics, inotropes, and vasodilators are the best medical therapies in the perioperative period [17]. However, when faced with a markedly raised venous pressure and edema, there is a temptation to use diuretics overzealously; such management can lead to sudden death due to electromechanical dissociation. Care should also be taken to start proper treatment for the underlying etiology pre-operatively.
In the current study, tuberculosis was the most common etiology of chronic CP, and we started our patients on anti-tuberculosis medications pre-operatively as per guidelines. These medicines were continued postoperatively for a period of 6-9 months. Tuberculous CP can involve the lungs significantly [2], and preoperative chronic lung disease also has a considerable negative effect on postoperative results. Effective interventions can include aggressive physiotherapy and fiberoptic bronchoscopy, along with aggressive screening for postoperative pneumonia. Blood transfusion may be needed for CP patients suffering from tuberculosis because of long-term malnutrition and deprivation of cardiogenic nutrients [14]. Tuberculosis was also positively associated with mortality in the present study. Hence, proper management of tuberculosis is a must for a good surgical outcome.
The current study is a retrospective observational study with a small cohort and an analysis of short-term results only. Long-term follow-up data on survival are lacking, which limits our ability to understand the long-term benefits of total pericardiectomy. We found a mostly infective etiology for chronic CP in the current study and analyzed the outcome of pericardiectomy for this etiology; the outcome of pericardiectomy for other etiological factors remains unknown.
Conclusions
Although pericardiectomy for CP remains associated with some operative mortality, the short-term outcome is favorable, and surgical treatment is able to improve the functional class in the majority of survivors. Some preoperative and peri-operative factors, mainly related to the patients' clinical conditions, can adversely affect the short-term outcome. Careful perioperative management and surgical intervention upon or before right-heart failure may improve outcome. This is accomplished more readily through median sternotomy in patients with CP. Routine use of cardiopulmonary bypass during pericardiectomy is not necessary.
Disclosure
The authors report no conflict of interest. | 2021-02-05T05:15:16.839Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "b041d6a9d90a2c9908fb1e3fc1e942d1d5d3ac8b",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-40/pdf-42910-10?filename=Our%20experience.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b041d6a9d90a2c9908fb1e3fc1e942d1d5d3ac8b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249305144 | pes2o/s2orc | v3-fos-license | Longitudinal genomic alternations and clonal dynamics analysis of primary malignant melanoma of the esophagus
Primary malignant melanoma of the esophagus (PMME) is a rare gastrointestinal melanoma with a high rate of recurrence and metastasis. The standard of care for PMME has not been established yet due to a lack of understanding of its clinical and molecular pathogenesis. Thus, we performed genomic profiling on a recurrent PMME case to seek novel opportunities for the management of this rare disease. Between 2013 and 2016, 6 tissue samples, including 3 from the primary tumors, 2 from the relapsed tumors, and 1 from a normal control, were collected from a patient diagnosed with PMME and were subjected to whole-exome sequencing to track the dynamic genetic changes. Additionally, we analyzed a cohort of 398 samples obtained from the TCGA skin cutaneous melanoma (TCGA-SKCM) dataset to assess the frequency and determine the clinical implications of the genomic events found in the present study. ARHGAP35 (p.L1022M) was the only mutation shared across the temporal PMME lesions. The PMME samples showed high levels of genetic instability and intra-tumor heterogeneity, and they shared several concordant copy number variations (CNVs). All lesions were concordant with the evolution trajectory, and shrinkage of the founding clone caused the subclonal population to become dominant in PT1c, which was likely the reason behind metastatic seeding. ARHGAP35 mutations were found in 6% of the TCGA-SKCM cohort samples, and their presence was associated with poor progression-free survival (PFS) in both univariate and multivariate Cox regression analyses. Our study shows that the primary tumor clone disseminates early in PMME, which highlights the need to understand the mechanisms involved in early PMME recurrence to optimize treatment.
cancer and less than 0.05% of all melanoma subtypes. PMME is typically detected at a more advanced stage and tends to display high rates of recurrence and metastasis. About 18.4% of PMME patients present with metastatic disease at the time of diagnosis, and 89.7% develop recurrence or metastasis within a few months of diagnosis [1]. As a result, PMME has a poor 5-year overall survival (OS), ranging from 4% to 37.5% [1-4].
In our prior study, PMME tumors displayed high intra-tumor heterogeneity, and normal mucosa samples carried genetic alterations derived from the primary tumor clone, which suggests the possibility of dissemination during the very early stages of tumorigenesis [5]. Only 483 PMME cases had been reported up to 2021 [1, 3, 5, 6]. As a result of its rarity, the clinical and molecular pathogenesis of PMME is still not understood, and hence treatment is still not optimized. Thus, there is an urgent clinical need to understand the molecular mechanisms involved in the relapse of PMME.
Therefore, in this study, we aimed to further investigate the genomic alterations and clonal dynamics involved in PMME relapse through longitudinal genomic profiling of a patient who relapsed twice within four years of diagnosis. In addition, we explored how the genomic profiles evolved during PMME relapse under treatment pressure, and analyzed a cohort of 398 samples obtained from the TCGA-SKCM dataset to assess the frequency and determine the clinical implications of the genomic events found in the present study.
Patient characteristics
A 68-year-old Chinese female, with a 4-month history of dysphagia and retrosternal burning pain, was admitted to the Thoracic Surgery Department of Nanjing Drum Tower Hospital in February 2013. Physical examination and CT scan showed no evidence of skin melanoma lesions. The patient underwent a partial esophagectomy without any treatment before surgery. The resected specimen was a multifocal (three lesions, termed WMY-PT1a, WMY-PT1b, and WMY-PT1c), elevated, pigmented tumor measuring 0.4 cm × 1.8 cm. Histological analysis confirmed mucosal and sub-mucosal melanoma sparing the muscular tunica, without lymph node involvement (0/8 positive) or invasion of peripheral nerves and vessels. Immunohistochemical staining demonstrated positive expression of human melanoma black 45 (HMB45), Melan-A, and S100. The patient received one week of chemotherapy with fotemustine and oxaliplatin as adjuvant therapy, which was stopped for intolerance. Instead, the patient was treated with five courses of dendritic cell (DC)-cytokine-induced killer (CIK) (DC-CIK) immunotherapy combined with recombinant human interleukin-2 and recombinant human granulocyte-macrophage colony-stimulating factor within one month. In September 2014, esophagogastroscopy revealed two elevated tumors at 24 cm from the incisors (WMY-R1). The lesions were removed via endoscopic submucosal dissection (ESD), and postoperative pathological examination confirmed a PMME diagnosis. After surgery, the patient received additional immunotherapy using temozolomide combined with dendritic cell-cytotoxic T lymphocytes (DC-CTLs) for two weeks. A second relapse (WMY-R2) was identified in June 2016 following an endoscopic examination. The patient then underwent ESD treatment, and postoperative pathological examination again confirmed a PMME diagnosis. In August 2016, an abdominal computed tomography (CT) scan revealed a mass in the right lobe of the liver. Positron emission tomography-computed tomography (FDG PET/CT) showed hypermetabolic lesions in the right liver lobe with a maximum standardized uptake value of 7.5 (Fig. S1). The patient was diagnosed with liver metastasis and died in November 2016. This study was approved by the Internal Review Board of Nanjing Drum Tower Hospital and was conducted in accordance with the Declaration of Helsinki (revised in 2013).
Extraction of tissue samples
Six formalin-fixed paraffin-embedded (FFPE) tissue samples, including three primary PMME tumor samples, one normal esophageal mucosa sample, and two relapsed tumor samples, were collected from the same patient (Fig. 1). Genomic deoxyribonucleic acid (DNA) was isolated using the TIANamp Genomic DNA kit (Tiangen Biotech, Beijing, China) according to the manufacturer's instructions.
Whole-exome sequencing
The DNA of the PMME tumor and normal samples was fragmented with a UCD-200 ultrasonicator (Diagenode, Seraing, Belgium), and subsequently purified and size-selected with Ampure Beads (Beckman, MA, USA) following end repair, "A" base addition and adaptor ligation. The purity and concentration of the DNA were determined using a NanoDrop 2000 spectrophotometer and a Qubit 2.0 Fluorometer with the Quant-iT dsDNA HS Assay Kit (Thermo Fisher Scientific, MA, USA). DNA libraries were prepared using the TruSeq Capture kit (Illumina, San Diego, CA, USA). Whole-exome paired-end sequencing was performed on an Illumina HiSeq X10 (Illumina, San Diego, CA, USA) at Novogene (Beijing, China). After filtering out low-quality reads and reads containing adaptor sequence, the remaining reads were mapped to the reference human genome (hg19) using the BWA aligner (version 0.7.10). For quality control purposes, BAM files with target sequences at an average depth of 180× for the tumors and 150× for the normal sample were kept.
Annotations of the somatic single nucleotide variants (SNVs) and small insertions and deletions (indels) variants
The normal mucosal sample served as the germline control. SNVs and indels were identified using MuTect2 as packaged in GATK (version 4.1.2.0). Reliable variants were required to satisfy the following criteria: (i) an allele frequency of at least 0.01 in the sample; (ii) an allele frequency of at most 0.001 in public databases (1000 Genomes, gnomAD, and ExAC); and (iii) location in the coding region of the genome. Variants satisfying the last criterion were defined as functional variants and were kept for all analyses except signature decomposition.
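A minimal sketch of this filtering logic is shown below; the field names (vaf, pop_af, region) are hypothetical annotations standing in for the MuTect2/annotation output, not actual column names from the pipeline.

def is_reliable(v):
    # (i) sample allele frequency >= 0.01 and
    # (ii) population allele frequency <= 0.001 (1000 Genomes/gnomAD/ExAC)
    return v["vaf"] >= 0.01 and v["pop_af"] <= 0.001

def is_functional(v):
    # (iii) reliable variants additionally located in the coding region
    return is_reliable(v) and v["region"] == "coding"

variants = [{"vaf": 0.12, "pop_af": 0.0, "region": "coding"},
            {"vaf": 0.30, "pop_af": 0.01, "region": "coding"},
            {"vaf": 0.25, "pop_af": 0.0, "region": "intronic"}]

reliable = [v for v in variants if is_reliable(v)]      # used for signature decomposition
functional = [v for v in variants if is_functional(v)]  # used for all other analyses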
Putative driver genes were curated by merging the COSMIC Cancer Gene Census (https://cancer.sanger.ac.uk/census, v90) with those reported by Bailey et al. [7] and Vogelstein et al. [8]. Sorting Intolerant from Tolerant (SIFT) and Polymorphism Phenotyping-2 (PolyPhen-2) were used to identify putative driver mutations; missense mutations identified as deleterious by either of these two algorithms were classified as putative driver mutations. The tumor mutation burden (TMB) was defined as the number of somatic coding non-synonymous variants per megabase of genome examined (33 Mb). Furthermore, The Cancer Genome Atlas (TCGA) database was searched to acquire the MAF files and the corresponding clinical data. A total of 398 skin cutaneous melanoma (SKCM) samples were downloaded from cBioPortal (http://www.cbioportal.org/, October 2021) and were subjected to our uniform in-house filter pipeline to improve the consistency of data from different origins.
Based on the minor (a1) and major (a2) allele copy numbers, the somatic copy number variants (sCNVs) were classified as duplicated (a1 + a2 > 2 and a1 > 0), haploid loss of heterozygosity (LOH) (a2 = 1, a1 = 0), copy number neutral LOH (a2 = 2, a1 = 0), duplication LOH (a2 > 2, a1 = 0) and homozygous deletion (a1 = a2 = 0). To better reflect CNV amplification and deletion, the log2 ratio was calculated and used to gauge the CNV frequency of each cohort based on a threshold of 0.15, and subsequently used to estimate the genome instability index (GII), defined as the proportion of the genome length with segmented copy number amplification (GII_Amp) or deletion (GII_Del).
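The two summary statistics defined here are simple enough to state directly in code; the sketch below assumes segments are available as (start, end, log2 ratio) tuples, which is an illustrative simplification of the actual segmentation output.

def tmb(n_nonsyn_coding, exome_mb=33.0):
    # Tumor mutation burden: non-synonymous coding variants per Mb examined
    return n_nonsyn_coding / exome_mb

def gii(segments, genome_len, log2_thr=0.15):
    # Genome instability index: fraction of the genome length amplified/deleted
    amp = sum(end - start for start, end, r in segments if r > log2_thr)
    dele = sum(end - start for start, end, r in segments if r < -log2_thr)
    return amp / genome_len, dele / genome_len  # (GII_Amp, GII_Del)

print(round(tmb(89), 1))  # 89 coding non-synonymous variants -> ~2.7 mutations/Mb,
                          # matching the median TMB reported in the Results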
Dynamic clone evolution inference
The cancer cell fraction (CCF) of each mutation was estimated by ABSOLUTE. PyClone-VI [9] and clonevol [10] were used to infer the seeding patterns associated with the primary tumor and the relapsed tumors. The PyClone-VI consensus cluster files were used to enumerate the evolutionary relationships, and the evolutionary trajectory between clones was estimated by clonevol and visualized with Fishplot [11].
Reconstruction of the phylogenetic tree
PHYLIP [12] was applied to reconstruct the phylogeny of the recurrent PMME using all non-synonymous mutations identified by WES. The maximum parsimony algorithm was adopted to build the optimal tree structure. Mutations shared by all five samples were defined as trunk events, mutations shared by fewer than five samples as branch events, and mutations harbored by only one sample as private events.
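Maximum-parsimony input for PHYLIP is a simple presence/absence matrix of the non-synonymous mutations. The helper below writes such a matrix in PHYLIP's discrete-character format; the toy calls for the five tumor samples and the normal root are made up for illustration.

def write_phylip_binary(presence, path):
    # presence maps sample name -> list of 0/1 calls over a fixed mutation set
    n_taxa = len(presence)
    n_sites = len(next(iter(presence.values())))
    with open(path, "w") as fh:
        fh.write(f" {n_taxa} {n_sites}\n")
        for name, calls in presence.items():
            # PHYLIP expects taxon names padded to 10 characters
            fh.write(f"{name[:10]:<10}{''.join(str(c) for c in calls)}\n")

write_phylip_binary({
    "Normal":   [0, 0, 0, 0],
    "WMY-PT1a": [1, 1, 0, 0],
    "WMY-PT1b": [1, 1, 0, 0],
    "WMY-PT1c": [1, 0, 1, 0],
    "WMY-R1":   [1, 0, 1, 1],
    "WMY-R2":   [1, 0, 1, 1],
}, "pmme.phy")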
Analysis of mutational signature and pathway enrichment
The Yet Another Package for Signature Analysis (YAPSA, version 3.12) was employed to analyze the mutational signatures of single base substitutions (SBS) according to the COSMIC database [13]. To better display the dynamics of the mutational signatures along the phylogenetic tree, the signature composition of each branch was computed from the mutations assigned to that branch, which were annotated with specific signatures using the YAPSA package [14].
Pathway enrichment of the mutated genes from the different phylogenetic clades was based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and analyzed using the R package clusterProfiler. g:Profiler (https://biit.cs.ut.ee/gprofiler/gost) was used to compare the altered pathways enriched by differentially expressed genes (DEGs) between the ARHGAP35-wild-type and ARHGAP35-mutant groups derived from the TCGA SKCM cohort. These DEGs were identified using the limma package with a false discovery rate (FDR)-adjusted p-value of less than 0.05 and an absolute log2 fold change of more than 1.
Immune infiltration level and immune therapy response prediction
The level of tumor-infiltrating immune cells of the SKCM samples obtained from TCGA was analyzed by TIMER2.0. TIMER is an interactive web server for online analysis and visualization of immune cell infiltration (timer.comp-genomics.org) [15] . According to the developer's instructions, we performed the comparison between ARHGAP35 -mutant and ARHGAP35 -wildtype SKCM samples by using the "Mutation" module belonging to the "Immune Association" tab and used the CIBERSORT results for further interpretation.
Immunochemistry (IHC)
The expression of RhoA/B/C proteins in the PMME tumor was assessed by IHC staining. The specimens were fixed in formalin and embedded in paraffin. Tissue sections 3 μm thick were deparaffinized in xylene and rehydrated. Heat-mediated antigen retrieval was performed with ethylenediaminetetraacetic acid (EDTA) buffer solution at pH 9.0 before commencing the IHC staining protocol. The sections were then incubated with an anti-RhoA/B/C antibody (1:20000, Boster Biological Technology Co. Ltd., China) overnight at 4 °C, and subsequently washed three times with phosphate-buffered saline (PBS). Thereafter, the sections were incubated with a goat anti-rabbit IgG H&L (HRP) secondary antibody (1:500, Boster Biological Technology Co. Ltd., China) for 30 min at 37 °C, and again washed three times with PBS. Peroxidase activity was visualized using a diaminobenzidine tetrahydrochloride solution, and the sections were counterstained with hematoxylin. Positive control sections for RhoA/B/C were obtained from tonsillar tissue, while negative control sections were obtained from normal liver tissue.
The immunoreactivity of RhoA/B/C was assessed independently by two pathologists who were blinded to the clinical background of the patient; a third party was consulted to make a judgment in cases of disagreement. Specimens with clear evidence of membrane and cytoplasmic staining were considered immunopositive.
Statistical analysis
The two-sided Mann-Whitney U test was employed for comparisons between two groups of continuous data using GraphPad Prism (version 8.0). Survival analysis was performed using the survival package in R (version 4.0.2). The hazard ratio (HR) was calculated by the univariate Cox proportional hazards model. The Kaplan-Meier method was applied for comparison of overall survival (OS) and progression-free survival (PFS) between groups, and the p-value was calculated by the log-rank test. For the SKCM cohort, the OS and PFS were obtained from the clinical data available in the TCGA database. All statistical tests were two-sided, and a p-value below 0.05 was considered statistically significant.
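For readers who prefer Python, the survival workflow described above (Kaplan-Meier curves, log-rank test, Cox model) can be reproduced with the lifelines package; the original analyses used the R survival package, and the toy survival table below is hypothetical.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical table: time (months), event flag, ARHGAP35 mutation status
df = pd.DataFrame({"time": [12, 30, 45, 8, 60, 22, 50, 40],
                   "event": [1, 0, 1, 1, 0, 1, 1, 0],
                   "arhgap35_mut": [1, 0, 0, 1, 0, 1, 0, 1]})

mut, wt = df[df.arhgap35_mut == 1], df[df.arhgap35_mut == 0]
res = logrank_test(mut["time"], wt["time"], mut["event"], wt["event"])
print(res.p_value)  # two-sided log-rank p-value

km = KaplanMeierFitter().fit(mut["time"], mut["event"], label="ARHGAP35-mutant")

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)  # univariate HR for the mutation covariate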
The repertoire of somatic mutations and copy number alterations among samples
Whole-exome sequencing achieved a median coverage of 238× (range, 184-385) for the five PMME samples and 167× for the normal control sample, with an average of 80% of the targeted regions represented by at least 100 reads. Missense mutations were the main variant class among these samples, with appreciable numbers of in-frame deletions and insertions (Fig. 2A). Overall, the five PMME samples displayed a low TMB, with a median of 2.7 mutations/Mb (range, 1.8-3.3), and showed a highly concordant SBS pattern, dominated by substitutions at cytosine (C) with smaller contributions from thymine (T) and adenine (A) (Fig. 2B). Mutational signature analysis revealed that SBS1 and SBS5 (clock-like) were dominant in all tumors, and SBS19 (unknown etiology), SBS24 (aflatoxin exposure), and SBS39 (unknown etiology) were also present across the primary and recurrent tumors (Fig. 2C). Interestingly, SBS22 (aristolochic acid exposure) disappeared at the WMY-R1 site but occurred again at the WMY-R2 site.
We then compared genomic instability between the primary PMME and the recurrences. sCNV analysis revealed common copy number gains on chromosomes 6p, 8q, 19q, and 20p, as well as loss of chromosomes 6q and 10p, across both the primary and relapsed PMME sites (Fig. 2D and Fig. S2). Additionally, the relapsed PMMEs showed higher microsatellite instability than the three primary samples and tended to have a higher GII score (Fig. 2D and E), suggesting that the recurrent genome became more unstable during tumor progression.
Potential driver genes of PMME
We then analyzed somatic alterations in known cancer driver genes predicted to have functionally deleterious effects by SIFT and PolyPhen, and identified several potential driver genes, including ARHGAP35, SF3B1, TRRAP, NF1, FOXO1, KMT2C, MAP3K1, CREBBP, and NRAS, as well as copy number amplifications of genes including ARHGAP35, FAM135B, and CDKN1A, and loss of EPHA7, CDKN2A, and PTEN (Fig. 2E). Collectively, the genetic features of the PMME in our case were consistent with previous melanoma sequencing studies [16]. Comparing the genomic features of the primary tumors and recurrences, mutations affecting several melanoma-related genes such as NRAS and MAP3K1, and copy number amplification of KRAS, were identified only in the relapsed sites. The NF1 mutation was detected only in WMY-PT1c, suggesting that tumor clones with the NF1 mutation may be either sensitive to DC-CIK treatment or at a disadvantage in tumor cell competition.
Notably, ARHGAP35 was the only mutated gene shared by all five samples, and it was simultaneously affected by both CNV and SNV, indicating the importance of this gene in the development or maintenance of PMME.
The evolutionary trajectory of PMME
To better understand the evolutionary origins of the recurrent PMME, a phylogenetic tree was constructed based on the high-confidence mutations identified in each tumor-cell-affected region for all primary and recurrent samples (Fig. 3A). The topological structure of the tree showed that this PMME complied with a branched evolutionary model and could be divided into two clades: (i) a primary clade including WMY-PT1a and WMY-PT1b; and (ii) a relapsed clade including WMY-R1, WMY-R2, and WMY-PT1c (Fig. 3A).
Although these samples displayed a high degree of inter-sample heterogeneity, a total of 20 gene mutations were found to be shared among all samples (Fig. 3B-C, Table S1). However, ARHGAP35 L1022M was the only driver mutation along the trunk. In the primary clade, the FOXO1 A172D and KMT2C P3169S driver mutations were observed in the branch, while EPAS1, LATS1, SETD2, and FAM135 mutations were private. No significant branch mutation was observed in the relapsed clade, and well-defined melanoma genes such as NF1 and NRAS occurred as private mutations in the WMY-PT1c and WMY-R2 samples, respectively. CNV alterations were added to the phylogenetic tree according to the extent of shared somatic CNVs identified in each sample. Amplifications of ARHGAP35, FAM135, CDH17, CDKN1A, AKT2, and FGF21, as well as loss of EPHA7, ZEB1, BMP5, and LATS1, were ubiquitous and conserved across all PMME samples (Fig. 2E, Fig. 3A, and Fig. S2). The SF3B1 and TRRAP mutants were also identified in intermixed tumor clones by CCF analysis (Fig. S3A-B), suggesting a multiclonal origin. Similarly, results based on SNVs, common sCNVs, and chromosomal alterations showed that these events were shared by WMY-PT1c and the recurrent samples. Collectively, the phylogenetic analysis suggested that PMME dissemination occurred during primary tumor initiation and that occult metastatic colonization may therefore account for the two recurrences.
We next constructed a mutational signature tree by analyzing the somatic mutations from each phylogenetic clade (Fig. 3D). SBS15, 24, and 39 were identified as the dominant mutational signatures. Compared with the overall mutational signature landscape (Fig. 2C), SBS1, 5, and 22 disappeared, reflecting the existence of intermixed tumor clones and high intra-tumor heterogeneity within the PMME. Interestingly, SBS24 (ascribed to aflatoxin exposure) was lost in the early recurrences (WMY-PT1c and WMY-R1) but became active again at the late recurrence site (WMY-R2). This temporal, episodic mutational signature suggests that the progenitor tumor clone could remain dormant after seeding until an adapted microenvironment was reconstructed.
The altered gene pathways were further investigated in both the primary and relapsed clades. The mutated genes in the relapsed clade (annotated in Fig. 3A) tended to be enriched in the axon guidance and TGF-beta signaling pathways (Fig. 3E), both of which are reportedly involved in metastasis [17, 18], while no significant pathway was identified in the primary clade (Fig. S3C).
Temporal clonal dynamics of PMME
The evolutionary relationships and the temporal order of driver acquisitions during PMME progression were evaluated by calculating the CCF values of the SNVs and indels affecting canonical driver genes. To further explore the relationships between tumor sites, we compared the CCFs between each pair of the five samples. The results showed a significant correlation between WMY-PT1a and WMY-PT1b, as well as between WMY-R1 and WMY-R2 (Fig. 4A). The clonality of mutations, represented by the CCF distribution, increased in the recurrent lesions (Fig. 4B), whether using all mutations (left panel of Fig. 4B) or private mutations only (right panel of Fig. 4B), suggesting that the recurrent tumors tended to have higher clonality than the late-stage primary tumors. The temporal evolution trajectory of the clonal dynamics revealed a linear evolution pattern in which all lesions were concordant along the trajectory (Fig. 4C). The ancestral clone shrank during tumor progression, and a subclonal expansion was observed in WMY-PT1c. Mutations in the ARHGAP35, SF3B1, TRRAP, MLLT6, FOXO1, NF1, and NRAS genes (Fig. 4C and Fig. S3) were noted in the dominant subclone, but only the ARHGAP35 mutation was ubiquitously shared across time. However, no mutations in cancer-related genes were observed in the founding clone (Fig. 4C). The non-driver genes whose mutations clustered in the founding clone were further analyzed by evaluating their mutation prevalence in the TCGA-SKCM cohort, and we found that RP1, ABCA13, and DCDC1 were frequently mutated, although their role as driver genes in SKCM has not been reported previously (Fig. S4). Therefore, our results showed that the recurrences were derived from the initial tumor by subclonal evolution. In summary, subclonal driver gene mutations were pervasive in this case, indicating strong positive selection during tumor progression.
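Pairwise CCF concordance of this kind reduces to a simple correlation over the mutations shared by two lesions; a minimal sketch (with made-up CCF vectors) is given below.

import numpy as np
from scipy import stats

# Hypothetical CCF values for mutations shared by two lesions
ccf_pt1a = np.array([0.95, 0.40, 0.10, 0.80, 0.55])
ccf_pt1b = np.array([0.90, 0.35, 0.15, 0.85, 0.60])

r, p = stats.pearsonr(ccf_pt1a, ccf_pt1b)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # concordance between the two lesions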
The clinical relevance of ARHGAP35
We focused specifically on ARHGAP35, as it harbored the only ubiquitously shared driver mutation and consistently maintained a growth advantage from the primary to the recurrence stage. To explore the clinical value of ARHGAP35 mutation in melanoma, we conducted a further analysis based on the TCGA SKCM dataset downloaded from the cBioPortal database. A total of 24 SKCM samples (6.03%, 24/398) carried ARHGAP35 mutations, comprising 3 nonsense mutations, 24 missense mutations, and 1 in-frame deletion (Fig. 5A). These mutations were nearly evenly distributed throughout the coding sequence and lacked a hotspot site (Fig. 5A), which indicates that ARHGAP35 might play a suppressor role in melanoma. Co-occurrence of ARHGAP35 mutation with other known melanoma driver genes, such as NF1, PTPRT, and FAT4, was observed (Fig. 5B). Survival analysis in the SKCM cohort revealed that ARHGAP35 mutation was associated with poor survival: by univariate analysis, SKCM patients with ARHGAP35 mutations had significantly worse OS (P = 0.006) and PFS (P < 0.001) than those with the wild-type gene (Fig. 5C). Multivariate Cox regression was further carried out to adjust for the effects of clinical parameters and other genes significantly associated with survival in the univariate analysis (Fig. 5D-I).
Exploring the potential function of the ARHGAP35 mutation
To gain further insight into the potential biological function of the ARHGAP35 mutation, we investigated the immune activity and the global transcriptomic changes between ARHGAP35-mutant and wild-type patients in the TCGA-SKCM cohort. According to the CIBERSORT analysis, compared with ARHGAP35 wild-type patients, ARHGAP35-mutated patients exhibited a significantly reduced fraction of resting myeloid DCs and significantly increased fractions of naive CD4+ T cells and naive B cells (Fig. S5A-C). Furthermore, the downregulated genes in the ARHGAP35-mutated group were significantly enriched in the Gene Ontology terms adaptive immune response and immune system process (Fig. S5D), while no pathway was enriched among the upregulated genes. Protein-protein interaction networks based on STRING showed strong experimental and bioinformatic evidence that ARHGAP35 can interact with RHOA, RAC1, and RASA1 (Fig. S5E). Rho activity is suppressed by GAPs such as p190A; the RhoGAP encoded by the ARHGAP35 gene is generally regarded as the main RhoGAP for RhoA in cells, so we speculated that the protein expression of Rho GTPases might have changed in this PMME patient. The IHC results showed that Rho GTPases were expressed in the cytoplasm and membrane of tumor samples but not in the normal sample (Fig. S5F).
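A minimal sketch of the immune-fraction comparison described above follows; the fractions table and column names are hypothetical, and since the paper does not state its statistical test, a Mann-Whitney U test is used here as one reasonable choice.

```python
# Sketch comparing CIBERSORT immune-cell fractions between mutation groups.
# The fractions table and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import mannwhitneyu

frac = pd.read_csv("cibersort_fractions.csv")   # one row per patient
cells = ["DC_resting", "T_CD4_naive", "B_naive"]

for cell in cells:
    mut = frac.loc[frac.ARHGAP35_mut == 1, cell]
    wt = frac.loc[frac.ARHGAP35_mut == 0, cell]
    stat, p = mannwhitneyu(mut, wt, alternative="two-sided")
    print(f"{cell}: median mut={mut.median():.3f}, wt={wt.median():.3f}, P={p:.3g}")
```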
Discussion
In this manuscript, we describe a rare case of a female patient diagnosed with PMME who underwent three surgical treatments to remove the primary tumor and, subsequently, two recurrences. According to the pathological examination, the resected tumors had negative surgical margins, and there was no evidence of metastasis at the time of surgical resection. Given that 5 cm of the esophagus had been removed at the first surgery, our analysis of driver gene heterogeneity and clonality, which revealed that the subclones in the recurrences stemmed from the initial tumor, suggests that early dissemination extended beyond the surgical resection. The hypothesis that the recurrent subclone was derived from one of the primary tumors was supported by the phylogenetic tree based on SNVs, CNVs, and mutational signatures. It should be noted that no potential driver genes or pathways were identified in the founding clone, suggesting that the founding clone had the ability to migrate but still required further gene regulation, by mechanisms such as epigenetic modification, to invoke cell proliferation. It appears that the subclones in the later relapsed sites were in a dormant state until a change in the environment triggered their rapid proliferative potential.
ARHGAP35, a less well-known driver gene, was the only significantly mutated trunk gene identified among these samples. ARHGAP35 encodes p190RhoGAP-A, which is considered a tumor suppressor involved in cell-cell junctions and cell migration through the regulation of RhoA activity [19]. The other PMME cases we reported previously also had genetic alterations in guanine nucleotide exchange factors (GEFs) and GTPase-activating proteins (GAPs) in the metastatic clade; these are two subsets of regulators controlling the guanosine diphosphate (GDP)/guanosine triphosphate (GTP) cycle of Rho GTPases [20]. Based on our analysis of the TCGA SKCM cohort, the presence of ARHGAP35 mutations was an independent predictor of poor prognosis and survival in melanoma. We also found that ARHGAP35-deficient melanomas showed lower fractions of myeloid DCs, naive CD4+ T cells, and naive B cells, and had a lower adaptive immune response. Therefore, based on the above findings, ARHGAP35 was identified as a new potential driver gene accounting for metastasis.
Most acquired driver mutations related to tumor growth were private mutations, such as those in SETD2 and LATS1 in WMY-PT1a, EPAS1 in WMY-PT1b, NF1 in WMY-PT1c, MAP3K1 in WMY-R1, and NRAS in WMY-R2. One theory suggests that early disseminated tumor cells (DTCs) create a niche environment, such as immunosuppression, that favors tumor recurrence or metastasis, and that these cells often cycle slowly until a change in the environment generates rapid proliferative potential [21]. During this time, expansion of the residual disseminated cancer is paused, and DTCs survive to fuel relapse and evade anti-proliferative treatment [22][23][24]. Our findings support this theory with phylogenetic evidence based on mutational signature analysis: the episodic mutational signature SBS24 disappeared in WMY-PT1c but reappeared in the late recurrence WMY-R2 (Fig. 3D), indicating that subclones originating from the initial clone might have survived therapy through dormancy.
Gene clustering analysis based on the driver mutations showed that the axon guidance, MAPK, and TGF-beta signaling pathways were abnormal in the relapse clade, whereas no significantly abnormal pathway was identified in the primary clade. MAPK cascade activation is central to a variety of signaling pathways and plays a key role in cell proliferation-related signaling. Late-acquired private mutations, such as those in MAP3K1, NRAS, and NF1, affect critical nodes in the MAPK pathway, which may have enabled the proliferative ability of DTCs. Axon guidance and TGF-beta signaling are both related to tumor metastasis, and it has been suggested that systemic inhibition of TGF-beta signaling could awaken dormant DTCs, fueling multi-organ metastasis [25].
It remains unclear how dormant DTCs evade recognition by the immune system. By utilizing RNA-seq and WES data from the TCGA SKCM cohort, we found that the ARHGAP35 mutation was associated with a decreased adaptive immune response, as shown by the downregulated genes enriched in the immune system process and adaptive immune response terms in the ARHGAP35-mutant group (Fig. S5A-D). An interesting finding was that myeloid DCs also decreased significantly in the mutated group; DCs act as the primary antigen-presenting cells in the tumor and play a critical role in anti-tumor immunity [26, 27]. It has been proposed that an initial tumor clone may create a pre-metastatic niche to develop a favorable immune microenvironment and progressively adapt to immune pressure during dissemination [28, 29]. Thus, we hypothesize that inactivation of the ARHGAP35 gene may modulate DCs, and that the resulting dysfunctional DCs facilitate local and metastatic progression. However, further research is warranted to confirm this hypothesis.
Conclusion
In summary, we demonstrate that the primary PMME tumor can seed recurrence from the esophagus. This highlights the need to understand the mechanisms involved in early PMME recurrence in order to optimize treatment. Our data show that ARHGAP35 plays an important role in promoting metastasis and immune suppression. The involvement of this mutation in tumor recurrence has rarely been studied in melanoma. Further research should focus on understanding the role of this mutation in early metastasis, particularly the formation of invisible metastatic colonization, which is essential for the future development of precision therapy and the prevention of metastasis in PMME and other melanoma subtypes.
Data availability
The sequence reported in this paper has been deposited in the CNGBdb (China National GeneBank database, https://www.cngb.org) under accession number CNP0001947. Sequencing quality-control information is shown in Table S2.
Novelty & impact statements
Primary malignant melanoma of the esophagus (PMME) is a rare, aggressive melanoma with a high potential for metastasis and recurrence. However, standard-of-care treatments for PMME have not yet been established due to a lack of understanding of the clinical and molecular pathogenesis of this rare disease. Our study demonstrates the importance of longitudinal genomic profiling for understanding the dynamic nature of recurrence and provides novel evidence for the early tumor cell dissemination theory. We also identified a less well-known gene, ARHGAP35, as a potential metastasis driver gene in melanoma. | 2022-06-03T15:20:43.848Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "75101585bf65ed4ca2aa7823db07b7827e094895",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.neo.2022.100811",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "95dcc9e12eb83c120266ec3f186f1ed0fcaa990f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
108687603 | pes2o/s2orc | v3-fos-license | Subak’s Capacity Building: A Little Effort Toward Food Security and Sustainable Development Goals
Nowadays, there is strong demand for organic farming. Organic farming, as part of organic agriculture, builds on the harmony of nature by optimizing the use of biodiversity and the natural recycling of organic materials, in order to reach sustainability, healthy food production, and energy savings. Changing the behavior of farmers who are accustomed to using inorganic inputs toward organic farming is not easy. This study aimed to determine (1) the costs and benefits of cultivating the local paddy variety Cicih Gondrong under a semi-organic pattern; and (2) farmer perception of semi-organic cultivation of Cicih Gondrong. This research used demonstration plot (demplot) and survey methods. The results showed that semi-organic cultivation of the Cicih Gondrong paddy is economically profitable. There were significant changes in farmer perception of the management of local rice varieties before and after the demonstration was carried out. The government's role is needed to make the organic farming program run well, especially in providing assistance and infrastructure support. Keywords— Capacity Building, Organic Farming, Sustainable Agriculture, Farmer Perception, Government Role
I. INTRODUCTION
Bali, one of the small islands of Indonesia, has a beautiful and unique blend of tradition, religion, culture, and societal aspiration that is well known around the world. Unfortunately, there is a trade-off between developing tourism to increase community welfare and enhancing agricultural activities to sustain the environment. Nowadays, the best option is to develop tourism without sacrificing the agricultural sector and the environment, for example through an agro-tourism development program.
Agriculture that is developed into agro-tourism includes smallholder agriculture, plantations, forestry, animal husbandry, and fisheries. The goal of agro-tourism development in Bali is therefore to build an integrated system of agricultural and tourism activities that develops the tourism sector and the agricultural sector at the same time, while maintaining environmental sustainability and improving the welfare of farmers. This program is encouraged by the high quality of Bali's natural resources, which hold great potential for paddy cultivation organized through the traditional religious institution called Subak.
Buleleng Regency is one area in Bali with high potential for paddy farming and a well-known rice product: Sudaji rice. This rice has a unique taste, a bright white color, and a high price. Sudaji village is located 9 km from the district capital (Keloncing), 16 km from the regency capital (Singaraja), and 88 km from the provincial capital of Bali (Denpasar). The village has hilly topography at an altitude of around 450-560 meters above sea level, rainfall of 2,000 mm/year, and temperatures between 25-28 °C. The location of Sudaji can be seen in Figure 1. These conditions indicate that the agricultural land in this village is very fertile, with abundant irrigation water. Sudaji rice has a good brand in Bali because of its good quality and fluffy texture, so it is in very high demand at a high price. Unfortunately, the green revolution encouraged farmers to use chemical inputs intensively, which lowered the quality of Sudaji rice. On the other hand, tourists are increasingly interested in organic products. Many efforts are therefore needed to restore the image of Sudaji rice, for example reintroducing organic-based rice farming technologies and the local Sudaji rice variety Cicih Gondrong.
Organic-based farming practices are reportedly able to contribute to rural development. Integrated organic farming represents an opportunity at all levels to promote rural economic development in a sustainable manner, and many new employment opportunities can grow as a result of organic-based agricultural growth. In an effort to embody and utilize the concepts of environmentally friendly (organic-based) agriculture, which relies on farmland biodiversity, it is necessary to increase the net returns of organic rice programs, implemented through revolving funds and direct assistance to the farming community in the Subak of Sudaji village.
II. LONG TERM RICE DEVELOPMENT STRATEGY MAP FOR LOCAL VARIETY CICIH GONDRONG
In order to increase the benefits of the organic rice farming program and enhance the well-being of farmers in Sudaji village, especially in the implementation of organic technology, several considerations apply: 1) The vision and mission expect Bali's agriculture to achieve a sustainable agricultural system that can guarantee food security and a dynamic, advanced agribusiness system. 2) In the 21st century, people around the world have become aware of the dangers of synthetic chemicals in agriculture, which have led to the destruction of nature and the environment, particularly agricultural ecosystems: the reduction of diverse soil micro-organisms, the killing of natural enemies, increased populations of pests and plant diseases, and pollution of soil, water, and air. Above all, synthetic chemical residues in food are very harmful to human health.
3) Consumer preferences continue to change, demanding better quality in agricultural products than before, involving aspects of quality, nutritional composition, safety of consumption, and production by activities that do not harm the environment or biodiversity and do not violate human rights. 4) The expected model increases the added value of tourism in Bali, for example by presenting local dishes that are safe for direct consumption, have high nutrient content, and are environmentally friendly; in other words, agricultural products from organic-based farming. 5) Metaphysical activities are inspired by local wisdom such as Ahimsa, Tatwamasi, and Tri Hitha Karana. These philosophical teachings require always maintaining the balance and fertility of Rwa Bhineda (two seemingly contradictory things that make life happen). In this case, natural enemies and agro-ecosystems are manifestations of Rwa Bhineda that should always be kept in balance, and the way to do so is through the renewal of organic-based agriculture.
6) Formerly, culture and religion were tightly linked to farming practice in Bali, especially in improving land fertility, crop rotation, and conservation efforts. Recently this norm has weakened due to the demands of modern agriculture, so it needs revitalization. 7) Social capital sourced from the dynamics of socio-religious groups (Subak, Banjar, and small traditional activity-based organizations called Seka) strongly supports the renewal of organic-based agriculture in Bali.
The achievement of this program will drive the achievement of food security in Bali Province. Food security was defined at the 1974 World Food Summit as the availability at all times of adequate world food supplies of basic foodstuffs to sustain a steady expansion of food consumption and to offset fluctuations in production and prices. Food security is a flexible concept, as reflected in the many attempts at definition in research and policy usage; even a decade ago, there were about 200 definitions in published writings. Whenever the concept is introduced in the title of a study or its objectives, it is necessary to look closely to establish the explicit or implied definition.
Food security as a concept originated only in the mid-1970s, in discussions of international food problems at a time of global food crisis. The initial focus of attention was primarily on food supply problems: assuring the availability, and to some degree the price stability, of basic foodstuffs at the international and national levels. That supply-side, international and institutional set of concerns reflected the changing organization of the global food economy that had precipitated the crisis. A process of international negotiation followed, leading to the World Food Conference of 1974 and a new set of institutional arrangements covering information, resources for promoting food security, and forums for dialogue on policy issues.
Food security exists when all people, at all times, have physical, social, and economic access to sufficient, safe, and nutritious food that meets their dietary needs and food preferences for an active and healthy life. Household food security is the application of this concept at the family level, with individuals within households as the focus of concern. Food insecurity exists when people do not have adequate physical, social, or economic access to food as defined above.
III. METHOD
This experimental study involved 5 farmers with a land cultivation area of 3 acres. The farmers planted the local paddy variety, Cicih Gondrong, with organic fertilizer applications. The research was carried out during the third growing season, in September 2013. Two types of data were analyzed to determine the benefit-cost ratio of organic-based cultivation of the local paddy variety, as sketched in the calculation below. Qualitative data were used to describe the process of reintroducing the local paddy variety Cicih Gondrong.
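A minimal sketch of the discounted cash-flow arithmetic used in the Benefit and Cost Analysis section below, assuming the paper's reported parameters (a discount rate of 13%/year, i.e., 4.33% per planting season, and an initial investment of IDR 27.80 million per hectare); the per-season net benefit values are hypothetical placeholders, since only the totals are reported.

```python
# Sketch of the net present value (NPV) calculation over nine planting
# seasons; the per-season benefit stream is a hypothetical placeholder.
def npv(rate_per_season, initial_investment, net_benefits):
    """Net present value (IDR) of a stream of per-season net benefits."""
    value = -initial_investment
    for season, benefit in enumerate(net_benefits, start=1):
        value += benefit / (1 + rate_per_season) ** season
    return value

rate = 0.13 / 3            # 13%/year with three planting seasons per year
investment = 27.80e6       # IDR 27.80 million per hectare
benefits = [29.04e6] * 9   # hypothetical: constant gross revenue per season
print(f"NPV over 9 seasons: IDR {npv(rate, investment, benefits):,.0f}")
```

The per-season discount rate follows the paper's own conversion of the 13% annual bank rate into 4.33% per planting season.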
A. Implementation Program Model
The program was executed using the packet-based Organic Rice Cultivation Technology model (P3BO Unmas), with assistance from a technical officer and an expert team member. The long-term development strategy for organic rice in Sudaji village is presented in Figure 2.
B. Benefit and Cost Analysis
The following are the cost and return predictions for organic farming of local rice cultivars in Sudaji village. The analysis was made per one hectare of land, using values and prices that apply in the local area. In the early stage (first production period), the organic cultivation system was not yet perfectly formed. The average yield was 5,800 kg of dry milled grain per hectare, or IDR 29.04 million. Gradually, the land becomes fully organic and ecosystem balance occurs, preserving biodiversity. The initial investment is IDR 27.80 million per hectare. The bank rate used is r = 13%/year, or 4.33% per planting season. The IDR 27.80 million invested in the first growing season will, by the end of the ninth growing season, correspond to a net present value (NPV) of IDR 265.74 million. The cash flow over nine planting periods is presented in Table 2.

V. CONCLUSIONS

Food security is a multi-dimensional phenomenon. National and international political action seems to require the identification of simple deficits that can be the basis for setting targets, thus necessitating the adoption of single, simplistic indicators for policy analysis. Reintroducing and replanting the local paddy variety Cicih Gondrong could drive food security, improve the quality of the environment, and increase the welfare of farmers. | 2019-04-12T13:54:41.366Z | 2015-02-08T00:00:00.000 | {
"year": 2015,
"sha1": "2574ef08eb2b4296a5690ca6ea893453b9973026",
"oa_license": "CCBYSA",
"oa_url": "http://www.insightsociety.org/ojaseit/index.php/ijaseit/article/download/475/515",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d4146f06a0048e798eee6f9c4fe058e27757de97",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Business"
]
} |
266243542 | pes2o/s2orc | v3-fos-license | Design and analysis of fault-tolerant sequential logic circuits for safety-critical applications
ABSTRACT
INTRODUCTION
Fault tolerance and reliability analysis play an essential role in the design and implementation of highly reliable and robust digital control systems [1]-[3]. Safety-critical control applications that use these types of electronic digital circuits, such as avionics, space, and industrial control applications, have become more vulnerable to the effects of faults stemming from different natural sources. Examples of these faults are intermittent faults, permanent single faults, transient single faults, multiple bit upsets (MBUs), and common cause faults (CCFs). All these faults may result from factors such as ionizing radiation, harsh environments, and electromagnetic interference, which can undermine and defeat traditional fault-tolerant techniques even at ground level [4]. Faults may affect digital control systems in different ways depending on the severity of the environment in which the control system is operating. Different fault-tolerant digital control systems have been developed in the literature to quickly identify the presence of a digital subsystem failure in the control system and diagnose its type. However, most of the developed digital systems achieved low levels of dependability and reliability because of the limited capability of the developed fault tolerance mechanisms and the inclusion of additional hardware components that are not necessary for the control system's operation. Although traditional fault-tolerant techniques based on hardware redundancy or reconfiguration strategies are used to mask or correct faults, they provide low fault coverage (C) and fail to meet the high degrees of dependability and reliability required in critical control systems. As an example of a computer architecture, the field-programmable gate array (FPGA) consists of a two-dimensional array of logic blocks and flip-flops connected by interconnection routing blocks. The logic blocks can perform combinational and sequential logic functions using look-up tables (LUTs) and the memory elements utilized to realize state-machine control units. Combinational components such as LUTs and routing resources are vulnerable to permanent faults. These faults can be corrected either by reloading the bitstream file or by resetting the FPGA chip. However, sequential components such as memory flip-flops are vulnerable to transient faults, which can be corrected by the next load of the configuration bitstream [5].
Several challenges stem from applying traditional fault-tolerant techniques to building reliable digital control systems. Firstly, the number of tolerated faults is limited by the number of redundant components available in the digital control system before the whole system fails. Secondly, the failure of the redundancy management unit, which monitors the operation of the digital system, coordinates the redundancy of the components, and detects whether there is a defect in a working element, may cause a whole-system failure even if there are no actual defects in the working system [6]. The major contribution of this research work is overcoming these architectural challenges by designing a novel fault-tolerant methodology that includes both static and dynamic redundant fault-tolerant systems. This approach consists of a sequential logic circuit with D flip-flop storage elements linked to a fault injection unit, a duplicate modular redundancy unit, and data monitoring units. The experimental simulation work is presented, and the results prove that the approach achieves a robust fault-tolerant digital control system that can be used as a hardware platform for ultra-dependable and safety-critical control applications.
PREVIOUS WORKS
A brief presentation of research works focusing on fault-tolerant digital systems and error detection methods is given in this section. Different methods have been used to create different types of fault-tolerant digital embedded systems, as shown in Figure 1. All of these methods are discussed in this section.
Figure 1. The different methods that have been used for creating fault-tolerant digital systems, from the literature

Almukhaizim and Makris [7] explained a methodology for creating fault-tolerant digital circuits built on an expansion of the concurrent error detection (CED) method. They used the CED method to accomplish error detection as well as to provide error diagnosis and remedy capabilities. A fault tolerance method for sequential logic circuits based on the concept of sequential finite state machines (FSMs) was proposed in [8], [9]. The suggested method relied on the addition of redundant equivalent states to safeguard a small number of states with a high likelihood of recurrence. All single errors occurring in the state variables of frequently occurring states, or in their combinational logic, were guaranteed to be tolerated by the redundant states. Their method required little area because just a few states require protection, and it improved the fault tolerance of synthesized sequential circuits. Ostanin et al. [10] presented a fault-tolerant, low-overhead, synchronous sequential circuit design. Their approach was based on a fault-secure system and consisted of only one fault-secure sequential circuit, one regular (unprotected) circuit, one checker, and one rather straightforward exclusive-OR (XOR) circuit. The scheme's dependability was demonstrated for both single stuck-at failures at gate poles and transient, intermittent path-delay faults; each subsequent fault was assumed to manifest itself after the preceding one had vanished. Ban and Junior [11] established a trade-off between reliability and hardware area overhead by applying hardening methods to arithmetic circuits. Their work also suggested several fault-tolerant strategies in which important component gates in arithmetic circuits were identified and ranked based on the consequences of a circuit output mistake. With regard to the area limitations of the design requirements, these crucial gates were hardened first. In fact, output bits that were deemed essential to a system were given greater protection priority, which lowered the likelihood of catastrophic mistakes. The researchers of [12] selected the Boolean difference error calculus (BDEC) method previously suggested in the literature and expanded it in two ways: first, to account for the impact of reliability-enhancement strategies such as redundancy, and second, to encompass sequential circuit parts. Dug et al. [13] constructed and examined two techniques for creating fault-tolerant pipelined sequential and combinational circuits on an FPGA board; the evaluated approaches were error detection and partial error correction (EDPEC) and full error detection and correction (FEDC). Shalini et al. [14] presented a selective triple modular redundancy (STMR) technique; for fault tolerance in digital circuits, hardware redundancy is a suitable approach. To enhance the timing behavior of synchronous sequential circuits, the output was precisely determined while disregarding the delay.
The selection criteria for STMR included latency and failure likelihood. It was demonstrated through simulation that the suggested approach decreased hardware failure by utilizing the TMR technique only when necessary. The researchers of [15] developed a feedback control loop connected to a digital pipeline hardware system with an appropriate dynamic model to lessen the impact of errors and faults on the output. Data-path registers of a robotic industrial arm, whose executed operations could be rewound, were selected for the correction loops to which correction factors were applied. They evaluated the cost and reliability of the suggested technique and compared them to the standard TMR approach; in comparison with the triple approach, their method employed 30% fewer slices in FPGA technology. The architectural design of a hybrid, fault-tolerant processing core that uses error detection and correction concepts against radiation faults is presented, analyzed, and simulated in [16]. Error-correcting codes were embedded among the five pipeline processing stages to identify run-time faults and operational errors. The experimental timing simulation results indicate that the proposed fault-tolerant method is efficient in its consumption of digital hardware resources, and its software operation is continuously monitored by intelligent fault-tolerant techniques.
THE PROPOSED RESEARCH METHOD
The proposed fault-tolerant sequential logic system is created to achieve high standards of dependability with respect to several fault models, including transient, intermittent, and permanent faults. In the proposed system, shown in Figure 2, three types of fault tolerance techniques are designed against different types of faults. The basic sequential circuit component investigated in this paper is the D flip-flop (F-F), a bi-stable memory element with two stable states that can store a single bit ("1" or "0") at a time. Once the storage element reads the D input signal, a checking operation is executed in the circuit to monitor whether the synchronous clock signal is high or low; the input signal propagates to the output signal on the rising edge of each clock pulse. The complement of the output signal Q is called Q-bar, as shown in Table 1.
To design a highly robust sequential fault-tolerant system that is resilient to various natural faults and single-event upsets, two fault tolerance techniques and data monitoring units for the two output signals Q and !Q were architected and embedded in the proposed system. For the first logic circuit, an XNOR gate, called the first data monitoring unit for Q, was built to compare the input of the D F-F with the next state Q: if the XNOR output is high (equal to 1), the D F-F is working normally and no fault has appeared, whereas an output of 0 indicates the appearance of an error. For this purpose, a controlled switch that depends on the XNOR output was embedded: when the switch input equals 1, the output Q flows through, and when it equals 0, the inverted value of Q flows instead. Furthermore, an XOR gate, called the first data monitoring unit for !Q, was built to compare the input of the D F-F with the next state !Q: an output of 1 indicates that the D F-F is working normally, and an output of 0 indicates a fault. A controlled switch that depends on the XOR output was embedded accordingly: when its input equals 1, the output !Q flows through, and when it equals 0, the inverted value of !Q flows instead. Consequently, these two intelligent fault tolerance techniques can efficiently tolerate an unlimited number of transient and intermittent faults. Furthermore, two additional data monitoring units for the output signals Q and !Q of another memory device were proposed. These two units use the concept of double modular redundancy (DMR) [17]-[19], with two XNOR gates and another two controlled switches responsible for detecting and correcting the effects of artificial and natural permanent faults. The idea is to use an additional spare D flip-flop: an XNOR gate compares the output of the switch that follows the first XNOR with the output of the spare D flip-flop; if its output equals 1, no error is observed and the switch allows the output of the switch following the first XNOR to flow, whereas an output of 0 indicates that an error is observed and the switch allows the output of the spare D flip-flop to flow instead. In addition, to make the execution of the proposed design deterministic and synchronous, all the digital switches are controlled by a trigger signal, so that all outputs are compared at the same time. The excitation equation of the proposed digital circuit is shown in (1):
F = [X AND Y AND ~Z AND !Q(t+1)] OR [~Z AND Q(t+1)]    (1)
Figure 3(a) presents the timing diagram of the first monitoring unit (MU1) in its normal state of operation, when no fault appears, simulated using MATLAB Simulink [20]. The input signals 'X', 'Y', and 'Z' are equal to 1, 1, and 0, respectively, and the data input of the D F-F is equal to 1; in this state, MU1 compares the input signal with the resulting output signal using the XNOR1 gate. Additionally, the D flip-flop input is checked against the complemented output using the XOR gate; if the outputs of both the XNOR and XOR gates equal '1', no fault has appeared. Furthermore, Figure 3(b) presents the MU1 timing diagram when the 'Q' output signal of the D F-F is affected by a simulated fault. In this scenario, the output of the XNOR gate equals '0', and MU1 corrects the false value, replacing it with the right value using a programmable digital switch.
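As a behavioral illustration of the monitoring-and-correction idea just described, the following Python sketch models one clock step of the D flip-flop with XNOR-based detection and switch-based correction; the function name and the fault-injection interface are illustrative simplifications, not the authors' Simulink model.

```python
# Behavioral sketch of a D flip-flop with an XNOR-based monitoring unit (MU1):
# the XNOR compares the D input with the observed Q output; on a mismatch
# (fault detected), the correction switch outputs the inverted (corrected) Q.
def mu1_step(d_input, fault_on_q=False):
    q = d_input                      # ideal D F-F: Q(t+1) = D on rising edge
    if fault_on_q:
        q ^= 1                       # inject a bit-flip fault on Q
    xnor = 1 - (d_input ^ q)         # 1 = no fault detected, 0 = fault
    corrected_q = q if xnor == 1 else 1 - q   # controlled switch
    return q, xnor, corrected_q

for d in (0, 1):
    for fault in (False, True):
        q, flag, out = mu1_step(d, fault)
        print(f"D={d} fault={fault}:  Q={q}  XNOR={flag}  output={out}")
```

In every injected-fault case the corrected output equals the D input again, which is the behavior the timing diagram in Figure 3(b) illustrates.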
RESULTS AND DISCUSSION
To evaluate the dependable and resilient behavior of the proposed fault-tolerant sequential logic circuit and to calculate how reliable and safe it is, a Markov chain comprising five descriptive states was modeled, as shown in Figure 4 and Table 2 [21]. Three operating states were embedded in the reliability model, along with one state for failing in a safe mode and one state for failing in an unsafe mode. The system is thus in one of five states: fully operational, first failing-operational, second failing-operational, failed in a safe mode, or failed in an unsafe mode. To analyze the reliable behavior of the designed sequential fault-tolerant system using Markov chain models, it can be assumed that each sequential memory element obeys the exponential failure law with a constant failure rate λ [22]. The probability P(t + Δt) that a fault-tolerant digital sequential circuit will have failed by some future time (t + Δt) can then be written as:

P(t + Δt) = 1 − e^(−λ(t+Δt))    (2)

where λ is the failure rate and P(t + Δt) is the probability that the fault-tolerant digital sequential circuit will have failed by time (t + Δt).
The reliability can be computed from (3):

R(t) = 1 − P(t) = e^(−λt)    (3)

The two-dimensional state transition matrix of the Markov model was then constructed from the transitions in Figure 4. By algebraic manipulation, letting the time interval Δt decrease to zero, a system of differential equations for the state probabilities was produced and solved using the Laplace transform. System reliability R(t) is typically defined as the probability that a logic circuit operates without failure during the period [0, t]. In addition, reliability is considered an evaluation metric measuring whether the expected service is delivered to the customer [23], [24]. Equation (4) gives the reliability as the probability of being in one of the operational states:

R(t) = P1(t) + P2(t) + P3(t)    (4)

On the other hand, safety is an extended concept of reliability. The safety S(t) of a logic circuit is defined as the probability that the circuit either executes its expected function completely or transitions to a fail-safe mode during the period [0, t]. Hence, (5) gives the safety:

S(t) = R(t) + P4(t)    (5)

where P1(t)-P3(t) are the probabilities of the three operational states and P4(t) is the probability of the fail-safe state.
The Stratix IV FPGA fabric, which was assumed as the target realization platform, has a failure rate of 38.1 FIT, where FIT (failures in time) is a unit representing the number of failures that occur per 10^9 hours: λ_hour = FIT × 10^−9, so λ_hour = 38.1 × 10^−9 failures/hour. Altera's Stratix IV FPGA chip operates at a clock frequency of 50 MHz; thus, the mean time to repair (MTTR) for one clock cycle is 20 ns. Table 3 compares the probabilities of being in the different states of Figure 4 for different fault detection coverage values. Additionally, the WINSTEM SURE analysis program [25] was used to model the reconfigurable behavior of the proposed system. The SURE program is a reliability analysis simulation tool developed by the National Aeronautics and Space Administration (NASA) to calculate failure probabilities. Table 4 presents the reliability and safety at different fault detection coverage values, and Figure 5 plots fault detection coverage versus reliability.
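A small numeric sketch of the reliability arithmetic above follows; the duplex-with-coverage formula used here is a standard textbook model chosen only to illustrate how coverage C shifts reliability, not the paper's five-state SURE model, and the mission time is a hypothetical choice.

```python
# Sketch: convert the FIT rating to a failure rate and illustrate how fault
# detection coverage C affects reliability, using a standard duplex-with-
# coverage model (an illustrative stand-in for the paper's 5-state model).
import math

FIT = 38.1
lam = FIT * 1e-9                 # failures per hour (lambda = FIT * 1e-9)
t = 1e5                          # hypothetical mission time, hours

r = math.exp(-lam * t)           # single-element reliability R(t) = e^(-lambda*t)
for C in (0.8, 0.9, 0.99999):
    # duplex pair: both elements survive, or one fails and the fault is covered
    r_sys = r**2 + 2 * C * r * (1 - r)
    print(f"C = {C}: R_single = {r:.8f}, R_duplex = {r_sys:.8f}")
```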
CONCLUSION AND FUTURE WORK
In this paper, we presented the architectural design and reliability analysis of a novel fault-tolerant sequential logic circuit for safety-critical digital applications. The primary objective is to overcome the deficiencies and faults that can attack the operation of latches and D flip-flops embedded in safety-critical sequential circuits. The advantage of the approach is that it tolerates an unlimited number of intermittent and transient faults. We demonstrated experimentally that high levels of reliability are achieved by simulating fault injection campaigns on the output signals of the memory storage elements. The results prove that the proposed system achieves 0.9998 reliability and safety for a fault detection coverage of 0.8, and 0.99999998 reliability for a coverage of 0.99999. For future work, we plan to focus on mathematical verification concepts that could be utilized to validate the operational execution of the data monitoring models. Furthermore, the proposed circuit could be made to operate in critical environments that generate potential CCFs by adding a hybrid fault-tolerant mechanism with spare sequential components. Finally, generating hardware description language (HDL) code using the MathWorks Simulink-based HDL Coder and synthesizing the proposed circuit in real time is another direction for future work.
Figure 2. The proposed fault-tolerant sequential logic system
Figure 3. Timing diagram: (a) MU1 in normal operation with no fault injection and (b) MU1 at the first fault injection
Figure 4. Discrete-time Markov chain for the proposed fault-tolerant sequential logic system
Figure 5. Fault detection coverage versus reliability
Table 3. Probabilities for different states with different fault detection coverage values
Table 4. Reliability and safety with different fault detection coverage values | 2023-12-16T16:27:34.469Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "902a795388fbd757a85ccea1296dfe4b3b7570fd",
"oa_license": "CCBYSA",
"oa_url": "https://beei.org/index.php/EEI/article/download/5713/3554",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "dc8efc6f883568f6f440f4a028256d10fe9769d9",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": []
} |
17156320 | pes2o/s2orc | v3-fos-license | Role of Small RNAs in Trypanosomatid Infections
Trypanosomatid parasites survive and replicate in the host by using mechanisms that aim to establish a successful infection and ensure parasite survival. Evidence points to microRNAs as new players in the host-parasite interplay. MicroRNAs are small non-coding RNAs that control protein levels via post-transcriptional gene down-regulation, either within the cells where they were produced or in other cells via intercellular transfer. These microRNAs can be modulated in host cells during infection and are among the growing group of small regulatory RNAs, for which many classes have been described, including the transfer RNA-derived small RNAs. Parasites can either manipulate microRNAs to evade host-driven damage and/or transfer small RNAs to host cells. In this mini-review, we present evidence for the involvement of small RNAs, such as microRNAs, in infections by trypanosomatids that lack RNA interference. We highlight both microRNA profile alterations in host cells during those infections and the horizontal transfer of small RNAs and proteins from parasites to the host by membrane-derived extracellular vesicles, in a cell communication mechanism.
INTRODUCTION
Trypanosomatid parasites comprise the African trypanosomes (Trypanosoma brucei), the South American trypanosomes (Trypanosoma cruzi), and Leishmania, which profoundly affect mankind and substantially impact world public health (Coura and Viñas, 2010; Alvar et al., 2012). The diseases caused by these parasites predominantly affect the populations of developing regions of Africa, Asia, and the Americas; however, population movement creates a new epidemiological challenge, with worldwide spread (Coura and Viñas, 2010; Alvar et al., 2012). Through different mechanisms, these parasites establish a successful infection. Among these mechanisms, small RNAs emerge as new players in the host-parasite interplay.
Small non-coding RNAs play an essential regulatory role in complex biological systems without themselves being translated into protein (Aalto and Pasquinelli, 2012). Of these RNAs, the short RNA molecules (ranging from 20 to 30 nucleotides), such as microRNAs, small interfering RNAs (siRNAs), and Piwi-interacting RNAs (piRNAs), stand out. Through sequence complementarity, these RNAs guide the recognition of target genes within a ribonucleoprotein (RNP) complex and typically reduce the expression of a specific gene through the process of RNA interference (RNAi). Despite having the same mechanism of action, these classes of regulatory RNAs differ in their association with Argonaute (AGO) protein family members to form the RNP complex, in their biogenesis, in their gene regulation pathways, and in their biological functions (Ghildiyal and Zamore, 2009). Furthermore, the world of small non-coding RNAs is expanding, with new classes continuing to be discovered, even in organisms that were not previously thought to express small RNA-mediated pathways, such as T. cruzi and some Leishmania species (Ullu et al., 2004; Ghildiyal and Zamore, 2009; Garcia-Silva et al., 2010a,b; Lye et al., 2010).
RNA-mediated silencing is an evolutionarily conserved mechanism that may have evolved alongside parasite infection, as parasites developed strategies to interfere with host microRNA populations, thereby exploiting the RNAi pathway as a new means of reshaping their environment to evade host immune surveillance and establish a successful infection (Cerutti and Casas-Mollano, 2006; Hakimi and Cannella, 2011). Obviously, changes in microRNA profiles might also be a defense mechanism of the infected cell. Nevertheless, the alteration of host microRNA levels after parasitic infection has been demonstrated (Geraci et al., 2015; Linhares-Lacerda et al., 2015), with some data revealing the intricate connection between the parasite and the RNAi machinery of the host organism (Ghosh et al., 2013). Moreover, the identification of predictive microRNA signatures associated with each specific parasitic infection could aid in the development of tools for diagnosis, prognosis, monitoring therapy, and improving patient stratification (Manzano-román and Siles-lucas, 2012).
In this mini-review, we briefly discuss current knowledge about the involvement of small RNAs in host-parasite interactions for trypanosomatid parasites that lack the AGO and Dicer genes and, as a consequence, do not have functional RNAi machinery. These parasites include T. cruzi (the etiologic agent of Chagas disease) and Leishmania major and Leishmania donovani (which cause cutaneous leishmaniasis and visceral leishmaniasis, respectively). We focus on the microRNA profile alterations that occur in host cells due to infection with those parasites and on the trans-kingdom transfer of small RNAs and proteins from parasites to the host by membrane-derived extracellular vesicles (EVs), in a cell communication mechanism that may favor parasite survival.
MicroRNA PROFILE MODULATION DUE TO PARASITIC INFECTION
The elaborate relationship between parasites and their hosts serves to establish a successful parasite infection/infestation and promote survival, with parasites manipulating the host cellular machinery to avoid and regulate the host immune effector response (Manzano-román and Siles-lucas, 2012). In this context, gene expression modulation by microRNAs may be an ideal tool for parasites because microRNAs can function as master switches of many biological functions, fine-tuning protein production (Zheng et al., 2013). It is reasonable to propose that cellular infection will be counteracted by cellular microRNAs that target crucial host factors as a defense mechanism; however, parasites subvert microRNA-directed functions as a means of altering gene expression in host cells (Hakimi and Cannella, 2011).
MicroRNAs are Related to Cardiac Alterations and Thymic Atrophy in Chagas Disease
The alteration of host microRNA levels after T. cruzi infection has been demonstrated in a murine model (Linhares-Lacerda et al., 2015; Navarro et al., 2015) and in Chagas disease patients (Ferreira et al., 2014). Chagas disease is a neglected tropical illness that is endemic to Latin America (Coura and Borges-Pereira, 2012) and has an acute phase characterized by bloodstream-circulating parasites and tissue parasitism, in addition to an intense immune response and hormonal imbalance (Pérez et al., 2011). Immune effector responses control T. cruzi numbers in the blood, and individuals enter the chronic phase of the disease with low parasite levels in several tissues (de Meis et al., 2013). Moreover, chronic infection can persist undetected, but ∼30% of patients develop severe complications, such as abnormal heart rhythm, heart failure, and digestive problems (Clayton, 2010; World Health Organization, 2010a).
The hearts of mice with experimental acute T. cruzi infection present an intense inflammatory cell infiltrate with myocarditis, arrhythmia and parasite nests in addition to a modified microRNA expression profile. Upon infection, 113 of 641 microRNAs were dysregulated; moreover, some microRNAs correlated with the maximal heart rate-corrected QT interval, which is a cardiac alteration (Navarro et al., 2015). Resembling the experimental model, chronic Chagas disease patients who develop cardiomyopathy can also present alterations in heart microRNAs and heart arrhythmia, among other cardiac complications. miR-1, miR-133a-2, miR-133b, miR-208a, and miR-208b were significantly downregulated in Chagas disease patients in comparison to uninfected patients (heart transplant donors). Moreover, in a comparison between two cardiomyopathy groups (chronic Chagas disease patients and dilated cardiomyopathy patients), miR-1, miR-133a-2, and miR-208b expression was reduced in infected patients (Ferreira et al., 2014). These microRNAs are highly enriched in the heart, where they regulate heart development and myocyte differentiation (Lagos-Quintana et al., 2002;Chen et al., 2006). In addition, atypical expression of these microRNAs has been linked to cardiovascular diseases (Carè et al., 2007;Ikeda et al., 2013). Therefore, variations of host microRNAs in experimental models of acute Chagas disease could shed light on the mechanism that triggers heart clinical alterations, with possible relevance for chronic Chagas disease patients with cardiomyopathy, as the downregulation of miR-208 was detected in both patients and mice infected by T. cruzi (Ferreira et al., 2014;Navarro et al., 2015). It is noteworthy that cardiac damage releases miR-208 and other factors into the bloodstream, and the levels of these factors exhibit distinctive patterns that correlate with different cardiovascular diseases, showing great potential for use as biomarkers for cardiac illness (Gupta et al., 2010). However, no data are available concerning circulating microRNAs in chronic Chagas disease patients who develop cardiomyopathy.
FIGURE 1 | (A) ... (red) and others undetectable (gray). The regulated microRNAs were used to identify biological processes (depicted within the square) that were over-represented among the list of microRNA target genes. (B) The examination of mature microRNA expression patterns in human macrophages revealed unique microRNA profiles in response to the Leishmania major and L. donovani parasites. In addition, biological pathway enrichment was performed for the dysregulated microRNAs to identify signaling pathways (depicted within the square) that might be involved.

In addition to heart manifestations, mice with acute T. cruzi infection also present severe thymic atrophy, primarily due to the apoptosis of CD4+CD8+ double-positive immature T cells, and also due to migratory abnormalities that release potential autoreactive T cells to secondary lymphoid organs, which may play a role in the chronic phase of the disease (Savino, 2006). The development of increased T cell migration may be a consequence of signals delivered by thymic epithelial cells (TECs) that enhance the deposition of extracellular matrix proteins upon infection (Cotta-de-Almeida et al., 2003; Pérez et al., 2012). These signals might be under the control of microRNAs that are modulated in TECs from infected mice before the induction of thymic atrophy. Interestingly, microRNAs were primarily upregulated (29 out of 85 microRNAs), whether the TECs sorted from the thymus exhibited a cortical or a medullary phenotype. Furthermore, Gene Ontology (GO) enrichment analysis of microRNA targets was used to identify biological processes that were over-represented among the list of target genes (Figure 1A). Indeed, the theoretical relationships of these microRNAs with their putative RNA targets revealed transforming growth factor-β (TGF-β) as a molecular node of infection, as the gene encoding its receptor (the Tgfbr1 gene) appears in the middle of our microRNA network, with 8 different microRNAs (let-7a, let-7g, miR-101a, miR-148b, miR-193, miR-27a, miR-27b, and miR-30b) regulating this gene (Linhares-Lacerda et al., 2015).
Taken together, these reports highlight the importance of microRNA alterations in Chagas disease. Furthermore, additional studies are needed to define microRNA biomarkers of T. cruzi infection. In this context, it is reasonable to hypothesize that miR-208 is a potential biomarker for T. cruzi infection because this microRNA was downregulated in both the human and mouse heart and undetectable in TECs from T. cruzi-infected samples, revealing a possible common regulation pattern in response to T. cruzi infection.
Macrophages Change their microRNA Profiles in Response to Leishmania
Sophisticated strategies for surviving and establishing a successful infection, such as antigen presentation inhibition, were developed by Leishmania parasites, which cause cutaneous or visceral diseases and are among the neglected diseases (World Health Organization, 2010b). Leishmania is an obligate intracellular pathogen in mammalian hosts and primarily infects macrophages, where it avoids anti-parasitic responses and subverts host innate immunity. The parasite modifies both microRNAs and mRNAs from the host, leading to altered expression of lipid metabolic genes, among other genes, resulting in reduced cholesterol synthesis, the disruption of membrane lipid rafts and the inhibition of antigen presentation to T cells (Ghosh et al., 2013;Chakraborty et al., 2015).
Upon infection with L. major, human primary macrophages changed the levels of 64 of 365 microRNAs, as assessed by a quantitative PCR time course. These dysregulated microRNAs were predicted to target several transcripts with critical cellular functions, such as cellular movement, secretion, communication, enzyme production, activity in the extracellular space, signal transduction, and gene expression naturally induced by an abiotic stimulus, all of which were evaluated via a GO enrichment analysis followed by a pathway analysis (Lemaire et al., 2013). Additional examination of mature microRNA expression patterns in L. major- and L. donovani-infected human primary dendritic cells and macrophages using next-generation sequencing revealed unique mature microRNA expression profiles in response to both parasite species in different human host cell types. Indeed, L. donovani-infected cells exhibited higher expression of the identified microRNAs than L. major-infected cells. Biological pathway enrichment was again performed with the predicted targets of the dysregulated microRNAs and identified the mitogen-activated protein kinase (MAPK) signaling pathway, among others (Figure 1B), regardless of the cell type or the infecting Leishmania species (Geraci et al., 2015).
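As a minimal sketch of the over-representation statistic that underlies such GO and pathway enrichment analyses, the snippet below applies a one-sided hypergeometric test; all gene counts are hypothetical placeholders, and the cited studies used dedicated enrichment tools rather than this bare calculation.

```python
# Sketch of a hypergeometric over-representation test, the statistic behind
# most GO/pathway enrichment analyses; all counts are hypothetical.
from scipy.stats import hypergeom

N = 20000   # background: all annotated genes
K = 250     # genes annotated to the pathway (e.g., MAPK signaling)
n = 800     # predicted targets of the dysregulated microRNAs
k = 25      # predicted targets that fall in the pathway

# P(X >= k) under random sampling of n genes from the background
p = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment P = {p:.3g}")
```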
In general, those studies revealed the remarkable capacity of Leishmania to modify microRNA expression in the host; nevertheless, the biological significance of the dysregulated microRNAs requires further investigation. For this purpose, the use of microRNA mimics and inhibitors is an excellent tool. For example, the transfection of the mmu-miR-210-5p inhibitor into L. major-infected murine macrophages significantly decreased the infection rates of these cells, suggesting a role of miR-210 in anti-parasitic activity (Frank et al., 2015). Moreover, the RNA targets obtained via in silico prediction require experimental evidence with further functional analysis to determine the role of each microRNA/mRNA-target in the specific pathways in which it participates. In the future, the knowledge gained from those investigations will assist in the discovery of new targets for diagnostics or therapeutic approaches.
TRANS-KINGDOM TRANSFER OF EXTRACELLULAR VESICLES
Cells exchange information with their environments, influencing the behavior of other cells and of themselves. Cells can communicate through a variety of chemical, mechanical, and biological signals that trigger cell signaling and allow cells to process information from the outside to support survival. Intercellular communication through biological signals involves the transfer of many different molecules, such as hormones, cytokines, and small RNAs, primarily via membrane vesicle trafficking (Barteneva et al., 2013). Taking advantage of cellular communication through the transfer of membrane-derived extracellular vesicle (EV) cargo to host cells, parasites manipulate host functions to establish a successful infection (Marcilla et al., 2014). In this section, we review the trans-kingdom transfer of EVs from T. cruzi and L. donovani to host cells.
Trypanosomatid Parasites Deliver Small RNAs through Extracellular Vesicles to Host Cells
EVs are key players in cell-to-cell communication. Like other pathogens, T. cruzi releases proteins associated with vesicles into the extracellular milieu to enable pathogen survival and replication within the host (Marcilla et al., 2014). The protein content of T. cruzi vesicles was defined in a proteomic study that, among other classes, identified a relatively high proportion of RNA-binding proteins, suggesting a possible role in intercellular communication and gene expression regulation (Bayer-Santos et al., 2013). On the other hand, small RNA transcriptome analysis using unbiased, genome-wide deep sequencing indicated an abundance of small RNAs derived from non-coding RNAs, of which tRNA-derived small RNAs (tsRNAs) derived from the 3′ end, with a median length of 38 nt, were the most frequently detected type. Moreover, a comparison among the tRNA isoacceptors from which tsRNAs were derived revealed that tsRNAs are differentially expressed and may be actively produced rather than being random degradation products of tRNA turnover (Franzén et al., 2011). Quite strikingly, T. cruzi lacks functional RNAi machinery but does express a unique open reading frame for an AGO/PIWI protein, with the conserved domain architecture of a canonical AGO, in all stages of its life cycle (Garcia-Silva et al., 2010b). Interestingly, tsRNA colocalizes with this distinctive trypanosomatid AGO protein (TcPIWI-tryp) in EVs that are transferred to surrounding parasites and to susceptible mammalian host cells, where it changes gene expression profiles (Garcia-Silva et al., 2010a, 2014a; Figure 2).
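A minimal sketch of one early step in such a small-RNA sequencing analysis is shown below: profiling read lengths to spot a tsRNA-sized (~38-nt) population. The FASTQ file name is a hypothetical placeholder, and real pipelines add adapter trimming and mapping to tRNA loci before any such call is made.

```python
# Sketch: read-length histogram of a small-RNA library, a first-pass way to
# see a tsRNA-sized population (~38 nt); the input file name is hypothetical.
from collections import Counter
from Bio import SeqIO

lengths = Counter(len(rec.seq)
                  for rec in SeqIO.parse("small_rna_reads.fastq", "fastq"))

for size in sorted(lengths):
    bar = "#" * (lengths[size] * 50 // max(lengths.values()))
    print(f"{size:3d} nt  {lengths[size]:6d}  {bar}")
```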
Recently, evidence demonstrated that Leishmania parasites release exosomes containing RNA sequences and that these exosomes and their cargo can be internalized by host cells. This is true for two different species of Leishmania, namely L. donovani and L. braziliensis, suggesting that the packing of specific RNA sequences into exosomes may be a conserved phenomenon in Leishmania (Lambertz et al., 2015). As in T. cruzi, the authors found an abundance of tRNA fragments originating from a small subset of tRNA isoacceptors, the tsRNAs, in both species; moreover, in L. braziliensis, which is an RNAi-competent organism, they also found sequences derived from siRNA-coding regions in both sense and antisense orientations, suggesting that they occur as double-stranded RNAs in exosomes (Lambertz et al., 2015) (Figure 2). Despite these findings, the in vivo biological effect of EVs carrying tsRNAs remains obscure, and more studies are required to assess the molecular mechanisms associated with these non-coding RNAs.

FIGURE 2 | Transfer of parasitic extracellular vesicle cargo to host cells. In this figure, we used a generic host cell to exemplify three different molecules delivered by parasitic extracellular vesicles. In the upper left part of the diagram, Trypanosoma cruzi releases EVs containing TS (trans-sialidase, blue circles) that can trigger gene modulation through the MAPK signaling pathway, which may be regulated by microRNAs. In the upper right part of the diagram, Leishmania donovani releases GP63 vesicles (orange circles) that cleave DICER (red), impairing microRNA maturation. Finally, in the bottom part of the diagram, EVs containing tsRNAs (tRNA-derived small RNAs, in green) from Leishmania and from stressed Trypanosoma cruzi parasites modulate gene expression and might form RNP complexes (green).
T. cruzi also releases EVs containing members of the trans-sialidase glycoprotein superfamily. One member of this family led to the aggravation of experimental Chagas disease, with a severe inflammatory reaction of the heart and an increased number of amastigote nests in animals that received these EVs prior to T. cruzi infection (Trocoli Torrecilhas et al., 2009). The T. cruzi trans-sialidase transfers host sialic acid to parasite surface glycoconjugates, a process that supports host-cell recognition, infectivity, and parasite survival. Indeed, this trans-sialidase activity can remodel parasite glycomolecules, altering host immune responses against the parasite and playing a role as a virulence factor (Freire-de-Lima et al., 2012).
The presence of the trans-sialidase was confirmed in the peripheral blood of chronic Chagas disease patients, where the antibody titre against the trans-sialidase increased with the frequency of peripheral double-positive immature T cells, potentially contributing to the clinical manifestations observed in the chronic phase of the disease. Likewise, the thymus of T. cruzi-infected mice presents trans-sialidase depots near the parasite nests, which play a role in thymic atrophy and the premature release of double-positive CD4+CD8+ immature T cells. Consistent with this, intrathymic trans-sialidase injection increased the splenic double-positive immature T cell population and activated the MAPK/JNK signaling pathway in immature T cells (Nardy et al., 2013). Despite great advances toward understanding the effects of the T. cruzi trans-sialidase (Alves and Colli, 2010), the components involved in this signaling process remain a mystery. The process appears to be cell type-dependent, with MAPK/ERK-1/2 induction in naive splenic CD4 T cells (Todeschini et al., 2015), MAPK/JNK induction in immature T cells (Freire-de-Lima et al., 2012) and NF-kB induction in endothelial cells (Dias et al., 2008), but no microRNAs have been described to date. We suggest miR-199a as a good candidate for future studies of the T. cruzi trans-sialidase pathway. miR-199a regulates the PI3K/Akt and ERK/MAPK signaling pathways (Santhakumar et al., 2010) and targets a sialyltransferase (ST6 β-galactosamide α-2,6-sialyltransferase 1, ST6GAL1) (Minami et al., 2013). In addition, miR-199a is upregulated during human hypertrophy-related heart failure (van Rooij et al., 2006); however, we do not know how the expression pattern of miR-199a changes in the hearts of Chagas disease patients (Figure 2).
Taken together, the EVs from T. cruzi could represent an additional strategy for modulating host cells via pathogen-to-host communication through the delivery of tsRNAs and virulence factors. However, the involvement of small RNAs is a recent discovery, and more studies are needed to elucidate this issue.
Exosome Cargo Impairs microRNA Maturation during Leishmania donovani Infection
During intercellular signaling and communication, EVs are used as a mechanism to actively regulate protein release from the cell. In Leishmania, changes in parasite culture temperature (26/37 °C) lead to protein-specific enrichment in vesicles, affecting the cargo of the released exosomes. This exosome-based protein secretion mechanism delivers cargo to macrophages and triggers biological effects, such as the induction of interleukin-8 secretion (Silverman et al., 2010). Thus, EVs serve as an excellent pathogen-to-host communication process that could deliver effector molecules, such as proteins, and may also release RNAs into the host cytosol.
In this context, the delivery of the Leishmania surface protein metalloprotease GP63, a membrane-bound glycosylphosphatidylinositol (GPI)-anchored glycoprotein and a known virulence factor (Brittingham et al., 1995), participates in the parasite's strategy to evade immune surveillance. L. donovani extracts membrane cholesterol from macrophages, preventing T cell stimulation and causing hypocholesterolaemia; cholesterol appears to protect against this infection, as serum cholesterol levels correlate inversely with parasite load in infected mice. In fact, the delivery of exosomes containing GP63 produced by Kupffer cell-resident parasites to hepatocytes impairs miR-122 activity by cleaving DICER1, which is a primary target of GP63 (Ghosh et al., 2012, 2013; Figure 2). DICER1 processes pre-microRNAs into mature microRNAs and transfers those microRNAs to AGO, forming the RNP complex. In the presence of GP63, hepatocytes accumulated pre-miR-122 and failed to form the RNP-miR-122 complex, possibly leading to the downregulation of cholesterol synthesis, because miR-122 is responsible for lipid metabolism and liver homeostasis (Ghosh et al., 2013). Interestingly, the restoration of Dicer1 expression in parasite-infected livers increased miR-122 expression and restored serum cholesterol levels, with a drastic reduction in liver parasite load. Therefore, this process is a sophisticated example of how parasites evolved strategies to combat regulatory RNA functions in host cells, leading to an important metabolic change that promotes pathogenesis.
CONCLUDING REMARKS
Although few publications are available on this topic, current knowledge emphasizes the alteration of microRNA profiles during infection and EV cargo delivery during host interactions with the parasites T. cruzi, L. major, and L. donovani, which lack functional RNAi machinery. In this mini review, we highlighted some interesting findings in these fields and raised questions for further investigation, such as the status of miR-208 as a potential biomarker for T. cruzi infection, the presence of tsRNA in Leishmania EVs and the involvement of microRNAs in trans-sialidase-triggered pathways during T. cruzi infection.
The main open question is to determine the role of each microRNA/mRNA target in specific pathways through functional analysis and to investigate the importance of these factors in pathogenesis. The knowledge acquired from these future studies will be useful for aiding the discovery of new targets for diagnostic or therapeutic approaches.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct and intellectual contribution to the work and approved it for publication. | 2016-05-15T05:10:26.621Z | 2016-03-30T00:00:00.000 | {
"year": 2016,
"sha1": "e575de214a854dc9c0ec38b79a3b74531c86d3e1",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2016.00367/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e575de214a854dc9c0ec38b79a3b74531c86d3e1",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
12628465 | pes2o/s2orc | v3-fos-license | Protective Roles of Sodium Selenite against Aflatoxin B1-Induced Apoptosis of Jejunum in Broilers
The effects of aflatoxin B1 (AFB1) exposure and sodium selenite supplementation on cell apoptosis of jejunum in broilers were studied. A total of 240 one-day-old male AA broilers were randomly assigned to four dietary treatments containing 0 mg/kg of AFB1 (control), 0.3 mg/kg AFB1 (AFB1), 0.4 mg/kg supplement Se (+ Se) and 0.3 mg/kg AFB1 + 0.4 mg/kg supplement Se (AFB1 + Se), respectively. Compared with the control broilers, the number of apoptotic cells and the expression of Bax and Caspase-3 mRNA were significantly increased, while the expression of Bcl-2 mRNA and the Bcl-2/Bax ratio were significantly decreased in AFB1 broilers. The number of apoptotic cells and the expression of Caspase-3 mRNA in AFB1 + Se broilers were significantly higher than those in the control broilers, but significantly lower than those in AFB1 broilers. There were no significant changes in the expression of Bax mRNA between AFB1 + Se and control broilers; the expression of Bcl-2 mRNA and the Bcl-2/Bax ratio in AFB1 + Se broilers were significantly lower than those in the control broilers, but significantly higher than those in AFB1 broilers. In conclusion, 0.3 mg/kg AFB1 in the diet can increase cell apoptosis, decrease Bcl-2 mRNA expression, and increase Bax and Caspase-3 mRNA expression in the broiler jejunum. However, supplementation of dietary sodium selenite at the concentration of 0.4 mg/kg Se may ameliorate AFB1-induced apoptosis by increasing Bcl-2 mRNA expression and decreasing Bax and Caspase-3 mRNA expression.
Introduction
Aflatoxin B1 (AFB1) is a well-known mycotoxin produced by different strains of Aspergillus flavus and Aspergillus parasiticus. In humans and various animal species, it has been reported to be a potent hepatotoxic and hepatocarcinogenic agent [1,2]. AFB1 is readily transported across the plasma membrane and interacts with nucleic acids and proteins, altering various cellular activities [3]. Previous research has shown that poultry are extremely sensitive to the toxic and carcinogenic action of AFB1, resulting in millions of dollars in annual losses to producers due to reduced growth rates, reduced egg production, increased susceptibility to disease, and other adverse effects [4][5][6][7][8][9].
Apoptosis is a specialized process of cell death that is part of the normal development of organs and tissue maintenance, but it may also occur as a response to various environmental stimuli, indicating toxicity. Early research has shown that AFB1 can act as both a direct and an indirect initiator, as well as a promoter, of the apoptotic process [10,11]. Several studies have indicated that AFB1 induces apoptosis in different cell types, such as hepatocytes [12], bone marrow cells [13], and bronchial epithelial cells [14]. Moreover, Chen et al. [15] reported that 0.3 mg/kg AFB1 in the broilers' diet could increase the number of apoptotic thymocytes through the up-regulation of Bax and Caspase-3 expression and the down-regulation of Bcl-2 expression. Similarly, Wang et al. [16] demonstrated that AFB1 could increase the percentage of apoptotic splenocytes in broilers, which was closely related to oxidative stress. However, the effects of AFB1 on apoptosis in the jejunum have rarely been reported. The gastrointestinal tract is the main site where conversion and absorption of food components take place, and the jejunum, as part of the small intestine, is a major component of the gastrointestinal tract. Epithelial cells in the small intestine turn over rapidly, and because maintaining this balance is essential, apoptosis is crucial for the maintenance of normal morphology and function [17][18][19]. Since poultry are extremely sensitive to the toxic and carcinogenic action of AFB1, studies on AFB1-related apoptosis in the jejunum of broilers are very important.
As an important micronutrient for humans and animals, selenium (Se) plays a vital role in biological systems, with chemopreventive [20], antioxidant, detoxification [21] and anticancer effects [22], as well as effects on both the innate and acquired immune systems [23,24]. Furthermore, Se plays a key role in cell apoptosis [22]. At nutritional doses, Se is an essential component of selenocysteine (SeCys) in selenoproteins, and it promotes cell cycle progression and prevents cell death [22]. Previous studies have shown that Se could counteract the adverse effects of AFB1 in poultry [4,15,16,24,25]. For example, Se may ameliorate AFB1-induced lesions of the thymus and accordingly improve the impaired cellular immune function in broilers [15]. Similarly, Se may exhibit protective effects against AFB1-induced splenic toxicity by inhibiting oxidative stress and excessive apoptosis [16]. Recently, our study demonstrated that supplementation of dietary sodium selenite at the concentration of 0.4 mg/kg Se protected the jejunum from the developmental retardation, decreased proliferation, and G2/M phase arrest caused by AFB1 [26]. However, the effects of Se against AFB1-induced jejunal cell apoptosis have not yet been reported. In the present research, experiments were conducted to examine the effects of AFB1 exposure and sodium selenite supplementation on cell apoptosis in the broiler jejunum by TUNEL assay and quantitative real-time PCR.
Animals and Diets
Two hundred forty 1-day-old healthy male AA broilers were obtained from a commercial rearing farm (Wenjiang poultry farm, Sichuan Province). Chickens were randomly assigned to four dietary treatments containing 0 mg/kg of AFB1 (control), 0.3 mg/kg AFB1 (AFB1), 0.4 mg/kg supplement Se (+ Se) and 0.3 mg/kg AFB1 + 0.4 mg/kg supplement Se (AFB1 + Se), respectively. Our earlier studies demonstrated that 0.3 mg/kg AFB1 in the diet had obvious adverse effects on broilers, and that an appropriate level of Se supplied in the diet (0.4 mg/kg) could provide optimal protective effects against AFB1-induced toxicity in broilers [15,16]. Based on this information, an appropriate toxin concentration (0.3 mg/kg AFB1) and dietary Se level (0.4 mg/kg) were chosen. Feed-grade sodium selenite (1%) was mixed into the control diet to formulate the + Se and AFB1 + Se diets containing 0.4 mg/kg Se supplement by a stepwise dilution method. AFB1 was obtained from Pribolab Pte. Ltd (Singapore). The AFB1-contaminated diets were made up according to the method described by Kaoud [27]. Briefly, 3 mg AFB1 was completely dissolved in 30 mL methanol, and this 30 mL mixture was then mixed into a 10 kg corn-soybean basal diet to formulate the AFB1 and AFB1 + Se diets, respectively. An equivalent amount of methanol was mixed into the corn-soybean basal diet to produce the control diet. The methanol in the diets was then evaporated at 98 °F (37 °C) (the concentration of dietary AFB1 was not measured in this experiment, but any possible background contamination can be assumed to have been evenly distributed among the experimental groups throughout the trial, because the same lot of basal diet was used for formulating all experimental diets). The content of Se (0.332 mg/kg) in the control diet was analyzed by hydride-generation atomic absorption spectroscopy. Broilers were housed in cages with electrically heated units and were provided with water as well as the aforementioned diets ad libitum for 21 days. The basal diets were formulated according to National Research Council (NRC, 1994) and Chinese Feeding Standard of Chicken (NY/T33-2004) recommendations to meet the nutrient requirements of broilers from 1 to 21 days. The composition of the basal diets is presented in Table 1. All procedures of the experiment were performed in compliance with the laws and guidelines of the Sichuan Agricultural University Animal Care and Use Committee.
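A quick consistency check on the mixing arithmetic described above: dissolving 3 mg of AFB1 into a 10 kg basal diet gives 3 mg / 10 kg = 0.3 mg/kg, which matches the intended dietary AFB1 concentration.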
Clinical Signs and Body Weight
Clinical symptoms were observed each day. At 7, 14 and 21 days of age, the body weight of the chickens in each group was measured.
TUNEL Immunohistochemistry
At the end of 7, 14, and 21 days, six chickens in each treatment were euthanized, and the jejunum (the midpoint between the bile duct entry and Meckel's diverticulum) was immediately fixed in 4% paraformaldehyde. After fixation for 24 h, tissues were dehydrated, paraffin-embedded, and sectioned into 5 μm slices. Sections were stained with the TUNEL immunohistochemistry assay, which was performed using an apoptosis detection kit (QIA33, Merck, Darmstadt, Germany) according to the manufacturer's instructions, as described by Peng et al. [28]. The number of TUNEL-positive cells was evaluated in the apical region of villi using Image-Pro Plus 5.1 (Media Cybernetics, Silver Spring, MD, USA) image analysis software. For each sample, five random fields of 0.064 mm² were quantified (corresponding approximately to five fields at ×400 magnification). Results were expressed as the average number of TUNEL-positive cells per 0.064 mm² area.
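The per-sample bookkeeping implied by this scoring (five 0.064 mm² fields averaged per sample, then mean ± SE across n = 6 birds per treatment) can be sketched as follows; the counts are hypothetical, and the field-level counting itself was done in Image-Pro Plus.

```python
import numpy as np

# Hypothetical TUNEL-positive cell counts for one sample:
# five random 0.064 mm^2 fields (~x400 magnification) per section.
field_counts = np.array([12, 9, 14, 11, 10])

mean_per_field = field_counts.mean()  # cells per 0.064 mm^2 area
print(f"TUNEL-positive cells per 0.064 mm^2: {mean_per_field:.1f}")

# With n = 6 birds per treatment, results are reported as mean +/- SE.
bird_means = np.array([11.2, 9.8, 13.0, 10.4, 12.1, 11.5])  # hypothetical
se = bird_means.std(ddof=1) / np.sqrt(len(bird_means))
print(f"Treatment mean +/- SE: {bird_means.mean():.1f} +/- {se:.2f}")
```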
Quantitative Real-Time PCR (qRT-PCR)
The quantitative real-time PCR (qRT-PCR) assay was carried out as reported by Chen et al. [15]. Briefly, the jejunal mucosae from six chickens in each treatment at 7, 14, and 21 days of the experiment were stored in liquid nitrogen. Under liquid nitrogen, the jejunal mucosae were crushed with a pestle until powdery. Total RNA was extracted from the powdered jejunal mucosae using RNAiso Plus (9108/9109, Takara, Otsu, Japan). The mRNA was then reverse transcribed into cDNA using the PrimeScript™ RT reagent Kit with gDNA Eraser (RR047A, Takara, Otsu, Japan). The cDNA was used as a template for quantitative real-time PCR analysis.
For qRT-PCR reactions, 25 μL mixtures were prepared using SYBR® Premix Ex Taq™ II (DRR820A, Takara, Otsu, Japan), containing 12.5 μL Tli RNaseH Plus, 1.0 μL of forward and 1.0 μL of reverse primer, 8.5 μL RNase-free water and 2 μL cDNA. Reaction conditions were set to 3 min at 95 °C (first segment, one cycle), 10 s at 95 °C and 30 s at the Tm of the specific primer pair (second segment, 44 cycles), followed by 10 s at 95 °C and 10 s at 72 °C (dissociation curve segment) using a Thermal Cycler (C1000, BIO-RAD, CA, USA). The mRNA expression of Bax, Bcl-2 and Caspase-3 was analyzed, and β-actin was used as an internal control gene. Primer sequences were obtained from GenBank at NCBI. Primers were designed with Primer 5 and synthesized by BGI Tech (Shenzhen, China). The oligonucleotides used as primers in the qRT-PCR analysis of Bax, Bcl-2, Caspase-3 and β-actin were determined according to the references [29][30][31]. The responses (mRNA amounts) of the control broilers were taken as reference values for between-treatment comparisons within the same day. The results were analyzed with the relative quantification calculation method of reference [32].
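Reference [32] specifies the relative quantification calculation; assuming it is the commonly used 2^−ΔΔCt method (with β-actin as the internal control and the control group as calibrator), the computation can be sketched as below. All Ct values here are hypothetical.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt relative expression: a target gene normalized to a
    reference gene (e.g., beta-actin) and to the control (calibrator) group."""
    d_ct_treated = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # same for the calibrator group
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for illustration only (not from the study):
fold_change = relative_expression(
    ct_target=24.1,       # e.g., Bax in an AFB1 sample
    ct_ref=17.3,          # beta-actin in the same sample
    ct_target_ctrl=25.6,  # Bax, mean of the control group
    ct_ref_ctrl=17.4,     # beta-actin, mean of the control group
)
print(f"Relative Bax expression vs. control: {fold_change:.2f}")
```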
Statistical Analysis
The results are shown as means ± standard error (M ± SE). Statistical analyses were performed using one-way analysis of variance, and Dunnett's test was employed for multiple comparisons against the control. A value of p < 0.05 was considered significant.
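As an illustration of this pipeline (one-way ANOVA followed by Dunnett's comparisons against the control), the following Python sketch uses placeholder data rather than the study's measurements; scipy.stats.dunnett requires SciPy ≥ 1.11.

```python
import numpy as np
from scipy import stats

# Placeholder data: one array of observations per treatment group (n = 6 each).
control = np.array([10.1, 9.7, 10.4, 9.9, 10.2, 10.0])
afb1    = np.array([14.8, 15.3, 14.1, 15.9, 14.6, 15.1])
se_grp  = np.array([9.8, 10.3, 9.6, 10.1, 9.9, 10.2])
afb1_se = np.array([12.0, 12.5, 11.7, 12.3, 11.9, 12.4])

# One-way ANOVA across the four treatments.
f_stat, p_anova = stats.f_oneway(control, afb1, se_grp, afb1_se)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each treatment compared against the control.
res = stats.dunnett(afb1, se_grp, afb1_se, control=control)
for name, p in zip(["AFB1", "+Se", "AFB1+Se"], res.pvalue):
    print(f"{name} vs control: p = {p:.4g}")
```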
Clinical Signs and Body Weight
There were no evident clinical symptoms in any of the four treatments. The body weight of broilers showed no significant differences between treatments at 1, 7, and 14 days of age (p > 0.05) (Table 2). At 21 days of age, the body weight in AFB1 broilers was significantly lower than that in control broilers (p < 0.05), but no significant differences occurred among the control, + Se and AFB1 + Se broilers (p > 0.05) (Table 2).
TUNEL Immunohistochemistry
In all four treatments, the nuclei of TUNEL-positive cells were stained brown. TUNEL-positive cells were mainly distributed in the apical region of villi (Figure 1), with a few scattered positive cells in the middle and basal regions of villi and in the crypt (Figure 2). Compared with control broilers on days 7, 14 and 21, the number of TUNEL-positive cells in AFB1 broilers was significantly increased (p < 0.01); however, the number of TUNEL-positive cells in + Se broilers showed no significant changes (p > 0.05). In addition, the number of TUNEL-positive cells in AFB1 + Se broilers was significantly higher than that in control broilers (p < 0.01), but significantly lower (p < 0.01) than that in AFB1 broilers during the experiment. The number of TUNEL-positive cells in the apical region of villi is shown in Table 3.
Quantitative Real-Time PCR (qRT-PCR)
The mRNA expression of Bax, Bcl-2 and Caspase-3 and the Bcl-2/Bax ratio in jejunal mucosa are shown in Table 4. The expression of Bax and Caspase-3 mRNA in AFB1 broilers was significantly higher than that in control broilers at 7, 14 and 21 days of age (p < 0.01). There were no significant changes in the expression of Bax mRNA between AFB1 + Se broilers and control broilers at 7, 14 and 21 days of age. In addition, the expression of Caspase-3 mRNA in AFB1 + Se broilers was significantly higher (p < 0.01) than that in control broilers, but significantly lower (p < 0.05) than that in AFB1 broilers during the experiment, except at 14 days of age (Table 4). Compared with control broilers, the expression of Bcl-2 mRNA and the Bcl-2/Bax ratio in AFB1 broilers significantly decreased on days 7, 14 and 21 (p < 0.01). The expression of Bcl-2 mRNA and the Bcl-2/Bax ratio in AFB1 + Se broilers were significantly lower than those in control broilers (p < 0.01 or p < 0.05), but significantly higher than those in AFB1 broilers (p < 0.01 or p < 0.05) (Table 4). Note: data are presented as the means ± standard error (n = 6) and are expressed as relative responses with respect to the control.
Discussion
The clinical signs and body weights observed in this study show that 0.4 mg/kg supplemental dietary Se is safe for chickens, in agreement with the report of Cai et al. [33]. AFB1 (0.3 mg/kg) did not induce evident clinical symptoms, but a significantly decreased body weight was observed in broilers at 21 days of age. This suggests that 0.3 mg/kg AFB1 may retard broiler growth. As reported in a review paper, broiler performance may be affected when the concentration of dietary AFB1 is about 0.5 mg/kg [34].
Tissue homeostasis depends on both cell proliferation and cell death. The small intestinal epithelium is a rapidly renewing tissue, in which cells are lost from the villus into the gut lumen and are generally replaced at an equal rate by the proliferation of cells in the crypts [18]. Early research indicates that apoptosis is responsible for controlling the majority of intestinal epithelial cell loss and that apoptosis occurs predominantly in the villus tip cells [18], which is supported by the following observations: (1) high levels of the pro-apoptotic protein Bax have been detected in these terminally differentiated cells [18]; (2) the expression of a possible apoptotic endonuclease (DNase I) also increases towards the villus tip [35]; (3) possible increased expression of transforming growth factor-β and evidence for reduced adhesion may also lend support to this hypothesis [36][37][38]; and (4) the presence of large numbers of macrophages and lymphocytes at the villus tip is consistent with apoptosis of terminally differentiated cells [39]. The TUNEL assay can identify DNA fragmentation and reveal the topographic distribution of apoptotic cells [15,40,41]. Similar to previous reports on the intestine of other animals [42,43], in the present study TUNEL-positive cells in all treatments were predominantly distributed in the apical region of the villus. Therefore, TUNEL-positive cells in the villi tips were counted as the number of apoptotic cells in the jejunal mucosa.
Several studies have indicated that AFB1 is able to induce apoptosis in hepatocytes, lung and bone marrow cells, bronchial epithelial cells, thymocytes and splenocytes [12][13][14][15][16]. Our results show that the number of TUNEL-positive cells was increased in AFB1 broilers when compared with control broilers, indicating that AFB1 can induce excessive apoptosis in the broiler jejunum. Our earlier study revealed that 0.3 mg/kg AFB1 in the diet can induce pathological lesions (shedding) and reduce cellular proliferation in the broiler jejunum [26]. Epithelial cells of the intestine undergo permanent renewal that includes cell proliferation, migration, differentiation, apoptosis, and cell shedding into the intestinal lumen [18,44], and homeostasis of these activities is essential for the structural and functional properties of the intestine [45,46]. Decreased proliferation and/or increased cell death may reduce cell number, whereas increased proliferation and/or decreased death may increase cell number [18]. Therefore, the increased apoptosis observed in this study, together with the decreased proliferation [26] and pathological shedding at the villus tips induced by AFB1 in the broiler jejunum [26], may lead to a decrease in enterocyte numbers, which may be followed by a decline in the function of this organ.
Previous research has demonstrated that AFB1 can lead to cellular apoptosis via mitochondrial or cell death receptor pathways [47][48][49][50]. In the present study, the mRNA expression of Bax, Bcl-2 and Caspase-3 was determined to evaluate whether AFB1-induced apoptosis of the jejunum is related to the mitochondrial pathway. The results showed that the mRNA expression of Bax and Caspase-3 significantly increased, while the mRNA expression of Bcl-2 and the Bcl-2/Bax ratio significantly decreased, in AFB1 broilers compared with control broilers. These results suggest that the excessive apoptosis of the jejunum induced by AFB1 was initiated through the mitochondrial signaling pathway, in accord with previous research in human bronchial epithelial cells [14] and broiler thymocytes [15]. Future studies should address whether AFB1-induced apoptosis in jejunal cells is also triggered by a cell death receptor pathway. Previous studies have revealed that AFB1 may induce oxidative stress through the formation of reactive oxygen species (ROS) and decreases in the activity and gene expression of antioxidant enzymes [51][52][53][54][55][56]. As an important physiological effector of apoptosis [57], ROS induce the disruption of the mitochondrial membrane potential (MMP) and the formation of the mitochondrial apoptosis-induced channel promoted by Bax, through which cytochrome c is released from mitochondria [58]. Cytochrome c in the cytoplasm can activate caspase-9, followed by activation of Caspase-3, promoting the apoptotic process [59].
In the present study, the number of apoptotic cells showed no significant difference between + Se and control broilers from 7 to 21 days of age. In addition, no significant difference was observed in the Bcl-2/Bax ratio between + Se and control broilers during the experiment. These results suggest that 0.4 mg/kg Se supplied in the diet had almost no obvious effect on apoptosis in the broiler jejunum.
Recent studies have shown that Se has a protective action against cell apoptosis induced by AFB1 in poultry [15,16]. In the present research, the number of apoptotic cells and the expression of Caspase-3 mRNA in AFB1 + Se broilers were significantly lower than those in AFB1 broilers during the experiment, and the expression of Bcl-2 mRNA and the Bcl-2/Bax ratio were significantly higher than those in AFB1 broilers. These results indicate that a diet supplemented with 0.4 mg/kg Se may protect against AFB1-induced jejunal apoptosis in broilers through up-regulation of the Bcl-2/Bax ratio. Similar results were also reported in the broiler thymus and spleen by Chen and Wang [15,16]. Previous research showed that AFB1-induced apoptosis can be caused by lipid peroxidation and oxidative DNA damage [60,61]. Selenium, however, can repress ROS-mediated apoptosis by counteracting ROS production and mitochondrial dysfunction [62,63], which may be related to the antioxidant effects of Se [64]. Our results suggest that appropriate dietary Se could inhibit AFB1-induced apoptosis, possibly through its antioxidant function. In comparison with the control group, the Bcl-2 expression and the Bcl-2/Bax ratio in AFB1 + Se broilers were still significantly decreased, whereas Bax expression remained at substantially the same level. This suggests that supplemental Se can protect against AFB1-induced apoptosis to some extent but cannot restore it to the normal level of the control group, because Bcl-2 family members can be activated or suppressed by complex factors [65]. The mechanism(s) of this observed action require further investigation.
Conclusions
In conclusion, 0.3 mg/kg AFB1 in the diet can induce an increase of cell apoptosis, a decrease of Bcl-2 mRNA expression, and an increase of Bax and Caspase-3 mRNA expression in broilers' jejunum. However, supplementation of dietary sodium selenite at the concentration of 0.4 mg/kg Se may ameliorate AFB1-induced apoptosis by increasing Bcl-2 mRNA expression, and decreasing Bax and Caspase-3 mRNA expression. | 2016-03-22T00:56:01.885Z | 2014-12-01T00:00:00.000 | {
"year": 2014,
"sha1": "48002fbae531d9a27ce32b5162d30d02a45104b1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/11/12/13130/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "48002fbae531d9a27ce32b5162d30d02a45104b1",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
247983979 | pes2o/s2orc | v3-fos-license | Adjuvant I-131 therapy for T0–3 N1b M0 differentiated thyroid cancer with many (≥ 5) positive nodes
Background In patients with well-differentiated thyroid cancer, there is controversy about the prognostic importance of a large number of positive neck nodes and the potential value of radioiodine therapy. The purpose of this study was to evaluate this issue in the group of patients for whom it is most clinically important: those with classic histology and favorable T and M stage. Materials and methods Twenty-five patients met the following inclusion criteria: classic histology of papillary or follicular thyroid carcinoma treated with total thyroidectomy and neck dissection followed by adjuvant I-131 treatment in our department between January 1, 2003, and December 31, 2013; adult age of > 21 years; and American Joint Committee on Cancer (AJCC) stage (8th edition) of T0–3, N1b with ≥ 5 positive nodes, and M0. Results The median positive node number was 10 (range, 5–31). The median adjuvant I-131 dose was 158 mCi (range, 150–219 mCi). The median follow-up in patients without recurrence after treatment was 7.3 years. The 10-year actuarial rates were favorable: overall survival, 100%; freedom from visible recurrence, 82%; and freedom from visible or biochemical recurrence, 72%. Conclusion Recurrence was infrequent in our study population with ≥ 5 positive nodes following moderate-dose adjuvant I-131 treatment. These results are valuable in directing initial adjuvant therapy and follow-up intensity. Our results do not inform the question of the use of postoperative thyroglobulin (Tg) level to select N1b patients for low-dose I-131 treatment.
Introduction
In patients with well-differentiated thyroid cancer, there is controversy about the prognostic importance of a large number of positive neck nodes [1][2][3][4][5][6]. The limitation of published studies on this subject is that they include patients with advanced T stage (pT4), distant metastases (M1), low node number (< 5), inconsistent radioiodine therapy, and/or short follow-up. Our department has a long history of treating thyroid cancer with a standardized policy of moderate-dose radioiodine soon after total thyroidectomy and neck dissection to clear gross disease. The purpose of this report is to document long-term outcomes in a relatively homogenous study population for whom the issue of a high node number is most clinically important: those with AJCC stage T0-3, N1b disease with ≥ 5 positive nodes and M0.
Materials and methods
With approval from our institution's review board (IRB201500689), we reviewed the medical records of the 25 patients who met the inclusion criteria of this study: classic histology papillary or follicular thyroid carcinoma treated with total thyroidectomy and neck dissection followed by adjuvant I-131 treatment in our department between January 1, 2003, and December 31, 2013; adult age of > 21 years; and American Joint Committee on Cancer (AJCC) stage (8th edition) of T0–3, N1b with ≥ 5 positive nodes, and M0. Study population characteristics are detailed in Table 1.
Although we did not exclude older patients, all patients in our study were < 50 years old at the time of thyroidectomy. All patients were pathologic stage N1b based on positive nodes in the lateral neck, with or without nodes also in the central compartment. During the years of treatment in this study, we did not check serum thyroglobulin (Tg) level after thyroidectomy to determine the need for or dose of I-131 treatment. All I-131 treatments were delivered by the senior author of this paper (RJA) with a standard protocol for iodine depletion. Following adjuvant I-131 treatment, patients were followed by an endocrinologist with neck ultrasound and unstimulated serum Tg level. Abnormal findings were further evaluated with additional imaging studies, as indicated.
Statistics

SAS and JMP software were utilized for statistical analyses (SAS Institute, Cary, NC). The Kaplan-Meier product limit method provided actuarial outcome estimates. Primary endpoints in this study included overall survival, with an event being death from any cause; cause-specific survival (CSS), with an event being death from thyroid cancer or complications of treatment; visible recurrence-free survival, with an event being recurrent cancer that was visible on ultrasound or computed tomography (CT) scan; and Tg-only recurrence-free survival, with an event being rising Tg with no visible evidence of tumor on ultrasound or CT scan.
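The authors computed these actuarial estimates in SAS/JMP; as an illustration only, the same Kaplan-Meier product-limit estimate can be sketched in Python with the lifelines package (an assumption, not the study's tooling) using placeholder follow-up data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Placeholder follow-up data: time in years and event indicator
# (1 = visible or Tg-only recurrence, 0 = censored at last follow-up).
df = pd.DataFrame({
    "years": [2.1, 7.3, 5.0, 9.8, 3.4, 10.2, 6.6, 8.1],
    "event": [1,   0,   1,   0,   0,   0,    1,   0],
})

kmf = KaplanMeierFitter()
kmf.fit(df["years"], event_observed=df["event"],
        label="recurrence-free survival")

# Actuarial estimate at 10 years, analogous to the reported 10-year rates.
print(kmf.predict(10.0))
kmf.plot_survival_function()  # Kaplan-Meier product-limit curve
```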
Results
The median Tg follow-up for patients without disease recurrence after initial radioactive iodine was 73 months and the median clinical follow-up was 88 months. Seven (28%) patients demonstrated a tumor recurrence after adjuvant I-131 treatment: 5 with visible tumor in the neck and 2 with the only evidence of recurrence being rising serum Tg. Of the 7 patients with a tumor recurrence, 6 underwent salvage therapy with curative intent, using neck surgery, external-beam radiotherapy, and/or repeat I-131 treatment. At last follow-up, 92% of the 25 patients (23/25) had no evidence of disease and no patient had died of thyroid cancer. Figure 1 shows actuarial plots for the main outcome endpoints.
There was no significant association between number of nodes and tumor recurrence (visible plus Tg-only): ≤ 10 nodes, 23% vs. > 10 nodes, 33%; and no significant association between the presence of extranodal extension (ENE) and tumor recurrence: ENE present, 31% vs. ENE absent, 25%; all comparisons had a p-value > 0.5.

Discussion

Table 2 lists the major published series that analyzed the prognostic importance of a high number of positive nodes. Most of these studies conclude that a higher positive node number is a major prognostic factor, some with the cutoff at 5 positive nodes and others with it at 10 positive nodes. The limitation of these studies is that most include patients with advanced T stage (pT4), distant metastases (M1), a low number (< 5) of positive nodes, inconsistent radioiodine therapy, and/or short follow-up. Moreover, most of these studies show high recurrence rates in patients with a high number of positive nodes. For example, the recent series from the University of Alberta reported a 10-year relapse-free survival rate of only 40% in node-positive patients [5].
The value of our small series is its relatively homogeneous study population in terms of the factors for which the issue of node number influences the decision for I-131 treatment. Using a standardized program of near-total thyroidectomy, neck dissection to clear gross adenopathy, and 150-200 mCi I-131, we achieved excellent long-term outcomes in patients with a high number of positive nodes (minimum 5, median 10).
Our series has two major limitations: its inability to compare I-131 doses and its lack of postoperative serum Tg levels. Our results suggest that 150-200 mCi adjuvant I-131 produces high cure rates, but we are unable to say whether lower doses would be equally efficacious. Similarly, beyond the positive node number, measuring serum Tg levels at least 4 weeks after thyroidectomy can inform the risk to our patients.
Conclusion
Recurrence was infrequent in our study population with ≥ 5 positive nodes following moderate-dose adjuvant I-131 treatment. These results are valuable in directing initial adjuvant therapy and follow-up intensity. Our results do not inform the question of the use of postoperative Tg level to select N1b patients for low-dose I-131 treatment. | 2022-04-07T15:24:42.860Z | 2022-03-14T00:00:00.000 | {
"year": 2022,
"sha1": "0783b699ce0a24bcccd256d7ddf43c2ba5fe6c5c",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.viamedica.pl/rpor/article/download/RPOR.a2022.0010/66026",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f57a00fcae00117a234280357bc520f5d35fb10d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232125510 | pes2o/s2orc | v3-fos-license | Increased region of surround stimulation enhances contextual feedback and feedforward processing in human V1
The majority of synaptic inputs to the primary visual cortex (V1) are non-feedforward, instead originating from local and anatomical feedback connections. Animal electrophysiology experiments show that feedback signals originating from higher visual areas with larger receptive fields modulate the surround receptive fields of V1 neurons. Theories of cortical processing propose various roles for feedback and feedforward processing, but systematically investigating their independent contributions to cortical processing is challenging because feedback and feedforward processes coexist even in single neurons. Capitalising on the larger receptive fields of higher visual areas compared to primary visual cortex (V1), we used an occlusion paradigm that isolates top-down influences from feedforward processing. We utilised functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis methods in humans viewing natural scene images. We parametrically measured how the availability of contextual information determines the presence of detectable feedback information in non-stimulated V1, and how feedback information interacts with feedforward processing. We show that increasing the visibility of the contextual surround increases scene-specific feedback information, and that this contextual feedback enhances feedforward information. Our findings are in line with theories that cortical feedback signals transmit internal models of predicted inputs.

Significance Statement

The visual system has circuit mechanisms for processing scene context. These circuits involve lateral and feedback inputs to neurons. These inputs interact with feedforward inputs and modulate neuronal responses to visual stimuli presented outside their receptive fields. Systematically investigating independent contributions of feedback and feedforward processes is challenging because they coexist even in single neurons. Here we use an occlusion paradigm to isolate feedback and lateral signals in human participants viewing natural scene images in fMRI. We show that increasing the visibility of the contextual surround increases scene-specific feedback information, which also enhances feedforward signals. Our findings are in line with theories that cortical feedback signals carry abstract internal models that combine with more detailed representations in primary visual cortex.
Sensory stimulation triggers a cascade of processing in a hierarchy of visual areas. This feedforward processing meets recurrent activity from the previous sensory input and triggers recurrent activity that will meet the next expected visual input. Recurrent processing contextualises and predicts the incoming signal and updates internal models and future recurrent streams. The contextualisation of feedforward information by feedback signals is essential for our understanding of cortical processing (Gilbert and Li, 2011). We know from animal recordings that cortical neurons are contextually modulated when their response to a feedforward stimulus feature is modified by the presence of surrounding features (Sugita, 1999; Shushruth, 2011). In visual cortex, this contextual information can be located far in the surround of a neuron's receptive field. Consequently, contextual modulation of neurons is exerted by cortical feedback and lateral inputs (Angelucci, 2002). Cortical feedback inputs, at least in non-human primate cortex, arrive at discrete portions of cortical pyramidal neurons, mainly the apical dendrites that branch up to layer 1 (Douglas and Martin, 2007). Feedback inputs are therefore […] (Larkum, 2013). This perspective requires techniques to probe brain processing that detect neuronal inputs, advancing previous studies that mainly measure neuronal outputs (i.e. spiking activity; Larkum et al., 2018; Muckli et al., 2015). Functional magnetic resonance imaging (fMRI) is one such technique that detects pre- and postsynaptic inputs, offering a means to measure contextual feedback information to a region of cortex.

Understanding the nature of contextual modulation transmitted by cortical feedback and lateral interaction is vital for understanding the brain in behavioural and cognitive contexts (Gilbert and Sigman, 2007). This importance of cortical feedback and lateral interaction arises because contextual modulations on a neuron include influences from higher-level top-down processes including expectation, prior experience and goal-directed behaviour, which originate in higher cortical areas (Muckli and Petro, 2013). Therefore, describing neuronal substrates of cognition in brain networks including sensory areas requires us to measure not only stimulus-[…] 2016), therefore functionally determining the brain's response to its environment (Friston, 2010; Clark, 2015).

We used fMRI, a brain imaging measure of energy consumption, and multivoxel pattern analysis (MVPA) to investigate how global natural scene features contextually modulate human V1. Our approach complements non-classical receptive field studies in rodent and monkey cortex, which measure spikes in response to a feedforward stimulus relative to contextual surround stimulation. However, the proposed tuning to pre- and post-synaptic activity in apical dendrites that might be detectable by fMRI allows us to capitalise on a signal that might not always be available in sharp electrode electrophysiology, where the input at the apical dendrites might not lead to a change in spiking output. Using partially occluded images, we parametrically vary the amount of global contextual information that we provide and measure the resulting contextual feedback (and lateral interaction) information to V1 both in the absence of feedforward information, and when feedback is integrated with feedforward information. If global features in the surround contextually modulate human V1, we hypothesized that scene information in non-stimulated […]

Participants

We compensated twenty-nine subjects from the University of Glasgow to participate in the experiment (n = 13 males; mean age: 24.28 years, range: 19-41 years). Subjects provided informed written consent and the experiment was approved by the local ethics committee at the University of Glasgow (CSE01063). We excluded subjects if their data was at chance level classification performance in at least one feedforward control condition (n = 5) or poorly aligned (anatomically) between functional runs (n = 3, see Voxel selection and analysis, indicating substantial body movement between scans). Below we report results from 21 subjects with stable classification in feedforward control conditions (n = 10 males; mean age: 25.29 years, range 19-41 years).

Stimuli

[…] In the 'feedback' conditions, the lower right quadrant of the scene was occluded by a white rectangle. Here we expect that the retinotopic region of V1 responding to the white portion of the image receives no meaningful feedforward input and only cortical feedback signals (and lateral inputs). The white rectangle was placed 0.5° of visual angle diagonally from the centre of the image and spanned 11.6° × 9.2°. In the so-called 'feedforward' conditions, the corresponding quadrant of the scene was shown; V1 voxels responding to the lower image quadrant in this condition contain a mixture of feedforward, lateral and feedback inputs.
We used two natural scene images for each participant, as natural scenes induce a lot of contextual associations (Bar, 2004). Each scene was 600 × 480 pixels and spanned 24° × 19.2° of visual angle. We did not normalize the images in terms of low-level visual features, such as luminance, contrast or energy at each spatial frequency, because we wanted the scenes to look as natural as possible. Smith and Muckli (2010) previously showed that contextual feedback signals in V1 cannot be solely attributed to these low-level visual features.
To investigate the contribution of surrounding contextual information to brain activity […]
[Figure 1 caption, fragment] … Near Surround, Inner Border. C) The activation for the contrast of (Target - Near Surround) used to map non-stimulated V1 is shown on the occipital cortex of one subject, with V1 in green on the inflated visualization.
Occluded region mapping

We presented subjects with three contrast-reversing checkerboards (5 Hz) twice per run. The checkerboards covered either an inner rectangular part of the occluded region (Target: 2.5° diagonally from centre, 10.2° × 7.8° of visual angle) or the border between the lower right quadrant and the rest of the stimulus (Surround). There were two types of surround checkerboard stimuli (Figure 1B): Near Surround (0.5° diagonally from fixation, 11.6° × 9.2° of visual angle) and Inside Border (1.5° diagonally from fixation). The activation in the early visual areas for the (Target - Near Surround) contrast is shown in Figure 1C.
Task and procedure

We presented scenes on a uniform grey background using MRI-compatible goggles (NordicNeuroLab) with 800 × 600 pixel screen resolution, which corresponded to 32° × 24° of visual angle. […]
MRI acquisition

We collected MRI data using a 3T Siemens Tim Trio system with a 12-channel head coil. We measured blood oxygen level dependent (BOLD) signals with an echo-planar imaging […]

Voxel selection and analysis

Excessive subject movement between runs is likely to affect correspondence between voxels from one run to another. This could introduce noise into our analysis, as we selected our […] would suggest a close anatomical alignment between the four runs. The median alignment value across subjects was 98.08% and single-subject values ranged from 77.85% to 99.31%. We excluded data from further analysis if the alignment value was below 90%, which applied to three subjects.
Furthermore, we excluded any subject with chance-level performance in any feedforward condition in the single trial analysis (significance above chance was measured using permutation […]), as it would not be meaningful to assess feedback classifier performance (or lack of) in such cases. This excluded a further five subjects. Thus, the following analyses were performed on 21 subjects.

We identified the cortical representation of the occluded region using a general linear model […]

[…] due to the one-run-out method on the four runs), to estimate the single-subject mean. We then bootstrapped these mean values from individual subjects 10000 times to estimate 95% confidence intervals (CIs) on the group mean. We counted classifier performances as significantly above chance (50%) if the 95% CIs did not contain chance-level performance. We used a permutation test (1000 samples) to compute differences between mean group classifier performances (reported p values not corrected for multiple comparisons), by shuffling the observed values across the conditions and calculating the absolute differences between the conditions. If the observed difference was in the top 5% of the differences distribution, we considered our conditions to be significantly different from each other.
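A minimal sketch of these group-level statistics (a 10000-sample bootstrap CI on subject-mean accuracies and a 1000-sample permutation test on condition differences) is given below; it illustrates the procedure described above, with invented accuracy values, and is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(subject_means, n_boot=10000, alpha=0.05):
    """Bootstrap subjects' mean accuracies to get a 95% CI on the group mean."""
    boot = rng.choice(subject_means, size=(n_boot, len(subject_means)),
                      replace=True).mean(axis=1)
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

def permutation_test(cond_a, cond_b, n_perm=1000):
    """Shuffle values across conditions; p = fraction of permuted absolute
    differences at least as large as the observed difference."""
    observed = abs(cond_a.mean() - cond_b.mean())
    pooled = np.concatenate([cond_a, cond_b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(perm[:len(cond_a)].mean() - perm[len(cond_a):].mean())
        count += diff >= observed
    return count / n_perm

# Placeholder per-subject classification accuracies (% correct, 21 subjects):
acc_full = rng.normal(62, 6, 21)    # e.g., a Full Feedback-like condition
acc_other = rng.normal(52, 6, 21)   # e.g., a reduced-surround condition

lo, hi = bootstrap_ci(acc_full)
print(f"Mean {acc_full.mean():.1f}%, 95% CI [{lo:.1f}, {hi:.1f}] (chance = 50%)")
print(f"Permutation p = {permutation_test(acc_full, acc_other):.3f}")
```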
Results

Our hypothesis is that the surround stimulation drives higher visual areas with larger receptive fields to send a contextual feedback signal to voxels in V1 responding to the occluded quadrant. We can therefore modify the surround stimulation to learn more about the nature of contextual feedback.
Increased stimulation of the surround receptive field enhances contextual feedback

We have shown previously that scene features eliciting contextual feedback to non-stimulated […]. Expanding on these findings, we assessed the amount of surrounding scene information required to elicit scene-relevant information in non-stimulated V1. We parametrically modulated the availability of surround information and trained the SVM classifier to decode between the two scenes using voxel patterns responding to the lower right quadrant when it was either occluded (feedback and lateral, but no feedforward information) or stimulated (feedforward, feedback and lateral information). SVM classification performance was used as an estimate of the amount of available information in the activation pattern.
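As a rough sketch of this decoding scheme (a linear SVM discriminating the two scenes from voxel patterns, cross-validated with the leave-one-run-out scheme over the four runs), the following scikit-learn example runs on simulated data; the pattern matrix, effect size, and trial counts are placeholders, not the study's actual design.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)

# Simulated data: 4 runs x 2 scenes x 8 trials, with n_voxels features
# standing in for the occluded-quadrant region of interest.
n_runs, n_trials, n_voxels = 4, 8, 300
X, y, runs = [], [], []
for run in range(n_runs):
    for scene in (0, 1):
        patterns = rng.normal(scene * 0.3, 1.0, (n_trials, n_voxels))
        X.append(patterns)
        y += [scene] * n_trials
        runs += [run] * n_trials
X = np.vstack(X)

# Linear SVM, leave-one-run-out cross-validation (train on 3 runs, test on 1).
clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, np.array(y), groups=np.array(runs),
                         cv=LeaveOneGroupOut())
print(f"Mean decoding accuracy: {scores.mean() * 100:.1f}% (chance = 50%)")
```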
When the image was occluded, scene classification in non-stimulated voxels improved with increasing availability of surrounding scene information (Figure 2A, left). Averaging across experiments, classification was significantly above chance once the bubbles exceeded the smallest size, except for the Large Bubbles single trial analysis (Table 1).
To further test how much surround information contributes to visual processing, we compared the Fully Visible scene with other feedforward conditions with a reduced scene surround, as well as the feedback conditions (Figure 4). We trained the classifier on the Fully Visible scene and tested on the other conditions. In a fully visible scene both parts of the information are available simultaneously, and the classifier might rely more on the rich, fine-grained feedforward information. However, we found that Fully Visible feedforward to feedback cross-classification was only possible with large amounts of scene information surrounding the occluded region (Table 3). Fully Visible to Full Feedback cross-classification was above chance, while Large, Medium and Small Bubbles did not reach significance in the feedback conditions. In […] Full Feedback (AB analysis, Experiment 1 only).
After restricting voxels to the occluded region using pRF mapping, we saw that classifier performance decreased in some conditions, but the pattern of the results remained the same (Figure 6). Due to the low numbers of subjects in each experiment for whom we were able to perform pRF mapping, we did not calculate confidence intervals for some of the mean values. […] accounts for the full extent of the surround modulation effects (Angelucci and Bressloff, 2006).
There was no meaningful feedforward stimulation in our occluded region of V1, and yet we could decode two scenes using information patterns corresponding to this non-stimulated region. This differential information must originate from contextual information in the scene surround.
Classical receptive fields are smaller than the surround, hence neurons in the occluded area in V1 most likely receive information about the rest of the scene via cortical feedback from higher areas.
Since we are measuring a population of neurons using fMRI, as opposed to single cells, it is hard to estimate how widespread the effect of the surround receptive field is. V1 receives feedback from many cortical areas, which have increasing receptive field sizes moving to higher and more abstract processing areas (Dumoulin and Wandell, 2008). Therefore, we expect that influence from the surround might be restricted to regions close to the occluded region for feedback coming from V2, for example, but might transmit information from a larger area of the surrounding scene for feedback originating from higher visual areas.

We found that larger bubbles in the surround lead to more informative feedback in the occluded region. This may be because we are revealing more of the overall scene structure as we increase the bubble size. Tang […] in the occluded region is present even if participants never see the fully visible scene and were not familiarised with it. We also found that increased exposure to the full scene did not improve feedback in the conditions with reduced surround. Therefore, it appears that the contextual feedback we observed arises from the scene structure available in each trial, or knowledge of natural scene properties in general, but familiarity with the specific scene is not required for informative feedback signals. This could be because natural scenes have predictable scene statistics and much of the information they contain is redundant (e.g. Attneave, 1954; Barlow, 1961; Torralba and Oliva, 2003).
We demonstrated that cortical feedback information forms a part of early visual cortex activity during visual stimulation. Using a brain imaging technique we have corroborated […] | 2021-03-06T14:13:19.528Z | 2021-02-28T00:00:00.000 | {
"year": 2021,
"sha1": "509d218591b56ea64a2ab24da29d6f32aa59cc88",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/02/28/2021.02.27.433018.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "509d218591b56ea64a2ab24da29d6f32aa59cc88",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Biology"
]
} |
205896062 | pes2o/s2orc | v3-fos-license | Randomized trial of interferon‐ and ribavirin‐free ombitasvir/paritaprevir/ritonavir in treatment‐experienced hepatitis C virus–infected patients
Approximately 2 million Japanese individuals are infected with hepatitis C virus and are at risk for cirrhosis, end‐stage liver disease, and hepatocellular carcinoma. Patients in whom interferon (IFN)/ribavirin (RBV) therapy has failed remain at risk as effective therapeutic options are limited. This phase 2, randomized, open‐label study evaluated an IFN‐ and RBV‐free regimen of once‐daily ombitasvir (ABT‐267), an NS5A inhibitor, plus paritaprevir (ABT‐450), an NS3/4A protease inhibitor dosed with ritonavir (paritaprevir/ritonavir), in pegylated IFN/RBV treatment–experienced Japanese patients with hepatitis C virus subtype 1b or genotype 2 infection. Patients without cirrhosis (aged 18‐75 years) with subtype 1b infection received ombitasvir 25 mg plus paritaprevir/ritonavir 100/100 mg or 150/100 mg for 12 or 24 weeks; patients with genotype 2 infection received ombitasvir 25 mg plus paritaprevir/ritonavir 100/100 mg or 150/100 mg for 12 weeks. Sustained virologic response (SVR) at posttreatment week 24 (SVR24) was the primary endpoint. Adverse events were collected throughout the study. One hundred ten patients received ≥1 dose of study medication. In the subtype 1b cohort, SVR24 rates were high (88.9%‐100%) regardless of paritaprevir dose or treatment duration. In the genotype 2 cohort, SVR24 rates were 57.9% and 72.2% with 100 mg and 150 mg of paritaprevir, respectively. The SVR24 rate was higher in patients with subtype 2a (90%) than 2b (27%). Concordance between SVR12 and SVR24 was 100%. The most common adverse events overall were nasopharyngitis (29%) and headache (14%). Conclusion: In this difficult‐to‐treat population of patients in whom prior pegylated IFN/RBV had failed, ombitasvir/paritaprevir/ritonavir demonstrated potent antiviral activity with a favorable safety profile among Japanese patients with hepatitis C virus genotype 1b or 2a infection. (Hepatology 2015;61:1523–1532)
Chronic hepatitis C viral (HCV) infection is a significant global health problem affecting approximately 170 million people worldwide and causing almost 500,000 deaths each year from HCV-related liver disease. 1 Phylogenetic studies suggest that the HCV epidemic was introduced in waves across the globe; HCV began to infect large numbers of Japanese youth in the 1920s, southern Europeans in the 1940s, and North Americans in the 1960s and 1970s. 2 Although HCV seroprevalence is similar among these geographic areas, the impact of HCV-related morbidity and mortality has been highest among the Japanese population, where HCV-associated hepatocellular carcinoma rates are three-fold higher than in Italy and six-fold higher than in the United States. Of the 2 million Japanese patients who are seropositive, roughly 70% are infected with subtype 1b, 20% with subtype 2a, and the remainder with subtype 2b. 3 In contrast to the United States and many parts of Europe, only 1%-2.5% of Japanese patients carry subtype 1a. 4 Because the risk of HCV-related morbidity is clearly linked to duration of infection, there is an urgent need for potent therapeutic interventions among Japanese patients. Although the addition of the first-generation protease inhibitors telaprevir and boceprevir improved the overall efficacy rates of pegylated interferon (pegIFN) plus ribavirin (RBV) in patients with genotype 1 infection in the United States, Europe, Asia, and Japan, the adverse event (AE) profile was also additive. [5][6][7][8][9] In addition to the well-known side effects of IFN-based therapy, rash was seen in 50% of patients receiving telaprevir, and the risk of anemia increased significantly among patients who received either telaprevir or boceprevir. Although the protease inhibitor simeprevir was shown to be efficacious and better tolerated in Japanese patients with genotype 1 infection in combination with pegIFN/RBV, patients still experienced the AEs associated with an IFN-based therapeutic backbone, such as severe fatigue, depression, poor appetite, and weight loss. 10,11 In addition, the effect of triple therapy on efficacy was blunted among those with a history of prior treatment failure with pegIFN/RBV therapy. For example, in Japanese patients with genotype 2 infection, the rate of sustained virologic response (SVR) with IFN-based regimens ranged from 64% to 82% in treatment-naive patients [12][13][14] compared with only 40% in treatment-experienced patients. 12 Response also varied by subtype, with subtype 2a having higher SVR rates compared with subtype 2b. 12,13 Combination therapy with direct-acting antiviral (DAA) agents with different mechanisms of action has demonstrated promising efficacy rates and favorable tolerability profiles in patients with HCV. [15][16][17] Ombitasvir (formerly ABT-267) is an HCV NS5A inhibitor dosed once daily. Paritaprevir (formerly ABT-450) is an HCV NS3/4A protease inhibitor that is administered with low-dose ritonavir (paritaprevir/ritonavir) to increase paritaprevir plasma levels and half-life, enabling once-daily dosing. 18 Both ombitasvir and paritaprevir have potent in vitro antiviral activity against multiple subtypes, including 1a, 1b, 2a, 2b, 3a, 4a, and 6a. 19,20 In this study, we examined the efficacy and safety of an all-oral pegIFN- and RBV-free regimen of ombitasvir/paritaprevir/ritonavir in pegIFN/RBV treatment-experienced Japanese patients with HCV subtype 1b or genotype 2 without cirrhosis.
Patients and Methods
Study Design. This was a phase 2, randomized, open-label, parallel-arm, dose- and duration-finding study (ClinicalTrials.gov identifier NCT01672983) (Fig. 1). The study was approved by all institutional review boards and conducted in accordance with the International Conference on Harmonisation guidelines and the Declaration of Helsinki. Written informed consent was obtained from all patients before enrollment.
Patient Population. Patients were enrolled from August 2012 to December 2012 at 18 sites in Japan. Eligible patients were adults (aged 18-75 years) with chronic HCV subtype 1b or genotype 2 infection and HCV RNA levels >10,000 IU/mL without cirrhosis who previously failed pegIFN/RBV therapy. The HCV genotype was assessed at screening using the Versant HCV Genotype Inno-LiPA Assay (LiPA, v. 2.0 or higher; Siemens Healthcare Diagnostics, Tarrytown, NY).
Patients with subtype 1b were eligible if they were null responders (i.e., did not achieve a 2 log10 IU/mL reduction in HCV RNA levels at week 12 after at least 10 weeks of treatment with pegIFN/RBV) or partial responders (i.e., achieved at least a 2 log10 IU/mL reduction in HCV RNA at week 12 after a minimum of 20 weeks of treatment with pegIFN/RBV but had HCV RNA levels above the lower limit of detection at the end of treatment). Patients with genotype 2 were eligible if they were null or partial responders or relapsers (i.e., patients with undetectable levels of HCV RNA at the end of at least one course of pegIFN/RBV and whose levels became detectable within 24 weeks after the end of that treatment). Patients with human immunodeficiency virus or hepatitis B virus coinfection or any cause of liver disease other than chronic HCV infection were excluded. Full inclusion and exclusion criteria are in Supporting Table 1.
Efficacy. Plasma HCV RNA levels were determined by a central laboratory, using the COBAS TaqMan real-time reverse-transcriptase polymerase chain reaction assay 2.0 (Roche, Nutley, NJ), which has a lower limit of detection of 15 IU/mL for genotype 1 and 5.6 IU/mL for genotype 2 and a lower limit of quantitation (LLOQ) of 25 IU/mL for both genotypes.
The primary efficacy endpoint was the percentage of patients who achieved SVR (HCV RNA <25 IU/mL) 24 weeks after the last dose of study drug (SVR24). Secondary efficacy endpoints were the percentage of patients with SVR 12 weeks after the last dose of study drug (SVR12) and the percentage of those patients with an end-of-treatment response (HCV RNA <25 IU/mL at week 12 for the 12-week treatment arms or at week 24 for the 24-week treatment arms). The percentage of patients with SVR 4 weeks after the last dose of study drug (SVR4) and the percentage who achieved rapid virologic response (HCV RNA <25 IU/mL at treatment week 4) were also determined.
On-treatment virologic failure included rebound (two consecutive HCV RNA measurements greater than or equal to LLOQ after achieving HCV RNA levels less than LLOQ during treatment, or an increase in HCV RNA levels >1 log10 IU/mL from nadir in two consecutive measurements at any time point during treatment) or failure to suppress (all on-treatment HCV RNA values ≥ LLOQ) with at least 6 weeks of treatment. Relapse was defined as two consecutive HCV RNA measurements greater than or equal to LLOQ between the final treatment visit and posttreatment week 24 and included patients who completed treatment (at least 77 days of study drug for the 12-week arms and at least 161 days of study drug for the 24-week arms) and had HCV RNA levels less than LLOQ at the final treatment visit.

Fig. 1. Study design. *Patients not achieving a 2 log10 IU/mL reduction in HCV RNA at week 12 after 10 weeks of pegIFN/RBV. †Patients who achieved a 2 log10 IU/mL reduction in HCV RNA at week 12 after 20 weeks of pegIFN/RBV but had HCV RNA levels above the lower limit of detection at treatment end. ‡Patients with undetectable levels of HCV RNA after one or more courses of pegIFN/RBV treatment who had detectable HCV RNA within 24 weeks. §Dose: 25 mg QD. Abbreviations: OBV, ombitasvir; PTV, paritaprevir; QD, once daily; r, ritonavir.
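The rebound and failure-to-suppress rules above are essentially a classification algorithm over a patient's on-treatment HCV RNA series. Below is a minimal Python sketch of how they could be encoded; it is an illustration rather than the study's analysis code: the function name and data representation are invented, and the at-least-6-weeks-of-treatment condition is assumed to be checked by the caller.

```python
LLOQ = 25.0  # IU/mL, the lower limit of quantitation for both genotypes

def classify_on_treatment(hcv_rna_series):
    """Return 'rebound', 'failure_to_suppress', or None for an ordered
    series of positive on-treatment HCV RNA values (IU/mL)."""
    suppressed = False    # has HCV RNA < LLOQ ever been achieved?
    nadir = float("inf")  # lowest value among earlier measurements
    consecutive = 0       # consecutive measurements meeting a criterion
    for value in hcv_rna_series:
        meets = (suppressed and value >= LLOQ) or (
            nadir < float("inf") and value > 10 * nadir  # >1 log10 rise
        )
        consecutive = consecutive + 1 if meets else 0
        if consecutive >= 2:  # two consecutive qualifying measurements
            return "rebound"
        nadir = min(nadir, value)
        if value < LLOQ:
            suppressed = True
    return None if suppressed else "failure_to_suppress"
```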
Because of known limitations of the LiPA 2.0 assay (i.e., an inability to universally determine subtypes for genotype 2 virus), a 329-nucleotide region of HCV NS5B in baseline samples from all patients with genotype 2 infection was sequenced and used to perform a phylogenetic analysis. 21 All efficacy and safety analyses by HCV genotype were based on the LiPA 2.0 assay unless otherwise stated. Patients with virologic failure had resistance-associated variants determined for HCV NS3/4A and NS5A at baseline and at the time of failure by population nucleotide sequencing. Translated amino acid sequences were used to identify resistance-associated variants.
Safety. At each study visit AEs were evaluated. Data on serious AEs (SAEs) were collected throughout the study period. Data on nonserious AEs were collected during the treatment-emergent period: from study drug initiation to 30 days after treatment cessation. All AEs were coded using the Medical Dictionary for Regulatory Activities. Data on treatment-emergent AEs are reported.
Statistical Analyses. A minimum of 96 subjects (16 subjects per treatment arm) and no more than 120 subjects (20 subjects per arm) were planned to be enrolled. A sample size of 16 patients per treatment arm would provide a two-sided 95% confidence interval (CI) of 54.4%-96.0% using binomial exact methodology, assuming an observed SVR24 rate of 80%. All randomized patients who received at least one dose of study drug were included in the intent-to-treat and safety populations. Analyses of rapid virologic response, end-of-treatment response, and all SVR endpoints were performed on the intent-to-treat population. A two-sided 95% binomial exact CI was calculated for the SVR24 rate for each treatment regimen. The effects of treatment duration and paritaprevir dose on SVR24 in the subtype 1b cohort were assessed using the stratum-adjusted Mantel-Haenszel method, with adjustment for treatment duration (when testing for paritaprevir dose), paritaprevir dose (when testing for treatment duration), and prior treatment response (null response, partial response). In the genotype 2 cohort, the effect of paritaprevir dose on SVR24 rate was evaluated using the Fisher exact test. The SVR24 rates and corresponding 95% exact binomial CIs were calculated within subgroups based on prior HCV treatment response and, for the genotype 2 cohort, HCV subtype determined by LiPA 2.0 and phylogenetic analysis. All statistical tests and CIs were two-sided, with a significance level of 0.05. Additional details of the statistical analysis of the primary endpoint are available in the Supporting Information.
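Assuming that "binomial exact methodology" refers to the standard Clopper-Pearson construction, the planning interval quoted above can be reproduced with a short sketch (the function name is illustrative):

```python
from scipy.stats import beta

def exact_binomial_ci(k, n, alpha=0.05):
    """Two-sided exact (Clopper-Pearson) binomial confidence interval."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# 13 responders of 16 (~80% observed SVR24) gives roughly (0.544, 0.960),
# matching the protocol's planning interval of 54.4%-96.0%
print(exact_binomial_ci(13, 16))
```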
Results
Patient Characteristics. Of 144 patients screened, 110 (subtype 1b, n = 73; genotype 2, n = 37) were randomized and received at least one dose of study medication (Fig. 2). Baseline characteristics are presented in Table 1. Among all patients combined, 46.4% were male, the mean age was 59.2 years, and the mean body mass index was 23.6 kg/m2. Most patients in the subtype 1b cohort were null responders (66.7%-72.2%), whereas patients who relapsed were predominant in the genotype 2 cohort (89.5%-94.4%). The phylogenetic analysis identified 20 patients with subtype 2a and 15 patients with subtype 2b infection. In addition, two patients initially identified as having genotype 2 based on the LiPA 2.0 assay were identified as having subtype 1b based on phylogenetic analysis (Supporting Table 2). A sensitivity analysis of SVR24, which was performed using HCV genotype assignments as determined by phylogenetic analysis, yielded SVR24 rates that were similar to those using the LiPA 2.0 assay. Therefore, the two patients initially identified as having genotype 2 did not affect the overall results in patients with subtype 1b.
Virologic Response. On- and posttreatment virology results for patients with subtype 1b and genotype 2 infection are shown in Fig. 3A and B, respectively. Patients with subtype 1b had SVR24 rates of 88.9%-100% across the 100-mg and 150-mg paritaprevir dosing arms. Two subtype 1b patients (both of whom received paritaprevir/ritonavir 150/100 mg for 12 weeks) did not achieve SVR24; one relapsed at posttreatment week 2, and one discontinued study treatment due to an AE. No significant differences in SVR24 rates were observed by paritaprevir dose or treatment duration.

Fig. 3. Efficacy of ombitasvir/paritaprevir/ritonavir in patients with HCV infection. Virologic response by treatment group for patients classified according to LiPA 2.0 analysis as having HCV subtype 1b (A) or HCV genotype 2 (B) infection and by subtype of genotype 2 according to phylogenetic analysis (C). Error bars represent 95% confidence intervals. Two patients identified as genotype 2 by LiPA 2.0 analysis were found to be HCV subtype 1b during phylogenetic analysis; both of these patients achieved SVR12 and SVR24. *Dose: 25 mg. Abbreviations: EOTR, end-of-treatment response; GT1b, HCV subtype 1b; OBV, ombitasvir; PTV, paritaprevir; r, ritonavir; RVR, rapid virologic response.
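As a quick consistency check on the genotype 2 rates, the responder counts can be back-calculated. Note that the arm sizes are inferred (n = 19 follows from the "7/19" failures reported below, and n = 18 from the cohort total of 37), so the counts are assumptions rather than figures stated in the text:

```python
# back-calculated responder counts over inferred genotype 2 arm sizes
arms = {"genotype 2, paritaprevir 100 mg": (11, 19),
        "genotype 2, paritaprevir 150 mg": (13, 18)}
for arm, (responders, n) in arms.items():
    print(f"{arm}: SVR24 = {100 * responders / n:.1f}%")  # 57.9% and 72.2%
```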
Complete concordance (100%) was observed between SVR12 and SVR24. Further, all patients who achieved SVR24 maintained the response throughout the follow-up period (posttreatment week 48).
Virologic Failure. No patient with subtype 1b infection had on-treatment virologic failure; one patient experienced posttreatment relapse. The patient who relapsed had resistance-associated variants (RAVs) in NS3 (D168V) and NS5A (Y93H) at the time of virologic failure. The NS5A RAV, Y93H, was present at baseline in four patients with subtype 1b infection; all achieved SVR24. In patients with genotype 2 infection, on-treatment virologic failure was more frequent in those who received paritaprevir/ritonavir 100/100 mg (36.8% [7/19]) than in those who received paritaprevir/ritonavir 150/100 mg.

Safety. Rates of any reported treatment-emergent AEs were similar across treatment arms (73.7%-88.9%) (Table 3, Supporting Table 3), and most were mild in severity. The most common treatment-emergent AEs were nasopharyngitis (29.1%) and headache (13.6%) (Table 3). Five SAEs (n = 5/110, 4.5%) were reported: autoimmune hepatitis, fluid retention, femoral fracture, tibia fracture, and ischemic colitis. Autoimmune hepatitis and fluid retention were considered by the investigator as having a reasonable possibility of being treatment-related. One patient (n = 1/110, 0.9%) discontinued study drug because of an SAE of fluid retention. There were no grade 3 or 4 abnormalities in hemoglobin, alkaline phosphatase, or total bilirubin during treatment (Supporting Table 4). Two patients experienced grade 3 alterations in liver enzymes. One patient experienced a grade 3 alanine aminotransferase level at day 156. This patient was retrospectively found to have laboratory values at screening and baseline suggestive of previously undiagnosed autoimmune disease. Another patient, who had a grade 3 aspartate aminotransferase level (206 U/L) at baseline, experienced a grade 3 aspartate aminotransferase level of 171 U/L on day 7. Both patients were asymptomatic, completed treatment, and achieved SVR24. One patient (1/110, 0.9%) experienced a grade 2 abnormality in hemoglobin, and two patients (2/110, 1.8%) experienced a grade 2 abnormality in bilirubin during treatment. The grade 2 bilirubin abnormality was not associated with an increase in transaminases, peaked within the first 2 weeks of treatment, and resolved spontaneously with continuation of treatment.
Discussion
Patients with HCV infection who have a history of prior IFN/RBV treatment failure are a difficult-to-treat population, particularly those who are prior null responders. In this randomized phase 2 study of an IFN- and RBV-free regimen of ombitasvir/paritaprevir/ritonavir in pegIFN/RBV treatment-experienced Japanese patients, high SVR rates were observed in patients with HCV subtype 1b infection, regardless of paritaprevir dose or treatment duration. Among patients with genotype 2 infection, SVR rates were higher in those receiving paritaprevir 150 mg and those with subtype 2a infection. Together, these results show promising antiviral activity for this all-oral 2-DAA combination regimen.
These results, in previously treated Japanese patients with HCV subtype 1b, compare favorably with published reports of other IFN-free and RBV-free regimens and IFN-free, RBV-containing regimens in treatment-experienced non-Japanese populations with genotype 1 infection. Among 12-week treatment regimens that contained two DAAs alone or in combination with RBV, SVR12 rates were 82% with daclatasvir/asunaprevir in null responders or those who were intolerant of IFN 22 and between 94% and 96% with sofosbuvir plus ledipasvir in treatment-experienced patients. 23 In addition, SVR12 rates with sofosbuvir/ledipasvir were lower in patients with subtype 1b infection versus those with 1a infection. In trials evaluating regimens of the three DAAs ombitasvir/paritaprevir/ritonavir and dasabuvir with or without RBV, SVR12 rates of between 96.6% and 100% were achieved in patient populations with HCV genotype 1b infection. 15,17,[24][25][26] Rates of SVR12 with ombitasvir/paritaprevir/ritonavir and dasabuvir were not greatly influenced by prior IFN-treatment response 17,24 but were slightly higher in patients with HCV subtype 1b versus 1a infection. 15,17,25,26 The efficacy of IFN-free DAA regimens has also been studied in genotype 2 patients. In the POSITRON trial, 92% of patients without cirrhosis but with genotype 2 HCV infection for whom IFN treatment was not an option achieved SVR12 after 12 weeks of treatment with sofosbuvir plus RBV. 27 In the FUSION | 2018-04-03T05:20:08.749Z | 2015-03-23T00:00:00.000 | {
"year": 2015,
"sha1": "823a06a23c2f146e0f771fd77acfefe44182f160",
"oa_license": "CCBYNCND",
"oa_url": "https://aasldpubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/hep.27705",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "823a06a23c2f146e0f771fd77acfefe44182f160",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3384641 | pes2o/s2orc | v3-fos-license | Coexistence of t(2;14;11)(p16.1;q32;q23) and t(14;19)(q32;q13.3) chromosome translocations in a patient with chronic lymphocytic leukemia
Abstract Rationale: With a combination of multiple techniques, we have successfully characterized unique, complex chromosomal changes in a patient with chronic lymphocytic leukemia (CLL), a lymphoproliferative disorder. Diagnoses: The diagnosis was based on white blood cell counts, flow cytometry, and immunophenotyping, and confirmed by karyotype, fluorescence in situ hybridization, and array comparative genomic hybridization from the patient's blood culture. Interventions: The patient was given fludarabine, cyclophosphamide and rituximab (FCR) for 6 cycles. Outcomes: After completion of 6 cycles of FCR, computed tomography scans of the neck/chest/abdomen/pelvis showed that the patient was in complete remission (CR). During the 10-month follow-up, the patient's clinical course remained uneventful. Lessons: The translocation t(14;19) identified in this patient is a recurrent translocation found in patients with chronic B-cell lymphoproliferative disorders, and the 3-way translocation involving chromosomes 2, 14, and 11 may play a role as an enhancer.
Fluorescence in situ hybridization (FISH) analyses were performed in the uncultured and cultured cells using the LSI IGH and LSI MLL dual color break-apart rearrangement probes (Abbott Molecular, Inc., Des Plaines, IL) and LSI BCL11A and LSI BCL3 dual color break-apart rearrangement probes (Empire Genomics, Inc., Buffalo, NY). The uncultured cells were also tested using CLL panel (Abbott Molecular, Inc., Des Plaines, IL). All the experimental procedures followed the manufacturers' instructions.
Further array comparative genomic hybridization (CGH) analyses on the patient's DNA sample revealed the presence of an extra chromosome 12 and deletion of 13q14.11-q21, which was consistent with our karyotype analyses. Interestingly, we also found a gain of 4p16.2 (4,788,290-5,227,609 bp hg19; 0.4 Mb) containing the MSX1 (msh homeobox 1) gene (Fig. 4), which plays a pivotal role in early hematopoietic development and malignant transformation.
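As a quick arithmetic check of the reported interval size (coordinates taken from the text above):

```python
start, end = 4_788_290, 5_227_609       # hg19 coordinates of the 4p16.2 gain
print(f"{(end - start) / 1e6:.2f} Mb")  # ~0.44 Mb, reported as ~0.4 Mb
```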
The MLL gene rearrangement often occurs in acute myelocytic leukemia (AML), acute lymphoblastic leukemia, and myelodysplastic syndrome. In hematologic malignancies such as CLL, the most common abnormality is deletion of MLL (11q23), [19] whereas the MLL gene rearrangement has not been previously observed in CLL. In this study, we revealed an interesting 3-way translocation involving the MLL gene; whether it contributes to leukemia progression or even an unfavorable prognosis in CLL warrants further investigation.
It is widely believed that the presence of only the IGH rearrangement is not sufficient to induce tumorigenesis, and acquisition of additional genetic aberrations is necessary for malignant transformation. [8] Trisomy 12, observed in this case, is one such genetic anomaly. The cytogenetic abnormality of trisomy 12, associated with intermediate prognosis, is observed in up to 50% of IGH/BCL3-positive B-CLLs and was considered to act cooperatively with t(14;19) in leukemogenesis. [20] Interestingly, however, it was reported that patients with 13q deletions as a sole abnormality had the longest estimated survival times compared with other cytogenetic abnormalities. [4,21] Moreover, miR-15a and miR-16-1 are located in this region and negatively regulate BCL2 expression at a posttranscriptional level. [22] MSX1 was found to be overexpressed in cell lines derived from MCL and AML as well as in 3% of patients with MCL and AML. [23] In the present study, array CGH revealed a cryptic gain of the MSX1 gene besides trisomy 12 and del(13q14.11-q21), which has not been reported previously in CLL. These data suggest an oncogenic role for MSX1 in leukemogenesis.
In summary, we reported a rare case of an adult CLL patient with the coexistence of a classical IGH/BCL3 translocation and a three-way variant translocation BCL11A/IGH/MLL, as well as trisomy 12 and del(13q). Furthermore, a cryptic genomic alteration involving the leukemia-related MSX1 gene was found in this case at the level of array CGH. | 2018-04-03T00:00:36.909Z | 2017-12-01T00:00:00.000 | {
"year": 2017,
"sha1": "ca6c56b54292c4515ff739edefae739f964cb0ef",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000009169",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca6c56b54292c4515ff739edefae739f964cb0ef",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1200155 | pes2o/s2orc | v3-fos-license | Induction of Cytochrome P450 2A6 by Bilirubin in Human Hepatocytes
The influence of bilirubin on mRNA expression of cytochrome P450 (CYP), UDP-glucuronosyltransferase (UGT) and nuclear receptors in human hepatocytes was investigated. The treatment of the hepatocytes with 40 μg/mL bilirubin, which corresponds to hyperbilirubinemia, resulted in a 1.7-fold increase of CYP2A6 mRNA compared to the vehicle control, while CYP2A6 mRNA did not change after treatment with 1 μg/mL bilirubin, corresponding to the physiologically normal level. No significant change of mRNA expression by 40 μg/mL bilirubin treatment was observed for CYP1A2, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP3A4 and CYP3A5, UGT1A1, UGT1A3, UGT1A6, UGT2B4, UGT2B7, UGT2B10 and UGT2B15, constitutive androstane receptor (CAR), pregnane X receptor (PXR), retinoid X receptor α (RXRα) and hepatocyte nuclear factor-4α (HNF-4α). The induction profile of bilirubin was different from that of rifampicin, a typical PXR activator. This study demonstrated that CYP2A6 can be induced by bilirubin in a concentration dependent manner.
Introduction
Induction of drug metabolizing enzymes is of concern for clinical application of medicines, especially those with narrow therapeutic windows, such as immunosuppressants and anti-coagulants. For drugs whose effect is achieved by the parent drug, enzyme induction would increase the systemic clearance of the drug, resulting in lower drug exposure and reduction in pharmacological efficacy. For instance, rifampicin causes acute transplant rejection in patients treated with cyclosporine, presumably because of induction of the CYP3A4-mediated metabolism of cyclosporine [1]. In some cases, enzyme induction may increase the formation of reactive metabolites, leading to an increase in the risk of metabolite-induced toxicity. For example, induction of CYP1A enzymes results in a rise in the conversion rate of some xenobiotics, such as 2,3,7,8-tetrachlorodibenzo-p-dioxin and benzo[a]pyrene, to their reactive metabolites [2]. The induction mechanisms of CYPs have been extensively investigated. Many of the CYPs are induced in humans, including CYP1A, CYP2A, CYP2B, CYP2C, CYP2E1, and CYP3A, by a large variety of compounds including drugs, chemicals and natural products. In most cases, induction of CYPs occurs by de novo RNA and protein synthesis, as has been demonstrated in studies using transcription and translation inhibitors [3]. The induction of many CYPs occurs by a similar mechanism, where ligand activation of receptor transcription factors, including pregnane X receptor (PXR), constitutive androstane receptor (CAR), retinoid X receptor α (RXRα), hepatic nuclear factor (HNF), aryl hydrocarbon receptor (AhR) and others, leads to increased transcription [4].
In our previous study, significant correlation was observed between total bilirubin level and the systemic clearance (CL/F) of the aromatase inhibitor letrozole in healthy postmenopausal women in a population pharmacokinetic analysis, although it was not significant when other factors were incorporated into the final analysis model [5]. The significance of the total bilirubin level for CL/F of letrozole was confirmed when the sample size was increased by pooling with the data obtained in breast cancer patients (data not shown). Because an elevated serum bilirubin level suggests hepatic impairment, such as hepatocyte damage or biliary obstruction, the positive correlation between total bilirubin level and systemic clearance of letrozole was unexpected. The elimination pathway of letrozole is metabolism to the carbinol metabolite CGP44645 by hepatic CYP2A6 and CYP3A4, followed by its glucuronidation and subsequent renal excretion [6,7]. Thus, one of the potential causes of the correlation is an induction of metabolic enzymes by bilirubin. The induction mechanism of CYP3A4 has been intensively examined, and it is established that its mRNA expression is increased through PXR activation by ligand binding, for example by rifampicin, lovastatin and nifedipine [8][9][10]. On the other hand, the induction of CYP2A6 by dexamethasone, rifampicin and phenobarbital was reported to involve PXR, CAR and HNF4-α [11][12][13][14][15][16][17]. However, there is no report showing the induction of CYP2A6 or CYP3A4 by bilirubin in humans. Thus, in order to find the cause of the positive correlation between bilirubin level and CL/F of letrozole, we investigated the influence of bilirubin on mRNA expression of CYP2A6, CYP3A4, and UGTs in human hepatocytes by the reverse transcription polymerase chain reaction (RT-PCR). The effect of bilirubin on other CYPs and nuclear receptors in human hepatocytes was also investigated.
Cell Culture
Cryopreserved human hepatocytes (60-year-old Caucasian male, Celsis IVT, Baltimore, MD, USA) were thawed at 37˚C, suspended in thawing medium without glucose (Biopredic International, Rennes, France), and centrifuged at 160 × g for 2 min. Hepatocytes were resuspended in William's medium E supplemented with 10% FBS, 4 μg/mL bovine insulin, 100 IU/mL penicillin and 100 μg/mL streptomycin (Biopredic International) and cultured in a collagen-coated 24-well plate (BD Biosciences, Franklin Lakes, NJ, USA) at a density of 2.5 × 10⁵ cells/500 μL/well in a 37˚C incubator with 5% CO2 and 95% air. After 4 hours, the culture medium was replaced with serum-free William's medium E supplemented with 4 μg/mL bovine insulin, 100 IU/mL penicillin, 100 μg/mL streptomycin and 50 μM hydrocortisone hemisuccinate (incubation medium), and the cells were cultured for 20 hours in a CO2 incubator. Then, the medium was replaced with the incubation medium containing 1 or 40 μg/mL bilirubin (Wako Pure Chemical Industries, Osaka, Japan) or 50 μM rifampicin (Wako Pure Chemical Industries) and the cells were cultured for 48 hours in a CO2 incubator before total RNA extraction for RT-PCR. During the exposure to bilirubin or rifampicin, the culture medium was replaced with freshly-prepared medium containing bilirubin or rifampicin every 24 h.
RT-PCR
At the end of the culture period, the medium was removed and total RNA was extracted from human hepatocytes using TRIzol reagent (Life Technologies Corporation, Carlsbad, CA, USA) according to the manufacturer's protocols. The concentration and purity of RNA were determined spectrometrically. Reverse transcription was performed using the TaKaRa RNA PCR Kit (AMV) Ver. 3.0 (Takara Bio Co., Ltd, Shiga, Japan), according to the manufacturer's instruction. Total RNA (400 ng) was mixed with reaction buffer, 5 mM MgCl2, dNTP mixture (1 mM each), RNase inhibitor (1 U/μL), AMV reverse transcriptase XL (0.25 U/μL), and random 9-mers (2.5 μM) in a final volume of 20 μL. The reaction mixture was incubated at 30˚C for 10 min followed by 42˚C for 30 min and then heated at 95˚C for 5 min to inactivate the enzyme. PCR was carried out using the TaKaRa PrimeSTAR Max DNA Polymerase (Takara Bio Co., Ltd), according to the manufacturer's instruction. The reaction was performed in a total volume of 10 μL consisting of 2 × PrimeSTAR Max Premix, 0.3 μM forward primer, 0.3 μM reverse primer and the reverse transcription product as a template corresponding to 2 ng RNA. The amplification was performed by denaturation at 98˚C for 10 sec, annealing at an appropriate temperature for 5 sec, and extension at 72˚C for 5 sec for an appropriate number of cycles. The primers used, the annealing temperatures and the number of cycles of the PCR are listed in Table 1. The number of cycles was optimized to fall within a linear amplification range. The amplified PCR products were separated by polyacrylamide gel electrophoresis on 8% polyacrylamide gel, followed by staining with SYBR Green I Nucleic Acid Gel Stain (Cambrex Bio Science Rockland, Inc., Rockland, ME, USA) and detection using a ChemiDoc XRS plus (Bio-Rad Laboratories, Inc., Hercules, CA, USA). Quantification of the target gene band was performed with Quantity One (Bio-Rad Laboratories, Inc.). To standardize the amount of sample, the calculated amount of the gene of interest was divided by the calculated amount of the constitutively expressed glyceraldehyde-3-phosphate dehydrogenase (GAPDH) gene in the sample. These normalized amounts were then used to compare the relative amount of target mRNA between different samples.
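The normalization described in the last two sentences is a simple ratio computation; the sketch below illustrates it with hypothetical band intensities (the values and function names are invustrative inventions, not measured data):

```python
def normalized_expression(target_band, gapdh_band):
    """Band intensity of the gene of interest normalized to GAPDH."""
    return target_band / gapdh_band

def fold_change(treated_norm, vehicle_norm):
    """Fold change of normalized expression versus the vehicle control."""
    return treated_norm / vehicle_norm

# hypothetical intensities: a 1.7-fold CYP2A6 induction, as reported below
# for 40 ug/mL bilirubin, would arise from values like these
treated = normalized_expression(850.0, 500.0)  # -> 1.7
vehicle = normalized_expression(500.0, 500.0)  # -> 1.0
print(fold_change(treated, vehicle))           # -> 1.7
```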
Statistical Analysis
Data were expressed as mean ± standard deviation (SD). Statistical analyses and significance testing were performed using one-way ANOVA followed by Dunnett's multiple comparison test. In all comparisons, p < 0.05 was considered statistically significant.
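A minimal sketch of this analysis pipeline, using hypothetical triplicate values (scipy.stats.dunnett requires SciPy 1.11 or later):

```python
import numpy as np
from scipy.stats import f_oneway, dunnett

# hypothetical GAPDH-normalized mRNA levels for three treatment groups
vehicle  = np.array([1.00, 0.95, 1.05])
bil_1ug  = np.array([0.98, 1.03, 0.99])
bil_40ug = np.array([1.65, 1.72, 1.73])

print(f_oneway(vehicle, bil_1ug, bil_40ug))           # one-way ANOVA
result = dunnett(bil_1ug, bil_40ug, control=vehicle)  # Dunnett vs. control
print(result.pvalue)  # per-group p-values; p < 0.05 counts as significant
```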
Results
In order to investigate the influence of bilirubin on the metabolic activity of CYP2A6 and CYP3A4, mRNA of these enzymes was measured in the human hepatocytes after treatment with 1 μg/mL bilirubin, corresponding to the physiologically normal level, or 40 μg/mL bilirubin, corresponding to hyperbilirubinemia, for 48 hours. As a positive control, hepatocytes were also treated with 50 μM rifampicin. As shown in Figure 1(a), CYP2A6 mRNA was not induced by 1 μg/mL bilirubin but was induced 1.7-fold by 40 μg/mL bilirubin. In the case of CYP3A4, neither 1 nor 40 μg/mL bilirubin induced mRNA, while 50 μM rifampicin induced it (Figure 1(b)).
To further explore the influence of bilirubin on other hepatic metabolic enzymes, mRNA of CYPs expressed in human liver was measured (Figure 2). As shown in Figure 2(a), CYP1A2 mRNA was increased to 1.4-fold of the vehicle control by 40 μg/mL bilirubin, although the difference did not reach a statistically significant level (p = 0.0528). Compared to 1 μg/mL bilirubin, it was increased 1.8-fold by treatment with 40 μg/mL bilirubin. mRNA levels of other CYP enzymes did not change with bilirubin treatment. Rifampicin induced mRNA of CYP2B6 and CYP2C8.
Because bilirubin is conjugated with glucuronic acid by several UGT enzymes in humans, the influence of bilirubin on mRNA levels of UGTs was examined as possible self-regulation of its metabolism (Figure 3). None of the UGT enzymes investigated in the present study was induced by the bilirubin treatment. In the case of UGT1A6, which is one of the bilirubin-conjugating enzymes, however, the mRNA level decreased to 70% of the vehicle control with 1 μg/mL bilirubin and recovered to the vehicle control level with 40 μg/mL bilirubin (Figure 3(c)).
In order to explore the induction mechanisms of metabolic enzymes by bilirubin, the influence of bilirubin on CAR, PXR, RXRα and HNF-4α, which are reported to contribute to CYP2A6 induction, was investigated (Figure 4). CAR mRNA did not change with the bilirubin treatment. PXR, RXRα and HNF-4α tended to decrease with 1 μg/mL bilirubin and recovered to the control level with 40 μg/mL bilirubin.
Discussion
In our previous study, the positive correlation between bilirubin level and CL/F of letrozole was indicated by population pharmacokinetic analysis [5]. Because letrozole is mainly eliminated through oxidative metabolism by hepatic CYP2A6 and CYP3A4 followed by glucuronidation in humans [6], the influence of bilirubin on mRNA of CYP2A6 and CYP3A4 as well as UGTs was examined. The current study showed that mRNA expression of CYP2A6 was induced by bilirubin in a concentration dependent manner. The treatment of the hepatocytes with 1 μg/mL bilirubin did not change the mRNA of CYP2A6, but 40 μg/mL bilirubin resulted in a 1.7-fold increase compared to the vehicle control. In the case of CYP3A4, neither 1 nor 40 μg/mL bilirubin induced mRNA. Regarding UGTs, no influence was observed with bilirubin treatment except for UGT1A6, which was decreased to 70% of the vehicle control by 1 μg/mL bilirubin and recovered to the vehicle control level by 40 μg/mL bilirubin. Thus, UGT1A6 may be induced by elevated bilirubin concentrations under physiologically relevant conditions. These results suggest that the higher CL/F of letrozole in patients with elevated bilirubin is attributable to induction of CYP2A6 by bilirubin. Also, induction of UGT1A6 may contribute to the raised CL/F of letrozole. Recently, Abu-Bakar et al. [18] reported that, in HepG2 cells, bilirubin upregulates CYP2A6 activity not by mRNA induction but by protein stabilization. Therefore, increased CL/F of letrozole in patients with high bilirubin levels may also be attributable to CYP2A6 protein stabilization by bilirubin. Although no mRNA induction was reported in that study, it must be noted that enzyme activities and protein and mRNA expression levels of several CYPs, such as CYP2A6, CYP3A4 and CYP2D6, are remarkably low in HepG2 cells compared to primary human hepatocytes, and the induction profile with typical inducers is quite different from that of human hepatocytes [19,20]. Therefore, the observation contradicting our study is likely due to the difference in the cells used in the examinations.
Rifampicin is well known as an agonist of PXR and induces CYP3A4 by activating it [21][22][23]. CYP2A6 is also reported to be induced by rifampicin via PXR. Actually, CYP3A4 and CYP2A6 were induced by rifampicin treatment in the current study.
Induction of CYP2B6, CYP2C8, UGT1A1 and UGT1A3, which are reported to be regulated by PXR [21][22][23][24][25][26][27], by rifampicin was also observed. However, these enzymes were not induced by bilirubin in the present study. The induction profile of the enzymes by bilirubin observed in the study was different from the profile of rifampicin, suggesting that induction of CYP2A6 by bilirubin may not be caused via activation of PXR.
Bilirubin is known to cause translocation of CAR from hepatocyte cytoplasm to the nucleus and induces UGT1A1, which is responsible for the glucuronidation of bilirubin [28]. Regression analysis among several metabolic enzymes and transporters suggests that CAR regulates gene expression of CYP2A6 [29]. Also, it was reported that a dimer of CAR and retinoid X receptor-α binds to the CYP2A6 gene [14]. Thus, it can be speculated that elevation of the bilirubin level in plasma induces CYP2A6 activity via CAR translocation and accordingly increases letrozole clearance. CYP2B6, which is known to be induced by phenobarbital through CAR activation [17,30], was somewhat induced by treatment with the high bilirubin concentration compared to the lower concentration. However, mRNA of other enzymes known to be regulated by CAR, such as CYP2C8, CYP2C9, CYP2C19, CYP3A4 and UGT1A1, was not induced by bilirubin in our study. CYP2A6 may be more responsive to CAR because CYP2A6 hydroxylates a variety of steroid hormones, including androgens and estrogens, and because CAR is activated by estrogens but inactivated by androgens [12]. Therefore, treatment with bilirubin at concentrations higher than the 40 μg/mL used in the current study may be required to induce those enzymes other than CYP2A6. Actually, it was reported that UGT1A1 was induced by 50 μg/mL or higher bilirubin but not by 30 μg/mL or lower bilirubin in a reporter gene assay in HepG2 cells, suggesting a threshold around 30 to 50 μg/mL [31].
Bilirubin is also known to be a ligand of the human aryl hydrocarbon receptor (AhR) [32,33]. Actually, mRNA of CYP1A2, which is known to be regulated by AhR [4], tended to be increased by bilirubin treatment, although the effect was not statistically significant. It was reported that UGT1A6 is also inducible by AhR activation [34]. In line with that report, mRNA of UGT1A6 was increased after treatment with 40 μg/mL bilirubin compared to 1 μg/mL bilirubin in our study. However, the contribution of AhR activation by bilirubin to the induction of CYP2A6 is not clear, since involvement of AhR in the gene regulation of CYP2A6 is not known.
As shown in Figure 4, PXR, RXRα and HNF-4α tended to decrease with 1 μg/mL bilirubin and recovered to the control level with 40 μg/mL bilirubin. These results suggest that, under physiologically relevant conditions, PXR, RXRα and HNF-4α may be induced by elevated bilirubin concentrations. However, induction of CYP2A6 by bilirubin is unlikely to occur through induction of these nuclear receptors' expression, because the changes in their mRNA levels with bilirubin did not correspond to the change in CYP2A6 mRNA.
In order to elucidate the induction mechanism of CYP2A6 by bilirubin, further studies, e.g., knockdown or overexpression of the nuclear receptors [15,16], are required. Because CYP2A6 is well known for its genetic polymorphism [35][36][37][38] and is involved in the metabolism of drugs with narrow therapeutic windows, such as tegafur and cyclophosphamide [39,40], differences in the influence of bilirubin among genotypes are of interest.
In summary, we present here an unprecedented finding of CYP2A6 induction by bilirubin in human hepatocytes. In addition, we revealed the influence of bilirubin on CYPs, UGTs and nuclear receptors. Although the induction mechanism of bilirubin for CYP2A6 cannot be fully clarified, it does not appear to occur through PXR activation or through induction of CAR, PXR, RXRα and HNF-4α expression. These findings can be utilized as a tool to predict drug metabolizing capability in patients with hyperbilirubinemia. | 2017-10-24T07:40:54.074Z | 2013-04-11T00:00:00.000 | {
"year": 2013,
"sha1": "6279d55d0771303b459c7ed8f5ea86d019b84c55",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=29747",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6279d55d0771303b459c7ed8f5ea86d019b84c55",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
54537652 | pes2o/s2orc | v3-fos-license | Using Seasonal Climate Forecasts to Guide Disaster Management: The Red Cross Experience during the 2008 West Africa Floods
1 Johns Hopkins University School of Advanced International Studies (SAIS), 1717 Massachusetts Avenue, N. W., Office 715, Washington, DC 20036, USA 2 Red Cross/Red Crescent Climate Centre, The Hague, The Netherlands 3 International Research Institute for Climate and Society (IRI), Palisades, NY, USA 4 UNISDR Africa Regional Office, Nairobi, Kenya 5 African Center for Meteorological Applications to Development (ACMAD), Niamey, Niger
West Africa's Vulnerability to Climate Shocks.
Seasonal-to-interannual variability of the climate system has major impacts on the populations of West Africa, one of the world's lowest-income regions. Here, 75% of the active population is employed in a rain-fed agricultural sector [1], which is highly climate sensitive. Only 2 percent of the total cultivated land in West Africa is irrigated or under some other form of water management, the remaining 98% being rain fed [1]. In countries such as Niger or Burkina Faso, up to 92% of the active population is employed in the rain-fed agricultural sector [1]. A growing majority of the population also lives in ill-planned urban shantytowns built on flood plains, where they settled during the prolonged Sahelian drought period from the early 1970s to the late 1980s [2,3]. The droughts drove peasants out of the countryside and into unplanned periurban settlements where functioning drainage systems are rare, leaving residents vulnerable to flooding [4,5]. Fifteen of West Africa's seventeen countries lie at the bottom of the human development ladder, according to the United Nations Development Program Human Development Index ranking, and are classified among the twenty-two poorest countries in the world [6].
Against this context of high exposure to climate variability and low coping capacity, even slight changes in temperatures and expected rainfall patterns can affect vast numbers of vulnerable people. Thus, when natural hazards occur in the region, they tend to cause disasters that increase poverty further. Disasters result from the combination of naturally driven hazards, human-induced conditions of vulnerability, and insufficient capacity or measures to reduce the potential negative consequences of risk [7].
Science-Based Forecasts to Benefit the Most Vulnerable.
The extreme vulnerability of West Africa to climate variability makes the region an ideal potential beneficiary of the type of seasonal climate information provided through the Regional Climate Outlook Forums (RCOFs; [8]). The Prévisions Saisonnières en Afrique de l'Ouest (PRESAO), West Africa's Seasonal Outlook Forum, was established in 1998 and has occurred annually each May to provide a consensus forecast for the coming July-September rainfall season. The PRESAO brings together scientists and hydrologists from National Meteorological and Hydrological Services (NMHSs), and regional and international climate centers to discuss and agree on the forecast for the July-August-September (JAS) rainy season over West Africa. This consensus-based forecast issued at the end of the PRESAO forum is considered an authoritative voice on conditions most likely to prevail over the upcoming JAS season in West Africa, Cameroon, and Chad.
In the past, seasonal forecasts have been underutilized for many reasons. Institutionally, there had been no sustained dialogue between climate institutions and humanitarians in the region. Culturally, a shift was needed from a mindset of disaster response to one of preparedness and early action. Financially, convincing donors to fund preparedness activities for predicted events that were only probable, not certain, was a challenge. Technically, information provided by meteorologists was largely incomprehensible to regional decision makers, disaster managers, and Red Cross volunteers. Scientifically, the information was often not salient to these users' information needs and was given in terms of probabilities to reflect uncertainty inherent in the forecast, requiring new strategies for taking decisive action.
Not unique to West Africa, this disconnect between forecasters and humanitarians partly explains why vulnerable populations worldwide continue to be impacted by predictable natural hazards, as illustrated, for instance, by the 2005 famine in Niger, cyclone Nargis in Myanmar, and hurricane Katrina in the United States [9]. Despite the significant potential of early information about likely climatic hazards to aid vulnerable groups in better coping with climate variability, save lives, and preserve livelihoods in this highly climate-sensitive region, across Africa there are only a few haphazard instances of successful transmission and use of available climate and weather forecasts, and other climate risk management tools by policy makers and communities at risk.
Challenges to the Uptake of Seasonal Climate Information in Africa.
Regional climate outlook forums (RCOFs) are the chief means through which seasonal climate forecasts are developed for subregions within Africa. RCOFs are held on an annual or biannual basis in advance of the rainy season, with the seasonal forecast usually developed for a 90-day climate window and at national or regional scales. Forecasts are expressed in probabilistic terms, reflecting the likelihood of below-normal, normal, or above-normal rainfall. Following the rainy season, some simple verification analyses are performed, but end-user communities are typically not included in retrospective analyses to understand the use and value of preseason information.
The top-down information flow that characterizes the RCOF process tends to preclude input from intended beneficiaries as to how forecasts can best be translated to address specific societal needs, or how to develop knowledge packages that bundle climate predictions with information about appropriate remedial actions or other livelihood priorities [10][11][12]. This reliance on RCOFs as the predominant vehicle for seasonal forecast dissemination has proven inadequate for reaching vulnerable populations and being integrated into their decision making processes [11].
Patt and Gwata [13] delivered seasonal forecasts to farmers in four villages in Zimbabwe, one in each of the country's four predominant natural regions, between 2000 and 2001. Their experience shows that the factors constraining use of the forecasts by the farmers include: (i) credibility (communities do not trust the message), (ii) legitimacy (lack of trust in the people who deliver the message, that is, the messengers), (iii) scale (the need for better geographic and temporal downscaling of region-wide seasonal forecasts), (iv) cognition (if users do not understand a forecast, they will not use it), (v) procedures (a forecast that arrives too late, after farmers have already purchased their crops and fertilizers for the season, is not salient to their needs), and (vi) choices (when forecast does not contain enough new information, decisions will not be changed based on a forecast).
Thus, the causes of limited dissemination and use of seasonal information across Africa appear to be as follows: (a) lack of outreach to key stakeholders at the national and subnational levels, such as disaster management agencies, public health officials, community-based humanitarian organizations, and water managers, including vulnerable groups, such as urban slum dwellers, farmers, and fishermen; (b) in line with the lack of outreach to key stakeholders, the absence of communication systems to communicate hazard alerts to communities at risk; (c) forecasts in their current form are insufficiently relevant to information needs and decision making timelines (non-salience); (d) inability to act on forecasts; (e) lack of trust in seasonal forecasts.
Can these systemic constraints to seasonal climate information use be overcome? If so, what is required to spur decision makers to action based on ex-ante seasonal predictions?
In the following section, we review each of these constraints and then proceed to analyze the no-regret strategies carried out by the International Federation of Red Cross and Red Crescent Societies (IFRC) ahead of the 2008 floods in West Africa, which substantially helped to address the constraints to seasonal forecast use for disaster prevention. These strategies included mobilizing funding ahead of flood occurrence, hiring an in-house intern to translate scientific seasonal climate information into lay language operational for decision making, prepositioning disaster relief items across the region in the likely event that the forecast materialized, securing partnerships with regional climate production centers, as well as training additional staff and communicating the seasonal forecast through trusted Red Cross branches and community volunteers at the national and subnational levels.

Poor Accessibility of Forecasts.
Forecasts rarely reach the communities that most need them [4,14]. Community radios constitute an effective means of reaching remote communities with information, but they are only seldom used to ensure that forecasts reach vulnerable communities in the region [15,16]. The lack of operational community-level relays of climate information, media outlets, and information-sharing systems that ensure the trickling down of climate information to the communities that most need it further constrains the ability of communities to access forecasts. For example, in Southern Africa, available forecast information does not specifically target vulnerable groups, and so the information either does not reach the community level at all or fails to reach the more marginalized groups [10,[17][18][19][20].
Poor Communication of Forecasts.
Issues relating to language, content, and format of forecasts compound the poor accessibility of climate information. Indeed, these aspects of the forecast are not adequately considered to ensure forecast comprehension by community-level users [10,14,17,19]. In Southern Africa, for example, very few NMHSs translate their forecasts beyond English, potentially excluding the most vulnerable yet important sectors of the target population (e.g., farmers, pastoralists, fishermen, and urban slum dwellers) from receiving and being able to use the forecasts [10]. Also, the probabilistic nature of seasonal forecasting is prone to misinterpretation and confusion if probabilities are translated into deterministic statements and warnings or are otherwise manipulated.
Nonsalience of Current Forecasts (Scientific Barrier).
The current content and mode of climate information do not effectively address many of the concerns of community-level stakeholders, due to present limits in climate forecasting science. This lack of relevance, or specificity, in seasonal forecasts has multiple dimensions, including issues of poor spatial resolution of forecasts with respect to local-scale decision making needs, and an absence of information about intraseasonal rainfall distribution, as well as about climate parameters other than rainfall. Improving the local specificity of climate information in Africa will require greater investments in infrastructure to support hydro-meteorological applications. In Africa, the density of meteorological stations is about eight times lower than the minimum recommended by the World Meteorological Organization (WMO); many of these stations are nonfunctional, and governments have failed to invest in equipment and trained personnel [21]. While rainfall is the chief concern for many end users, providing information about other climate parameters, such as relative humidity and temperature, would help countries to monitor soil moisture conditions in cropping areas and protect particular commodities, such as livestock, if temperatures exceed critical thresholds. For example, Ingram et al. [22] found that farmers in different agroecological zones in Burkina Faso were interested in receiving seasonal precipitation forecasts, but they were much more interested in receiving forecasts of when the rains would start and end, and whether there would be interruptions in rains.
Also related to insufficient forecast specificity is the apparent or potential lack of relevance of seasonal climate forecasts in some situations. For example, contradictions can exist between climatologically and agronomically optimal windows for seeding crops in high-risk farming environments. While the former relies on evidence from large-scale weather and climate dynamics to determine the "safe" start of the growing season, the latter considers effects of nitrogen leaching, weed competition, pest pressure, and seedling damage from heavy precipitation in deciding when to begin cultivation [23].
Low Capacity to Act on Forecasts.
Climate information must compete with other livelihood demands, and while there may be sound climate-related reasons for heeding a forecast, there are often equally compelling reasons not to, particularly where resources or capacity are inadequate for effective action. The extent to which knowledge and information are acted upon at the local level depends upon perceptions of risk from current and future hydrometeorological hazards, as well as the influence that the array of nonclimate factors bring to bear on the risk calculus. Grothmann and Patt [24] describe how farmers in Zimbabwe, given a forecast of below-normal rainfall, still chose to grow maize over millet when the potential crop loss risks from drought were weighed against the substantial risks that might ensue from not growing maize, given the extent to which institutional, societal, and market forces were aligned in favor of maize production.
If farmers lack sufficient access to inputs, land, equipment and other capital, and credit, then they generally cannot effectively apply climate information, as in Southern Africa [18,19]. Similarly, in Burkina Faso, farmers need greater access to basic agricultural technologies, such as plows, new crop varieties, and fertilizers, before they could benefit fully from precipitation forecasts [15]. Bundling capacity building efforts for seasonal climate forecast use with timely access to credit or agricultural production inputs can help lower the threshold for acting on climate forecast information. The dissemination of the forecasts should be part of an extension package that includes a discussion of the probabilistic nature of the forecasts, potential response strategies, and risk management [22]. Social constraints, for example, the inability of younger decision makers to bypass family hierarchy in changing decisions about resource allocation and crop choice for a given season, can also constrain the ability to act on forecasts [15].
Lack of Trust in the Forecasts.
Before communities can absorb incoming scientific information, trust needs to be built because acting on the information could mean shifting decisions and making costly, high-risk investments. Caution is justifiable given the marginality of household resources within many vulnerable communities. Building a trusting relationship between communities at risk from climate hazards and forecasters (or their messengers) is of central importance in understanding whether community-level decision makers will use received forecasts or not. One way to instill understanding of and trust in climate information is through the use of participatory workshops designed to help farmers and other target groups to better understand and use seasonal climate forecasts [22]. These workshops can improve trust and credibility of forecasts and provide an opportunity for farmers to experience repeated exposure to and become familiar with the concepts behind probabilistic forecasting, thus allowing better comprehension of what forecasts can and cannot do. Workshop participation can positively influence the anticipatory behavior of participants, through broadening their perceived range of options for the coming cultivation season, and the workshops provide spillover effects to the larger community as participants share information with nonparticipants.
An example from Mali illustrates well the importance of participatory processes in communicating climate information to community-level stakeholders. The Direction Nationale de la Météorologie du Mali, the country's NMHS, has been transmitting detailed agrometeorological information to groups of farmers since 1983. They have found that the participatory processes developed to provide agrometeorological advice to farmers have aided understanding of how to provide the information in a comprehensible and suitable manner, such that it is congruent with traditional practices or agricultural calendars [16]. Forecasters at Mali's national meteorological agency found that they could not hold stakeholder meetings in all the villages where demand for agrometeorological advice was increasing, so they gradually scaled up the project while heavily relying on rural radios and dispensing advisories for wider regions and for each different crop type. Figure 1 summarizes the bottlenecks that have traditionally constrained the use of climate forecasts in Africa, and their trickling down to communities most at risk. Despite these barriers to the use of climate information, in 2008, the seasonal forecast for West Africa was utilized by a regional humanitarian organization, the IFRC office of West and Central Africa, to plan its disaster response operations for the flood season, and as a result, lives and livelihoods were preserved in the region. In the next sections of this paper, we attempt to understand what changed in 2008 to enable the use of seasonal forecast information by a decision maker in the region, and how this year differed from just a year prior when flood disasters affected West Africa.
Methods
In order to understand the processes that triggered IFRC's utilization of the 2008 seasonal forecast, we undertook a number of process tracing steps. Firstly, we compared IFRC's disaster management strategies in 2008 with those in 2007, when the signals of probable flood conditions were not used to trigger preparedness measures. Secondly, we traced the processes that helped build capacity within IFRC (the largest humanitarian organization in the world), and particularly in its West and Central Africa Zone Office (WCAZ), to understand and use science-based forecasts leading up to 2008. Then, through interviews with all of the actors involved in IFRC WCAZ's 2008 disaster management process, we proceeded with an events analysis of the organization's disaster preparedness and planning before the flood season in 2008 in West Africa, including the forecast-based decisions made by the IFRC-WCAZ. Finally, reports and news clips enabled the characterization of the final outcome of the 2008 rainy season, as well as the analysis of the achievements, and limitations, of IFRC's forecast-triggered disaster risk management strategies.

IFRC's Disaster Management in 2007.
The PRESAO consensus forecast for the 2007 rainy season indicated, among other conditions:

(ii) an equal likelihood of normal and above-normal rainfall conditions throughout the rest of the Sahel, with associated probabilities of 0.4 and 0.4;

(iii) most likely normal conditions only in the Gulf of Guinea countries, with a probability of occurrence of 0.5 [27].
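Probabilistic tercile forecasts like the one above can be turned into a concrete go/no-go preparedness decision. The sketch below illustrates one standard way of doing so, the cost-loss rule; the rule, the cost and loss figures, and the zone probabilities are illustrative assumptions, not the procedure IFRC used:

```python
def trigger_preparedness(p_hazard, cost, loss):
    """Act when the forecast probability of the hazardous category
    exceeds the cost-loss ratio (the classical cost-loss rule)."""
    return p_hazard > cost / loss

# hypothetical figures: prepositioning relief stocks costs 1 unit and averts
# 5 units of flood losses, so acting pays off whenever P(hazard) > 0.2
for zone, p_above_normal in [("zone A", 0.40), ("zone B", 0.15)]:
    action = ("preposition stocks"
              if trigger_preparedness(p_above_normal, 1, 5) else "monitor")
    print(zone, "->", action)
```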
When the forecast above-normal rainfall conditions did indeed materialize and severe floods occurred, the humanitarian community responded in emergency response mode, lacking advance knowledge of when and where floods were more likely to occur. The 2007 floods claimed more than 300 lives across West Africa and occasioned severe damage to crops, homes, and infrastructure [28]. The Red Cross did not access the PRESAO forecast in 2007 or during previous years, due to the weak-to-nonexistent communication lines between climate forecasting centers (at the regional and national levels) and the Red Cross presence in the region (regional-level disaster planners and community-level volunteers). On the one hand, regional climate forecasting institutions such as the African Centre of Meteorological Applications for Development (ACMAD) and AGRHYMET, the regional authority for meteorological applications to food security, duly produced their forecasts and sent them to their national relays, the NMHSs. On the other hand, the Red Cross continued to respond to weather-related disasters as it had always done, not using existing climate information (which they often did not know existed) as input into their decision making and contingency planning at the onset of the rainy season.
Natural hazards thus consistently became disasters despite the fact that (a) scientists in the region could anticipate their likely occurrence and (b) the humanitarian sector, and the people they serve, had the capability to act on the ground to thwart losses. The early warning was not turned into early action.
Building Capacity within IFRC-WCAZ to Understand and Use Science-Based Forecasts.
Over the last few years, various processes contributed to improving the Red Cross' capability to link climate science with humanitarian work. These were encouraged by a growing momentum at the global level within the IFRC to build preparedness in the face of the humanitarian challenges of climate change, and included the following.
Growing Role of Climate Information in the Red Cross/Red Crescent Movement.

In a context of changing climate risks, the Red Cross/Red Crescent Climate Centre was created in 2002 with the mission of supporting the Red Cross and Red Crescent movement to understand and address the humanitarian consequences of climate change and extreme weather events. The Centre's main approach is to raise awareness, advocate for climate adaptation and disaster risk reduction, and integrate knowledge of climate risks into Red Cross/Red Crescent strategies, plans, and activities. By 2008, a total of 39 Red Cross/Red Crescent national societies, including 14 from Africa, had joined the "Preparedness for Climate Change" program. Instead of only focusing on climate change information based on scenarios for the end of the century, the program emphasized the need to link with forecast providers to improve decisions on all timescales and at various geographic levels, from continental to community level.
In 2007, the 30th International Conference of the Red Cross and Red Crescent (its highest governing body, composed of 186 national societies and national governments) committed to improving capacity to respond, including through better disaster preparedness, and to integrating climate risk management into policies and plans. Such high-level commitment enabled an easier uptake of offers to explore options for forecast-based humanitarian decisions. The disaster management coordinator for the IFRC West and Central Africa Zone Office attended and organized various workshops, co-organized by the Climate Centre, aimed at integrating climate science into disaster risk management.
First Steps in Science-Humanitarian Dialogue in West Africa.

Three noteworthy changes occurred in 2008 to initiate a dialogue between Red Cross staff and climate forecasters, and to enable the transmission and use of the 2008 forecast.
(i) A new partnership between the IFRC and the International Research Institute for Climate and Society (IRI) at the global level enhanced the capacity of the humanitarian organization to understand science-based forecasts. Through this partnership, the IFRC-WCAZ office was provided with ready access to expertise that could explain and interpret climate information. A help desk was established that allowed for timely and reliable responses to questions about forecasts or recent climate anomalies, and an intern went to the IFRC-WCAZ in the summer of 2008 to assist disaster managers in better understanding and incorporating climate information into disaster planning.
(ii) In 2008, the Red Cross Disaster Management Coordinator for West and Central Africa became enthusiastic about initiating a dialogue with climate scientists regionally and attended PRESAO-11, the first time a disaster manager had attended this scientific forum. Building on this initial contact, IFRC-WCAZ spearheaded a drive for further outreach to the climate science community.
(iii) The severity of the 2007 floods, and memory of the frenzied response operations that followed, put Red Cross disaster planners in a favorable disposition towards increased preparedness.
When the 2008 seasonal forecast reached the IFRC-WCAZ and warned of a heightened risk of above-normal rainfall, and this information was translated into language that they could understand, Red Cross disaster managers set in motion early action strategies that appear to be common sense but had never been implemented in the past.
Contents of the Consensus-Based Forecast: Cause for Concern.

In May 2008, the consensus-based forecast issued at the end of the PRESAO announced that the JAS rainy season over West Africa had enhanced probabilities of above-normal rainfall, implying that the region was likely to experience increased risk of heavy rainfall events [25]. The PRESAO forecast, issued on May 21st 2008 by ACMAD, indicated increased probabilities for higher than normal rainfall throughout much of the Sahel belt from Senegal to Cameroon (probabilities of 0.45 and 0.50 in Zones I and II, respectively; see Figure 2). Against a historical probability of 0.33, these probabilities for above-normal rainfall were unusually high and warranted attention. An updated seasonal climate forecast issued by ACMAD on June 27th reaffirmed the forecast issued a month earlier of high probabilities of rainfall higher than normal, and very low probabilities of below-normal rainfall over the region. ACMAD concluded its seasonal forecast update by stating that if forecast conditions persisted, the risk of flood disasters and water-related diseases could be higher than usual that year, requiring "strengthened weather watch, monitoring, and warning" for sectors like health, food security, water resources, and civil protection [25].
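To make concrete why these numbers stood out, the short sketch below (not part of the original analysis; the probabilities are the PRESAO figures quoted above) compares the forecast against the climatological tercile baseline of one in three:

```python
# How unusual were the 2008 PRESAO probabilities? With three equally
# likely tercile categories (below / near / above normal), climatology
# assigns an above-normal season a probability of 1/3.
climatology = 1 / 3
forecast = {"Zone I": 0.45, "Zone II": 0.50}  # PRESAO, 21 May 2008

for zone, p in forecast.items():
    prob_ratio = p / climatology                                    # vs. baseline
    odds_ratio = (p / (1 - p)) / (climatology / (1 - climatology))  # odds vs. baseline
    print(f"{zone}: P(above normal) = {p:.2f} "
          f"({prob_ratio:.2f}x climatology, odds ratio {odds_ratio:.2f})")
```

For Zone II this works out to 1.5 times the climatological probability, or double the climatological odds, which is what made the signal hard to ignore.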
The International Research Institute for Climate and Society (IRI) seasonal precipitation forecast for Africa for the July-September period echoed the same warnings, indicating probabilities for above-normal rainfall for July-September similar to those of the PRESAO forecast, albeit with differences in the geographic areas at risk along the northern and southern frontiers of the Sahel and in the months covered. The IRI forecast map for extreme seasonal rainfall (more than the 85th percentile) for June-August gave an even stronger message, signaling Senegal, the Gambia, and neighboring countries as the only areas in the world with "highly enhanced" probabilities of rainfall extremes (see Figure 3).
As of May 2008, all the main sources (national, regional, international) provided a unanimous message that most parts of West Africa were more likely to experience higher than usual rainfall over the upcoming season. Although all these seasonal forecasts referred explicitly only to the seasonally integrated rainfall, and not to the individual heavy rainfall events that are the primary cause of flooding, it is reasonable to assume that the actual risk of flooding would increase during a year in which the total rainfall is expected to be unusually high. (Further research on the extent to which this assumption is valid is clearly warranted.)
Communication of the Forecast to IFRC-WCAZ.
When the seasonal forecast reached the IFRC Regional Office in Dakar, the first challenge consisted of helping decision makers at IFRC-WCAZ understand the contents and implications of the PRESAO forecast. The intern from IRI played an instrumental part in this assistance and served as in-house translator of the received seasonal forecast maps. The 2008 seasonal forecast map was largely incomprehensible to the IFRC regional disaster managers because of the technical language used, the absence of detailed, clear commentary, and the lack of clarity on how the forecast information might be relevant to their humanitarian work. To this end, an all-staff climate briefing was organized in early June by the IFRC-WCAZ Disaster Management Unit to share the PRESAO forecast with all operational departments at IFRC-WCAZ. During this briefing, Figures 3 and 4 were shown and explained by the intern. Contrary to expectations, the probabilistic nature of the forecast did not pose any challenges to the Red Cross disaster planners; they welcomed it as an approximate answer to their question of whether or not disasters were going to occur. Even though it did not tell them with certainty whether disasters would take place, it still provided them with a rational and scientifically informed process for making key planning and resource allocation decisions under uncertainty, one which was coherent with the way they usually operated. Indeed, as disaster planners, they were accustomed to preparing their relief interventions based on contingency plans and various scenarios for intervention at the beginning of the rainy season. In the past, however, none of these plans had been informed by seasonal climate forecasts.
First Pre-Emptive Appeal in IFRC History Based on a Seasonal Forecast.

Based on the PRESAO July-September 2008 seasonal forecast and the IRI June-August seasonal forecast, the IFRC drafted a zonal Flood Contingency Plan and issued a funding request to the Disaster Relief Emergency Fund (DREF), an internal Red Cross funding source. This appeal requested a total of CHF 298,376 (USD 284,167) in order to prepare for the heightened risk of floods in the region. The IFRC-WCAZ explained in this request its worry that catastrophic floods were looming, drawing upon the ACMAD and IRI forecasts as its sources of information, and explaining that, while not certain, the high probabilities still warranted action and advance preparation. This DREF request was granted. A preliminary emergency appeal for flood preparedness in West and Central Africa was then issued on July 11th, requesting USD 730,000 in contributions from humanitarian donors to fund preparedness activities throughout the region [29]. Both PRESAO and IRI maps were again included in this appeal as justification for the need to get ready and prepare for forewarned floods. It was clearly explained therein that the forecasts were in terms of probabilities. This was the first time in the history of the Red Cross movement that funds were requested in advance to prepare for an emergency based on seasonal forecast information. It constitutes a positive instance of climate information duly transmitted and acted on.
Unfortunately, donors did not commit in time: funds from the preliminary appeal did not arrive until late August, after flood disasters were already underway. However, the IFRC-WCAZ was still able to use funds immediately available from the DREF to preposition emergency stocks in Dakar, Accra, and Yaoundé [30].
Prepositioning of Relief Items.
Once funds were made available, the IFRC-WCAZ began the prepositioning of non-food relief items (blankets, mosquito nets, soap, bottles, tents, etc.) in Dakar (Senegal), Yaoundé (Cameroon), and Accra (Ghana) to benefit up to 9,500 families in the event of flooding. Under a typical ex-post response scenario, these items would have been flown in from the IFRC's Dubai warehouse once floods arrived, or procured separately, leading to a more time-consuming and costly shipment of relief supplies.
This "no-regret" approach was the strategy used by IFRC-WCAZ to address the uncertainty inherent in the seasonal forecast. Indeed, by prepositioning only non-perishable items that could be reused during future flood events, disaster managers at IFRC-WCAZ were able to minimize their potential losses and justify their ex-ante use of funds without complete certainty of the occurrence of a disaster.
Training of Additional Red Cross Disaster Management Personnel.

Following the forecast of a high-risk rainy season, in early July the IFRC-WCAZ trained twelve leaders of Regional Disaster Response Teams (RDRTs), to be deployed to the field within 48 hours of a disaster in the region to coordinate relief operations and conduct a rapid assessment of damage and needs. These RDRT leaders were specifically trained in understanding medium-term weather forecasts accessible online via the IFRC's Disaster Management Information System, so that they would be able to monitor rainfall throughout the season and write national flood contingency plans using real-time meteorological information. Additionally, at the end of the training, all RDRT leaders were provided with travel insurance (rendering them "deployable" within 24 hours of the onset of a flood disaster in the region) and were asked to prepare a flood contingency plan for their respective countries by July 15th [31].

4.5.4. Partnerships with Regional Climate Centers.

Finally, a significant achievement of the 2008 experience is the demand it created within IFRC-WCAZ for additional climate information. Building on this strong desire to reach out to regional climate research centers as a pathway to secure an inflow of reliable and trustworthy climate information at the regional level, formal partnerships were initiated and put in writing in late July with both ACMAD and AGRHYMET. Throughout the 2008 rainy season, IFRC-WCAZ reached out to these scientific partners, inquiring about likely atmospheric conditions in given countries and constantly requesting additional information to follow up on the seasonal forecast. This process of interaction culminated in the signature of a Memorandum of Understanding between ACMAD and IFRC-WCAZ, the first ever partnership accord signed between climate scientists and humanitarians in the region.
Communication of the Forecast.

Utilizing the Red Cross' extensive network of volunteers on the ground, the IFRC-WCAZ was able to share the seasonal rainfall forecast with countries and communities at risk throughout West Africa. By July, the seasonal forecast had been received by all Red Cross National Societies deemed to be at potential risk from flooding during the July-September rainy season (countries located in Zones I and II, Figure 1). Varying levels of capacity of these national societies, however, hindered or facilitated the distribution of the forecast to local Red Cross branches at district and village levels. In many instances, Red Cross volunteers serving as community relays of forecast information were able to take the information to people at risk in effective ways, using distribution channels as innovative as transport buses (Togo Red Cross), cellular phone text messages (Burkina Red Cross), and word of mouth.
Outcome of the 2008 Rainy Season: Heavy Rains and Reduced Losses

4.6.1. Rainfall during the 2008 Season.

Heavy rainfall events occurred throughout the July-September rainy season in West Africa. Although the forecasts are probabilistic in nature and so cannot be properly verified considering only this single case, there are grounds for arguing that because the forecasts indicated high probabilities of above-normal rainfall, they were "good." The IFRC-WCAZ reinterpreted the forecast in terms of risk of flooding (associated primarily with individual heavy rainfall events) within the season rather than in terms of seasonal total rainfall, and so from their perspective the season contained multiple verifications rather than a single one. In addition, for the advanced preparedness actions undertaken, precision in the location of areas of heavy rain was not necessary. The disaster managers were not concerned about whether such flooding would occur throughout the regions of increased probabilities of unusually large seasonal totals, but only that there would be an unusually large number of flooding events scattered somewhere within (or, for some decisions, somewhere near) the area of enhanced probabilities for a wet season. Figure 4 depicts the locations of flood-related disasters requiring Red Cross intervention in West Africa during the 2008 rainy season and shows some geographical coherence with the seasonal forecasts: ten of the twelve severe flood events that took place occurred in Zone I of the PRESAO forecast map, where above-normal rainfall had been predicted as the most likely outcome. Only two floods, in Togo and Cameroon, occurred in the Zone III Gulf of Guinea countries, where the probability of above-normal seasonal rains was close to climatology.
When the floods came, the Red Cross movement on the ground was already aware, informed, and ready to intervene, thanks to the preparedness measures undertaken by the IFRC Zone Office. These initiatives helped raise awareness at the national level in the countries at risk, where humanitarian actors were able to initiate national-level actions to prepare for likely flood events. In the words of Jerry Niati, Assistant Disaster Management Coordinator at the IFRC in Dakar: "In 2007, we were just being asked to do things; in 2008, we were initiating action by raising awareness and sharing forecast information" [32].
Damage and Losses Reduced.
A preliminary IFRC assessment of the results of having engaged in early action during the 2008 flood season concluded that after the initial region-wide preparations had been put in place, more direct action at the local level could be taken in response to shorter-range forecasts. Because of this early action, the following outcomes at national and community levels were possible.
(i) In Ghana, the Volta River Authority Power Company and its Burkinabe counterpart SANOBIL agreed upon a control regime to protect communities along the Black and White Volta Rivers during the 2008 rainy season. Volunteers of the Ghana Red Cross thus set out to advise fishermen not to go out on the river between August 21-23, the announced period of excess spillage from the Bagre Dam. These actions saved lives and reduced damage in August and September of that year compared to 2007 [28].

(ii) In Togo, in response to the 2008 seasonal forecast, a communication system was established to enable the circulation of information from the national Red Cross society's headquarters to focal points in the regions, districts, and communities at risk, and back. In the community of Atiegou Zogbeji, located north of Lomé, a community leader went through the flood-prone community with a loudspeaker when riverbed water levels reached dangerous levels, asking people to evacuate [33]. With just an hour and a half's notice, the population of approximately 2,000 was able to evacuate. When the floodwaters arrived, physical damage occurred, but no loss of life [28].

(iii) In response to being informed of the seasonal forecast and participating in the RDRT leaders' training, the Gambia held its own National Disaster Response Team (NDRT) training, in which volunteers and branch officers from seven different districts were trained in disaster preparedness. As a result of this training and preparation, the Gambia Red Cross proved very efficient in performing a post-flood needs assessment and submitting a funding request within two days of flooding (a process which had generally taken them several weeks after a flood event).
Across the region in 2008, most countries received needed relief supplies from the Red Cross in a matter of days after the flooding. In contrast, the year before it took on average forty days to deliver many relief items and services. A preliminary quantitative comparison between the costs of flood response alone (2006 and 2007) and the cost of flood response with Early Warning-Early Action (2008) also showed a 33% lower cost per beneficiary [28]. These assessments are only indicative, but they attest to the positive results that can be yielded when disaster planning is informed by climate science.
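As a purely illustrative aside (the actual per-beneficiary cost data are in the IFRC assessment cited above [28]), the arithmetic behind a figure like "33% lower cost per beneficiary" is simply:

```python
# Hypothetical unit costs chosen only to show the calculation; the real
# 2006-2008 figures are in the IFRC assessment cited in the text [28].
cost_response_only = 90.0      # assumed USD per beneficiary, 2006-2007
cost_with_early_action = 60.0  # assumed USD per beneficiary, 2008
reduction = (cost_response_only - cost_with_early_action) / cost_response_only
print(f"Cost per beneficiary reduced by {reduction:.0%}")  # -> 33%
```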
Discussion: Towards Systematic Early Warning-Early Action in West Africa
The Red Cross' 2008 experience with forecast-based disaster planning in West Africa clearly demonstrates the potential of climate forecasts to inform decision making and serve society. It also confirms all of the constraints identified in the literature that explain why forecasters and decision makers have not been collaborating with each other. An analysis of the Red Cross' 2008 experience, however, reveals that a number of systemic elements can be instrumental in overcoming these constraints. These are as follows (Table 1).
A Confident Forecast.
Two positive attributes of the 2008 seasonal forecasts were particularly instrumental in focusing the attention of Red Cross disaster planners and spurring them to action.
(1) The forecast was confident. The signal over West Africa in 2008 was strong enough that all the forecasting models under consideration indicated enhanced above-normal rainfall probabilities over most of West Africa. The agreement between the forecasts, combined with the strong message from the IRI "Extreme Precipitation" forecast map (Figure 4), was instrumental in making decision makers appreciate the level of urgency of the forecast.
(2) The forecast was timely. Following the attendance of the IFRC regional disaster manager at the PRESAO, the seasonal forecast was received by IFRC-WCAZ in late May, which gave the humanitarian organization a one-month lead time to trigger early preparedness actions (request funds for flood preparedness, inform communities at risk, train more disaster relief personnel, etc.).
The Use of "No-Regret" Strategies, a Useful Approach
to Act on Probabilistic Forecasts. Events in 2008 evolved in line with the most likely scenario of above normal rainfall. However, the anticipated heavy rainfall could have failed to verify and events could have unfolded contrary to expectations. Indeed, there was still a 0.5 probability of normal or below-normal rainfall, combined, to occur in Zone II, and even higher probabilities in Zones I and III. Given the large inherent uncertainty in the seasonal forecasts, one positive strategy that the Red Cross adopted to address lingering uncertainty was to implement no-regret strategies. No-regret strategies consisted of actions and interventions that did not involve the commitment of resources to emergency relief goods or services that could go to waste if no floods materialized. These included prepositioning relief items that national Red Cross disaster managers could reuse during successive years if forecast floods did not occur in 2008, as well as capacity building and training of additional Red Cross community volunteers on first aid and disaster assistance procedures, needed in any case as part of the Red Cross' daily operations during peace as well as crisis time. In this sense, whether the most likely scenario forecast materializes or not, the situation that ensues is still a win-win from the standpoint of the decision maker committing resources on the basis of the forecast. This strategy of investing in noregret initiatives (which are beneficial even in the baseline without any disaster) is an effective win-win strategy to address the uncertainty inherent in climate forecasts.
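The logic of acting on an uncertain forecast can be made explicit with the classical cost-loss model from the forecast-value literature, in which a protective action costing C is worthwhile whenever the hazard probability exceeds the cost-loss ratio C/L. The sketch below is an illustration only; the monetary figures are hypothetical and are not drawn from the IFRC appeal documents:

```python
# Minimal cost-loss sketch for acting on a probabilistic forecast.
# Taking protective action costs C whether or not the hazard occurs;
# not acting risks an avoidable loss L if it does. Expected cost is
# minimized by acting whenever p > C / L.

def should_act(p_hazard: float, cost_action: float, avoidable_loss: float) -> bool:
    """True if acting has lower expected cost than not acting."""
    return p_hazard > cost_action / avoidable_loss

p_above_normal = 0.50        # PRESAO Zone II probability, 2008
cost_preposition = 300_000   # hypothetical; roughly the order of the DREF request
loss_unprepared = 3_000_000  # hypothetical extra cost of purely ex-post response

print(should_act(p_above_normal, cost_preposition, loss_unprepared))  # True
# A no-regret action effectively lowers C (prepositioned non-perishable
# stock keeps its value even if no flood occurs), shrinking the C/L
# threshold, so acting becomes rational at even lower probabilities.
```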
Funding for Forecast-Based Early Action: A Challenge to Continued Use of Seasonal Forecasts to Trigger Emergency Preparedness.

Expanded use of forecasts to trigger emergency preparedness will require that donors be willing to support such activities. Indeed, 2008 was the first time that a seasonal-forecast-based flood preparedness appeal was issued, and given the novelty of the endeavor, the severity of the 2007 floods, as well as the urgency conveyed by the forecast, the IFRC Regional Office was able to request and secure some funds for preparedness. However, most of the limited funds only arrived when the floods were already underway. The main reason for donors' slow response may be that forecast-based preparedness falls through the cracks of the two well-established disaster management funding channels that currently exist: (a) post-disaster work (response, recovery, and reconstruction) and (b) long-term risk reduction work (often as part of regular development assistance). Most donor agencies lack mechanisms specifically designed to support humanitarian action based on forecasts on different timescales. Action based on forecasts may seem too emergency-focused to justify funding through regular development assistance channels, but too early to warrant the use of disaster relief budgets, often perceived as being meant to relieve true human suffering once a disaster occurs.
Because of this gap between funding for relief and long-term risk reduction, there is a need for a dialogue between humanitarian organizations and donor agencies on how best to support forecast-based humanitarian action. Some have suggested that the new international climate change financing arrangements could play a role in filling that gap [34]. Such dialogues should be built on strong evidence of the potential benefits of forecast-based interventions, in terms of the quality of humanitarian outcomes and the cost effectiveness of financing for humanitarian relief. Donors will probably not adequately support this early warning-early action approach unless they are provided with reliable, rigorous information showing how collaboration can lead to effective reduction of losses and/or more efficient use of scarce resources.
In 2008, the only truly rapid funding came from the Disaster Relief Emergency Fund (DREF), a special mechanism dedicated to quick response before a full appeal is launched, to smaller disasters where no appeal may be issued at all, and to starting humanitarian action for imminent (rather than only actual) disasters. Similar fund-based mechanisms, perhaps specifically aimed at action based on probabilistic forecasts rather than at responding to single disasters, seem to be a good vehicle for expanded forecast-based preparedness activities.
Packaging and Content of Climate Information Unfit for Community Decision Making.

Save for the notable example of the community of Atiegou Zogbeji in Togo, which evacuated in response to a flood early warning issued by the National Red Cross of Togo, the 2008 seasonal forecast did not trigger meaningful behavioral change beyond the regional and national levels. This is a conspicuous limitation of the 2008 use of seasonal climate information, one that suggests that, in their current form, seasonal forecasts are best able to inform decision making at the regional and national levels, but not to lead to useful behavioral change in communities at risk. The bottlenecks to climate information access and use beyond the regional and national levels remain and will require more focused efforts for climate information to be relevant and accessible to community-level stakeholders.

[Table 1, whose cell text spilled into the running text here, summarizes: what worked in 2008 (recognition that existing seasonal forecasts provide information useful to disaster managers, such as the likelihood of above-normal rainfall over the season; use of Red Cross community volunteers and solidarity groups to disseminate early warnings); remaining constraints (limited forecasting skill over West Africa relative to community-level decision needs such as rainy-season onset and within-season rainfall distribution; regional climate research not driven by the information needs of decision makers; very limited community capacity to act on seasonal forecasts; open questions about whether trust will remain if a future high-probability forecast fails to verify, and about Red Cross understanding of forecast uncertainty); and ways forward (user-driven research on threshold identification and probabilistic representation beyond terciles, additional geographic precision, and intraseasonal extremes; better explanation of the science, limits, and uncertainties of climate forecasting to build trust; participatory, sustained, and reiterated workshops explaining uses of climate information to regional, national, and community-level stakeholders; and accompanying measures that enable communities to act on warnings, adopt behavioral change, and increase their resilience to forecast hazards).]
Summary of Findings.
Most obstacles to seasonal forecast use remain in place even now. The packaging of the seasonal forecast is far from ideal, capacity to absorb information remains low, and procedures and donors are still focused more on response than prevention. In 2008, the strong, confident signal of likely above-normal rainfall and the presence of an in-house climate information translator and of an institutional champion open to innovation allowed the seasonal forecast to be transmitted and acted on by the Red Cross. These conditions will not ensure ongoing progress unless sustained efforts are maintained to turn early warnings into early actions through standard practice, not just within IFRC and other humanitarian organizations, but also in the climate-meteorological community at large and among donors. The Red Cross in West Africa used a seasonal forecast to inform its disaster planning in 2008 in ways that saved lives, preserved livelihoods, and increased the resilience of communities at risk. This experience teaches that the distance between the climate science community and humanitarians can be bridged to generate positive outcomes for vulnerable populations, provided the following exist.
(1) A community of climate information providers ready to engage with users and respond to their information needs;

(2) a drive among humanitarian workers, and more generally end users, to access and understand climate information and act upon it when there is a sufficiently strong signal;

(3) a disaster management framework (including the donor community) that facilitates a shift from response to preparedness, with mechanisms to mobilize resources for loss-reducing ex-ante measures at various timescales;

(4) community relays able to take the information to people at risk in effective ways.
This successful example of ex-ante flood management was stimulated through capacity building and institutional investments, new partnerships, and sustained user-scientist dialogues. It offers important lessons about the constraints on climate information use in Africa.
The results presented in this article are only suggestive; a thorough evaluation of the early warning-early action approach that started in 2008 is needed, one that will require longer time series and more rigorous impact assessments. But the Red Cross' 2008 flood management experience in West Africa demonstrates what can happen when a seasonal rainfall forecast that is confident and timely is duly accessed, understood, trusted, and acted on by decision makers. In so doing, it also provides a valuable illustration of the applications that can be made of probabilistic seasonal forecasts (notably through no-regret strategies), with positive impacts for communities at risk.
"year": 2012,
"sha1": "42f35575b283ff321420e5f0e6db7c4d8e02a33d",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijge/2012/986016.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "42f35575b283ff321420e5f0e6db7c4d8e02a33d",
"s2fieldsofstudy": [
"Environmental Science",
"Political Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
Mixing of Dead Sea and Red Sea waters and changes in their physical properties
The present work focuses on the changes in the physical properties of mixed Red Sea and Dead Sea waters, including temperature, pH, dissolved oxygen, density, salinity, and viscosity. It examines the impacts of changes in mixed water quality on the Dead Sea ecosystem and on current industrial activities. The pilot project site consisted of six water ponds (tanks) located next to the Arab Potash Company point of intake, about 100 m south of the Dead Sea shore. The Red Sea-Dead Sea water mixing was controlled and carried out according to the mixing ratios expected between Red Sea and Dead Sea waters, to mimic the situation anticipated for the Red Sea-Dead Sea conduit project. All measured properties of the mixed water bodies in tanks 1 to 5 behaved differently from the corresponding properties of Dead Sea water alone (tank 6). The variation in properties depends on the degree to which the Dead Sea water is diluted by Red Sea water or rejected brine. The least altered physical properties were observed when Red Sea concentrated brine was added to the Dead Sea water (tank 5). The results show that transferring Red Sea water to the Dead Sea would dilute the Dead Sea brine and significantly affect the investigated physical properties of the mixed water. The water mixing project is expected to halt the halite precipitation phenomenon, owing to the development of stratification in the Dead Sea and to halite dissolution. Given the industrial need for Dead Sea brine with its current physical properties, it is recommended to add only rejected brine to the Dead Sea, because of its minimal effect on the physical properties.
Introduction
The Dead Sea is a closed lake whose riparian countries are Jordan, Palestine, and Israel [1,2,3]. Its water level was reported at 431 m below sea level (bsl) in August 2018, whereas it was 416 m bsl in 2003 (Oren et al. [4]; Khlaifat et al. [1,2,5,6]). This decline in water level is consistent with the hypsometric curve of Neev and Emery [7], according to which the Dead Sea covered an area of 950 km² at 397 m bsl. The Dead Sea area had shrunk to about 780 km² after 1976, and its volume decreased from 150 km³ to 135 km³.
The Dead Sea is the largest hypersaline water body on the planet, with a salinity of 340 g/L (Khlaifat et al. [2,5]; Herut [8]). Magnesium (Mg, 46 g/L) is the dominant cation in Dead Sea water, which is why it is sometimes called a magnesia sea, followed by sodium (Na, 36.5 g/L), calcium (Ca, 17 g/L), and potassium (K, 7.8 g/L); chloride (Cl, 225 g/L) is the main anion, followed by bromide (Br, 5.6 g/L). Carbonate (CO₃²⁻) and sulfate (SO₄²⁻) are minor components in Dead Sea water (Gavrieli [9]). Neev and Emery [7] studied the composition of the carbonate system in the Dead Sea. They observed two mineral precipitation events during the summer, which they called "brine whitening", resulting in a gypsum (CaSO₄·2H₂O) to aragonite (CaCO₃) weight ratio greater than 3.14. The major minerals in the Dead Sea were aragonite (CaCO₃), anhydrite (CaSO₄), and halite (NaCl), and their concentrations were found to range from saturated to oversaturated (Gavrieli [9]; Katz et al. [10]; Krumgalz and Millero [11]). The major contributors to the water level decline are the diversion of the Dead Sea's main tributary, the Jordan River, and the activities of the chemical industries on the eastern and western shores of the Dead Sea. The water level has declined at an annual rate of more than one meter, which has shrunk the total area by more than 35% in the last 30 years (Khlaifat et al. [2,6]; Frumkin and Elitzer [12]; Khlaifat [1]; Asmar and Ergenzinger [13,14]; Gertman and Hecht [15]). Two hazardous phenomena result from the Dead Sea level decline. The first is the loss of groundwater due to changes in the groundwater gradient and the intrusion of freshwater, which dissolves the salt layer and creates cavities (Salameh and El-Naser [16,17,18]); the second is sinkhole formation, which damages agricultural land and development along the shores of the Dead Sea (Salvati and Sasowsky [19]; Abelson et al. [20]). It was found that the groundwater discharge rate to the Dead Sea increases with the receding base level, which overall results in an increase in the hydraulic gradient and seaward migration of the brine/freshwater interface (Kiro et al. [21]; Salameh and El-Naser [16,17,18]). Historically, the flow in the Jordan River has decreased from 1.5 billion cubic meters annually in the 1960s to less than 100 million cubic meters in recent years. The Jordan River catchment area is shared among Jordan, Syria, Lebanon, Israel, and Palestine. Dams, canals, and pumping stations were constructed in the catchment countries to divert water for irrigation and drinking purposes. The water quality in the Jordan River has also deteriorated, owing to the remaining brackish water flow and the discharge of sewage.
The proposed Red Sea-Dead Sea Conduit would provide an environmentally sound solution to save the Dead Sea and its ecosystem by eliminating the impact of the level decline and providing a continuous inflow of Red Sea water. The project would provide drinking water and electricity to Jordan and the riparian countries of the Dead Sea, and would sustain and raise the Dead Sea water level. During the filling period of the project, while the Dead Sea level is being raised, the salinity of the upper water layer will decrease, and the composition of this layer is also expected to change. When the filling process is over (steady-state regime), evaporation will be compensated by the Red Sea water discharge, and the salinity of the upper water layer will start to increase.
The proposed conduit is about 180 km long and runs as a canal from the Red Sea to the Dead Sea entirely within Jordanian territory (Sarah and Fine [22]). The inflow of Red Sea water or of concentrated desalination reject brine into the Dead Sea will have major impacts on its dynamics, its physical and chemical properties, and the whole biological ecosystem. Stratification of the water body would result from the large surface inflow (Beyth et al. [23]).
Dead Sea - Red Sea water mixing, experimental site and sampling
Mixing Dead Sea water, which is high in calcium (Ca ≥ 18 g/L), with Red Sea water, which has a high concentration of sulfate (SO₄²⁻ ≥ 3 g/L), would promote the natural formation of gypsum (CaSO₄·2H₂O) precipitate (Katz et al. [24]; Reznik et al. [25,26,27]). Moreover, halite (NaCl) began precipitating in 1983 as a consequence of the increase in Dead Sea water salinity, and it has sustained a constant precipitation rate since then (Steinhorn [28]; Gavrieli et al. [29]; Gavrieli [9]). The salinity of Dead Sea water is a function of evaporation and atmospheric relative humidity. Dead Sea surface water salinity was found to have increased from 225 g/kg to 279 g/kg over the period from the late 1950s to the 1980s (Levy [30]). The Dead Sea salinity reported in the current study reached slightly above 325 g/kg. Recent studies showed that the Dead Sea evaporation rate varies from 1.05 to 2.0 m/year at the current salinity (Stanhill [31]; Alpert et al. [32]; Salameh and El-Naser [16]; Lensky et al. [33]). These values, which are based on water and heat balance calculations, contrast with other studies that estimated the evaporation rate at the Dead Sea at 1.30 to 1.54 m (Al-Weshah [34]).
Dead Sea hydrography was characterized by two periods, meromictic and holomictic, in a long-term monitoring study extending over nine years, from 1992 to 2000 (Gertman and Hecht [15]). The meromictic period lasted from 1992 to 1995, and the holomictic period from 1996 to 2000. The study described the changes in both temperature and salinity of the Dead Sea water body throughout the nine years. Other investigations have described specific characteristics of the Dead Sea over shorter periods (Frumkin and Elitzur [12]; Oren et al. [4]; Sinder et al. [35]).
Moreover, because Dead Sea brine is the raw material for the existing chemical industries, its physical and chemical properties were assessed over a 22-year period, from 1987 to 2008 (Khlaifat et al. [36]). The proposed water conduit project is expected to have a direct impact on Dead Sea water salinity and other water properties. The location of the point at which Red Sea water and/or desalination plant reject would discharge into the Dead Sea would have a vital effect on surface water salinity (Abu Qdais [37]). Consequently, the chemical industries located on both sides of the Dead Sea would be adversely impacted. After an intensive review of the literature on the properties of Dead Sea brine, the researchers concluded that there is still a need for more basic data over a longer study period. Therefore, in the present work the physical properties of the water mixing scenarios were monitored for one year.
Experimental site
This pilot study used well-controlled mixed water ponds to investigate the interaction between Red Sea and Dead Sea waters. Experiments were conducted at Dead Sea ground level. The experimental site was located next to the Arab Potash Company point of intake, about 100 m south of the Dead Sea shore. The site consists of six cylindrical tanks made of high-density polyethylene (HDPE) (diameter = 3 m, height = 3.5 m), buried more than 3 m in the ground and open at the top, as shown in Figure 1.
Four tanks (1-4) were filled with approximately 25 m³ of a mixture of Dead Sea and Red Sea waters. The Dead Sea water came from the pipeline that feeds the evaporation ponds of the Arab Potash Company, while the Red Sea water was transported by tank trucks from the Gulf of Aqaba.
Before the two water bodies were mixed, large particles in the Red Sea water were removed by filtration. Part of the filtered Red Sea water was concentrated by evaporation, in a water pool, to about twice its original salinity and mixed with the Dead Sea water in tank 5, with the volume content shown in Table 1. Tank 6 was filled only with Dead Sea water and used for benchmarking (Khlaifat et al. [2,5,6]). During the monitoring period, a water level controller was used to maintain the mixed water level in each tank at a fixed height. Red Sea water was added to the top of tanks 1-4, concentrated Red Sea water was added to the top of tank 5, and the surface level of tank 6 was kept constant by adding Dead Sea water. This water addition compensated for losses by evaporation. It was controlled by a level control system consisting of a float located at the mixed water-air interface, which allowed more or less water to flow into each of the experimental tanks from the feed tanks. Each large tank is connected to a two-cubic-meter feed tank.
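As a back-of-the-envelope illustration of what different tank compositions imply for salinity, the sketch below applies a simple conservative mixing rule. The volume fractions are hypothetical placeholders (the actual per-tank ratios are those of Table 1), the end-member salinities are round figures from the text (~340 g/L for Dead Sea water) and a typical open-ocean value for the Red Sea (~40 g/L, an assumption), and the volume-weighted average ignores density differences and any precipitation, so it is only a first-order estimate:

```python
# First-order conservative mixing estimate of tank salinity.
# Assumptions (hypothetical): volume-additive mixing, no precipitation
# or evaporation; a rigorous balance would weight the end members by
# density rather than by volume alone.

S_DEAD_SEA = 340.0  # g/L, from the text
S_RED_SEA = 40.0    # g/L, typical open Red Sea value (assumption)

def mixed_salinity(frac_dead_sea: float) -> float:
    """Salinity (g/L) of a mixture for a given Dead Sea volume fraction."""
    return frac_dead_sea * S_DEAD_SEA + (1.0 - frac_dead_sea) * S_RED_SEA

for frac in (1.0, 0.9, 0.75, 0.5):  # hypothetical tank compositions
    print(f"{frac:>4.0%} Dead Sea water -> ~{mixed_salinity(frac):.0f} g/L")
```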
No external force is applied to mix the waters in the tanks. The two waters are mixed by natural mass transfer between lighter and denser sea waters in a system of fixed volume that is affected by seasonal variations. Diffusion, natural convective mass transport, and the motion of the interface (between each tank water surface level and atmosphere) upon evaporation and dissolution were the main modes of mixing.
Sampling
Water samples were collected from the different water tanks (five tanks of mixed waters and one tank of Dead Sea water only). During the winter, rain samples were collected as well. The collection time interval was chosen to be adequate for monitoring changes in salt concentration, temperature, and rainfall. Since the experimental work lasted one year, a two-week sampling interval was sufficient to capture the various phenomena caused by mixing. Samples for analysis were collected from each tank at different depths: top (up to 0.5 m below the surface), middle (1.5 m), and bottom (2.5 m), using a closing-bottle sampler.
Collected samples were kept away from light, including during transportation to the laboratory, and were stored in the dark at 4 °C. Water samples collected in clean polyethylene bottles (1.725 L) were divided into eight equal portions of 200 mL, which were used for the analysis of anions, cations, physical parameters, chemical properties, and heavy metals.
Measurements and analyses
All analyses were conducted immediately after sample collection, in the laboratories of the Prince Faisal Center for Dead Sea, Environmental and Energy Research (at Mutah University) and the Arab Potash Company laboratories. The collected samples were analyzed for various physico-chemical parameters, salt types, and microbial effects. This paper discusses the results of the physical property measurements only, namely temperature, pH, dissolved oxygen (DO), density, salinity, and viscosity. Interrelationships between these properties were investigated as well.
Since the Arab Potash Company's intake pumps are located about 20 m below Dead Sea level, it is important to know how the physical properties of the mixture changed at the bottom of each of the six tanks throughout the monitoring year.
Mixed water temperature is a key measurement, obtained in situ by sensors located at different depths and verified using a conventional thermometer.
The acidity (pH) of the mixed water was measured in situ using a pH meter right after the water samples were collected. The density of the collected samples was measured in situ using a hydrometer.
Salinity was measured both in situ and in the lab. A hydrometer was used to measure the specific gravity of the collected mixed water samples, which was then converted to salinity. The laboratory measurement of salinity followed the standard method for the examination of water and wastewater, based on the chloride content of the mixed water samples (APHA [38]). The salinity value was double-checked for a few samples gravimetrically, by weighing the total dissolved solids in a given volume of mixed water.
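For concreteness, the sketch below shows the arithmetic behind the two laboratory determinations just described. All input numbers are hypothetical, and the chlorinity-to-salinity factor (1.80655) is the classical Knudsen relation for ordinary seawater, which can only be a rough guide for Dead Sea brine, whose ionic composition is very different:

```python
# Hypothetical worked examples of the two lab salinity determinations.

def tds_gravimetric(residue_mass_g: float, sample_volume_l: float) -> float:
    """Total dissolved solids (g/L) from the mass of the dried residue."""
    return residue_mass_g / sample_volume_l

def salinity_from_chlorinity(chlorinity_g_per_kg: float) -> float:
    """Knudsen approximation for seawater: S (g/kg) = 1.80655 * Cl (g/kg).
    Only indicative for Dead Sea brine, whose ion ratios differ strongly."""
    return 1.80655 * chlorinity_g_per_kg

print(tds_gravimetric(residue_mass_g=34.0, sample_volume_l=0.1))  # 340.0 g/L
print(salinity_from_chlorinity(chlorinity_g_per_kg=19.35))        # ~35 g/kg (seawater)
```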
Dissolved oxygen was measured in situ using a DO meter. The viscosity of the collected mixed water samples was measured in the lab with a master viscometer.
Results and discussion
3.1. Temperature

Figure 2 shows the temperature variation at the bottom of each of the six tanks throughout the year.
Temperature contributes to all of the differences in the physical and chemical properties of the mixed water. The temperature profiles show variations in all tanks (Figure 2). The temperature changes depend on climatic conditions: the highest temperatures were above 46 °C during the summer season, while during the winter season the temperature dropped to 17 °C. After six months of monitoring, the temperature of the rejected-brine tank (tank 5) behaved similarly to that of the Dead Sea water tank (tank 6), while all other tanks showed temperatures higher than either of them. The temperature profiles illustrate the relationship between water temperature and density. The lower temperatures in tanks 5 and 6 could be attributed to a lower degree of stratification compared with the other tanks (1-4). It will be shown later that the continuous rise in temperature is accompanied by increases in both density and salinity.
Acidity (pH)
The acidity of water, expressed by its pH value, plays a vital role in aquatic life. Most marine organisms live in a pH range of 6.5-9.0, though some can live in ocean water with pH levels outside this range. Therefore, the pH value was monitored in the mixing water tanks during the study period. Additionally, water acidity controls trace metal solubility and mixed water toxicity, and the rates and products of the chemical reactions occurring in some of the tanks depend on the acidity of the mixed water. Figure 3 shows that the pH of the mixed water in all tanks is below 6.1, which means that the environment is not ideal for aquatic creatures but may suit some kinds of bacteria. Mixed water pH values range from 5.6 to 6.1 in all tanks except tank 6, which contains Dead Sea water only. In tank 6 the pH reached a value of 4.8 after one year, which restricts biological activity in this tank. A pH value of 5.9 was reported for the Dead Sea in 1977 (Ben-Yaakov and Sass [39]). The decline in the pH of tank 6 might be attributed to precipitation processes and the formation of halite.
It was observed that the pH decreased as the water temperature increased during the last six months of the experiments (Figure 2). This variation does not necessarily mean that water becomes more acidic at higher temperatures; rather, the mixed water solution became more acidic owing to an excess of hydrogen ions over hydroxide ions.
Dissolved oxygen (DO)
Oxygen can enter the tanks from two different sources: 1) the main mechanism is atmospheric diffusion, whereby oxygen in the air is absorbed by surface water due to a difference in oxygen concentrations; and 2) the continuous input of Red Sea water (tanks 1 to 4) and rejected brine (tank 5). Both the dissolved oxygen and the temperature of the mixed water bodies are affected by seasonal weather variations and by the physical properties of the mixed water. The DO level is directly related to how much aquatic life the mixed water tanks can support.

[Table 1: Dead Sea (DS) and Red Sea (RS) water content (vol. %) in different tanks.]
It is clear from Figure 4 that the concentration of dissolved oxygen throughout the year is very low (less than 0.004 g/L), which makes it impossible to sustain aquatic life at the bottom of any of the tanks. Chemical and biological oxygen demands are possible reasons for the low DO level. The mixed water in all tanks except tank 6 was found to be stratified, which means that the hypolimnion receives little oxygen from atmospheric diffusion. Moreover, the continuous feed streams from the feeding tanks have only a minimal impact on the oxygen content of the larger mixing tanks. Thus, the mixed water at the bottom of the tanks receives very little dissolved oxygen during summer thermal stratification and slightly more during the winter (see Figure 4). DO levels of 0.0014 g/L were reported for the Dead Sea water column during the period 1987-1989 (Shatkay, 1991 [40]; Shatkay et al., 1993 [41]), values comparable to the findings of this study.
Mixture density
Temperature variations and dissolved substances contribute to minor density differences among the tanks. The density of the Dead Sea water in tank 6 increased following a trend similar to that of its salinity. This indicates that the most important changes in the physical properties of the mixed water in all tanks are the changes in the relationship among water density (Figure 5), salinity (Figure 6), and temperature (Figure 2). Laboratory tests and analyses performed included temperature, density, and salinity (APHA test procedure [38]).
Salinity (TDS)
Salinity is a measure of the amount of salts in the mixed water tanks. Red Sea water was treated as fresh water, for which the term "total dissolved solids" (TDS) was used instead of "salinity".
Dead Sea water owes its high salinity to a combination of dissolved ions of different salts, such as sodium chloride (NaCl), potassium chloride (KCl), magnesium chloride (MgCl₂), and calcium chloride (CaCl₂). The high concentration of dissolved ions increases both the salinity and the conductivity of the Dead Sea water (tank 6). The variation of salinity with time (Figure 6) follows trends in all tanks similar to those observed for density (Figure 5), because both properties depend strongly on the salt content. The rise in salinity, like that in density, is driven by an increased rate of evaporation and by the precipitation of sodium chloride from the saturated brine [9].
Viscosity
It is clear from Figure 7 that the viscosity in all tanks except tank 6 decreased, whereas the viscosity of the mixed water in tank 6 increased significantly. High viscosity values lower the diffusion rate.
It is a well-known fact that both the viscosity and the surface tension of a viscous fluid increase as the temperature decreases. This trend is violated here for the water in tank 6, which is attributed to the increase in both salinity and density.
Both the viscosity (Figure 7) and the density (Figure 5) of the mixed water samples collected from the different tanks are functions of salinity (Figure 6), temperature (Figure 2), and pressure. However, for the depth range of our experiments, the dependence on pressure is negligible. Comparison of salinity (Figure 6) with viscosity (Figure 7) shows a direct proportionality.
Conclusion
This study presents the first experimental effort to establish a database of the physical properties of mixed Dead Sea and Red Sea waters; it was conducted on the southern shore of the Dead Sea over 12 months to monitor all changes induced by seasonal variations. All the monitored physical properties (temperature, acidity (pH), dissolved oxygen, density, salinity, and viscosity) varied to different degrees on timescales of months, as shown in the results and discussion.
None of the measured properties of the mixed waters in tanks 1 to 5 behaved exactly like those of Dead Sea water (tank 6). The results showed that diluting the Dead Sea water with Red Sea water or rejected brine significantly affects all of its physical properties. The physical properties measured when the rejected brine was added to the Dead Sea brine (tank 5) were the closest to the Dead Sea brine properties (tank 6). Given the industrial need for Dead Sea brine with its current physical properties, it is not recommended to add Red Sea water directly to the Dead Sea; the effect of adding rejected brine is minimal. The obtained experimental results showed that natural convection influenced the mixing drastically. The observed changes in the studied physical properties confirm that the convective flux is much higher than the diffusive flux in the top layers (Red Sea water) of the tanks, and the opposite is true for the lower layers (Dead Sea brine). It is not clear which flux (convective or diffusive) dominates across the interface; this can be investigated further by a study of the chemical properties and by numerical modeling and simulation, which are beyond the scope of this paper.
The obtained experimental results showed that natural convection influenced the mixing effect drastically. The observed changes in the studied physical properties confirm that convective flux is much higher than diffusive flux in the top layers (Red Sea water) of the tanks and the opposite is true for the lower layers (Dead Sea brine). It is not clear which flux (convective or diffusive) is dominating across the interface, this can be investigated further by chemical properties study and by numerical modeling and simulation which are beyond the scope of this paper. | 2020-11-12T09:03:14.513Z | 2020-11-01T00:00:00.000 | {
"year": 2020,
"sha1": "dba15eac303b6aa5da6439a6cc66fb98332f53ad",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S2405844020322878/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b6927f7b7885f5efcd7feb15963b74e5bf09924b",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
SHetA2 interference with mortalin binding to p66shc and p53 identified using drug-conjugated magnetic microspheres
Summary SHetA2 is a small molecule flexible heteroarotinoid (Flex-Het) with promising cancer prevention and therapeutic activity. Extensive preclinical testing documented lack of SHetA2 toxicity at doses 25 to 150 fold above effective doses. Knowledge of the SHetA2 molecular target(s) that mediate(s) the mechanism of SHetA2 action is critical to appropriate design of clinical trials and improved analogs. The aim of this study was to develop a method to identify SHetA2 binding proteins in cancer cells. A known metabolite of SHetA2 that has a hydroxyl group available for attachment was synthesized and conjugated to a linker for attachment to a magnetic microsphere. SHetA2-conjugated magnetic microspheres and unconjugated magnetic microspheres were separately incubated with aliquots of a whole cell protein extract from the A2780 human ovarian cancer cell line. After washing away non-specifically bound proteins with the protein extraction buffer, SHetA2-binding proteins were eluted with an excess of free SHetA2. In two independent experiments, an SDS gel band of about 72 kDa was present at differential levels in wells of eluent from SHetA2-microspheres in comparison to wells of eluent from unconjugated microspheres. Mass spectrometry analysis of the bands (QStar) and straight eluents (Orbitrap) identified mortalin (HSPA9) to be present in the eluent from SHetA2-microspheres and not in eluent from unconjugated microspheres. Co-immunoprecipitation experiments demonstrated that SHetA2 interfered with mortalin binding to p53 and p66 Src homologous-collagen homologue (p66shc) inside cancer cells. Mortalin and SHetA2 conflictingly regulate the same molecules involved in mitochondria-mediated intrinsic apoptosis. The results validate the power of this protocol for revealing drug targets. Electronic supplementary material The online version of this article (doi:10.1007/s10637-013-0041-x) contains supplementary material, which is available to authorized users.
Introduction
Advancement of investigational new drugs to clinical trials requires detailed knowledge of the pharmacokinetics, metabolism and toxicity to document that the compound has sufficient pharmaceutical qualities for clinical application. Knowledge of the mechanism can be used to select the appropriate patient population and to design improved analogs. Pre-clinical testing for our lead compound, a flexible heteroarotinoid (Flex-Het) called Sulfur Het A2 (SHetA2, NSC721689), demonstrated reasonable pharmacokinetics and lack of mutagenicity, carcinogenicity, teratogenicity and toxicity [1][2][3][4][5]. SHetA2 has a wide therapeutic window as indicated by the No Observed Adverse Effect Level (NOAEL) of >1,500 mg/kg/day identified in the 28 day dog toxicity model in comparison to the ability of 10 to 60 mg/kg/day to inhibit xenograft tumor growth [5][6][7][8]. This lack of in vivo toxicity, along with oral bioavailability and documented inhibition of colorectal tumorigenesis in the APC min/+ mouse model at 30 mg/kg given 5 days per week, makes SHetA2 an ideal candidate for a cancer prevention drug [9]. SHetA2 sensitization of resistant cells to death receptor activating ligands offers promise for advancement of SHetA2 toward clinical trials as combination therapy with death receptor activating antibodies that are currently in clinical trials [10,11]. Mechanistic studies identified that SHetA2 induces G1 cell cycle arrest through reduction of cyclin D1; induces intrinsic apoptosis through direct effects on mitochondria associated with reduction of Bcl-2; enhances death receptor activation of the extrinsic apoptosis pathway through repression of nuclear factor κB (NF-κB) and upregulation of the CAAT/enhancer binding protein homologous protein (CHOP) transcription factor; and induces autophagy and endoplasmic reticulum stress; however, the direct mediators of these events remain to be elucidated [7,[12][13][14]. The mechanism of SHetA2 is independent of the nuclear retinoid receptors and retinoid toxicities, despite the emergence of this compound from a series of structure activity relationship (SAR) studies of retinoic acid receptor-active Hets [3,6,15,16]. SHetA2 was derived from a Het backbone that was shown to have 1000-fold decreased in vivo toxicity in comparison to the parent arotinoid structure while retaining the ability to induce retinoid biological effects [6,17]. The increased flexibility of the Flex-Hets was conferred by substituting the more-rigid two-atom linker with a more-flexible, three-atom urea or thiourea linker [16]. The Flex-Hets differ from the more conformationally-restricted heteroarotinoids in that they act independently of the retinoic acid receptors and are potent inducers of apoptosis in cancer cells while not harming normal cells [3,7,10,13,16,18]. The potent and differential apoptosis-inducing activity and lack of retinoid and other toxicities of SHetA2 offer a dramatic improvement in the therapeutic ratio over receptor-active Hets. The purpose of this project was to identify SHetA2-binding proteins that may be responsible for mediating the mechanism by which SHetA2 kills ovarian cancer cells.
A variety of approaches have been used for target identification, including direct biochemical methods, genetic interactions, and computational inference. Mass spectrometry analysis of drug-binding proteins isolated by affinity chromatography is currently the most powerful and frequently-used technology; however, a generic workflow for this procedure has not been established due to the wide variation in the types of drugs and the affinity and expression levels of their targets [19]. The primary limitation of affinity chromatography is the need to derivatize the drug in order to attach it to a scaffold, because the derivative carries a high risk of losing the bioactivity of the parent compound [20]. Linkers between the drug and scaffold may be needed to avoid interference of the scaffold with protein binding to the attached drug. In this study, a known metabolite of SHetA2 was synthesized and attached to a linker designed to allow conjugation to a magnetic microsphere while maintaining a physical separation between the drug and the microsphere. Identification of an SHetA2-binding protein using this affinity chromatography approach was validated by identification of the same protein in two types of mass spectrometry analyses and by demonstration that treatment of cells with SHetA2 interferes with target protein binding to client proteins.
Chemistry
In order to attach compound SHetA2 (1) to a solid support while retaining the majority of the structure intact for protein binding, the known metabolite 2, which has a hydroxyl group available for attachment, was synthesized as described [21] (structures of 1 and 2 are shown in Fig. 1). The SHetA2 metabolite 2 was then further modified as shown in Fig. 1. Amino-dPEG4 acid was purchased from Quanta Biodesign Limited, Powell, OH 43065, and used as received. Tetrahydrofuran was dried over potassium hydroxide pellets and distilled from lithium aluminium hydride prior to use. Dichloromethane was used from a freshly opened bottle. All reactions were run under dry nitrogen in oven-dried glassware. The reaction was monitored by thin layer chromatography (TLC) on silica gel GF plates (Analtech No 21521). Purification was performed using preparative thin layer chromatography (PTLC) on a 20-cm×20-cm silica gel GF plate (Analtech No 02015). Band elution for both methods was monitored using a hand-held UV lamp. The infrared (IR) spectrum was run as a thin film on sodium chloride disks. 1H and 13C nuclear magnetic resonance (NMR) spectra were measured in deuteriochloroform at 300 MHz and 75 MHz, respectively, and were referenced to internal tetramethylsilane; coupling constants (J) were reported in Hz. The ultraviolet-visible (UV-vis) spectrum was collected for the sample using a Varian Cary 5000 spectrophotometer. The mass spectrum was collected for the sample using a Shimadzu LC-MS instrument. A 25-mL, three-necked, round-bottomed flask equipped with a reflux condenser and a magnetic stirrer was charged with 100 mg (0.24 mmol) of metabolite 2 and 5 mL of tetrahydrofuran. The solution was cooled to −40°C, and 29 mg (0.24 mmol) of 4-(dimethylamino)pyridine was added. The mixture was stirred for 5 min to dissolve the solid. To the resulting yellow solution was added dropwise a solution of 48 mg (0.239 mmol) of 4-nitrophenyl chloroformate in 1 mL of tetrahydrofuran, and the reaction mixture was allowed to stir for 30 min. At this time, TLC eluted with ether:hexane (4:1) confirmed that the reaction was complete (and had formed 3).
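As a quick arithmetic cross-check of the equimolar amounts quoted above, the following sketch is added for illustration; the catalog molecular weights of DMAP, 4-nitrophenyl chloroformate and amino-dPEG4 acid are standard values, while the molecular weight of metabolite 2 is back-calculated from the reported 100 mg = 0.24 mmol rather than taken from the study:

```python
# Illustrative stoichiometry check for the coupling step above (not part of
# the original protocol). Catalog MWs are used for DMAP, 4-nitrophenyl
# chloroformate and amino-dPEG4 acid; the MW of metabolite 2 is
# back-calculated from the reported 100 mg = 0.24 mmol, i.e. an assumption.

MW = {
    "metabolite 2": 100.0 / 0.24,              # ~417 g/mol, inferred
    "DMAP": 122.17,                            # 4-(dimethylamino)pyridine
    "4-nitrophenyl chloroformate": 201.56,
    "amino-dPEG4 acid": 265.30,
}

amounts_mg = {
    "metabolite 2": 100.0,
    "DMAP": 29.0,
    "4-nitrophenyl chloroformate": 48.0,
    "amino-dPEG4 acid": 63.0,
}

for name, mg in amounts_mg.items():
    print(f"{name}: {mg} mg -> {mg / MW[name]:.3f} mmol")
# All four reagents come out near 0.24 mmol, confirming 1:1:1:1 stoichiometry.
```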
A solution of exactly 63 mg (0.24 mmol) of amino-dPEG4 acid in 3 mL of dichloromethane was carefully added to the above reaction mixture containing 3 at −40°C over a period of 10 min. The reaction mixture was then slowly allowed to warm to room temperature over a period of 1 h. The reaction mixture then was stirred for 20 min and concentrated under vacuum. Purification was achieved by PTLC eluted with 98:2 chloroform:methanol. After one elution, the product was washed from the silica gel using ethyl acetate followed by chloroform:methanol. The 13C NMR experiment could not be carried out due to the instability of the product, which decomposed rapidly at 20°C. Thus, the polyether conjugate was attached to the microspheres without further purification.
Only a small quantity of conjugate 4 was obtained, and it was transferred on dry ice to SoluLink (San Diego, CA). Compound 4 was converted to compound 5 by a proprietary method of SoluLink, which provided a mass spectrum of the compound as MW −Na+ 984, with the primary m/z occurring at 884.20. Compound 5 was reacted with the magnetic microspheres through an amide linkage. This led to conjugate 6 attached to the NanoLink Amino-Magnetic Microspheres, which were utilized to identify SHetA2-binding proteins.
Isolation of SHetA2 binding proteins
Protein extracts of the A2780 human ovarian cancer cell line grown to 90 % confluency in 10 cm plates were isolated using m-PER-PPI (m-PER solution [Thermo Scientific] containing a protease inhibitor cocktail [Active Motif] and a phosphatase inhibitor cocktail [Active Motif]) and stored at −80°C until use. The protein concentration of the extracts was measured with the BCA Protein Assay (Thermo Scientific). SHetA2-conjugated NanoLink Amino-Magnetic Microspheres 6 were pelleted with a magnet for 2 min, and the supernatant was removed. The pellet was washed once in 100 μL PBS and re-suspended in 100 μL of m-PER-PPI. Unconjugated NanoLink Amino-Magnetic Microspheres were manipulated under the same conditions in parallel as a negative control. The general experimental procedure is as follows, with specific modifications detailed in the subsequent paragraphs and Table 1. Equal volumes of the SHetA2- and unconjugated-microsphere suspensions were pelleted and re-suspended in A2780 protein extract and incubated. The microspheres were pelleted with a magnet and washed one to three times with m-PER-PPI at a volume equal to the incubation volume. Bound proteins were then eluted with excess free SHetA2 in m-PER-PPI, followed by removal of the microspheres with a magnet. Aliquots of the washes and eluents of the SHetA2- and unconjugated microspheres were electrophoresed into 2-D SDS-PAGE gels. Four repeats of the experiment were performed in efforts to optimize differential bands observed in the SDS gel wells corresponding to SHetA2-microspheres in comparison to unconjugated microspheres. In experiment 1, 100 μL of A2780 protein extract (6.8 mg total) was mixed with a volume of 100 μL of SHetA2 microspheres (containing approximately 5 μg, or 12.45 nmol, SHetA2) from the starting solution and incubated at room temperature in the dark with agitation for 30 min. After 3 washes, bound proteins were eluted by incubating the microspheres in 10 μM SHetA2 for 10 min with agitation in the dark. Differential bands in the lanes corresponding to SHetA2 eluents from the SHetA2- and unconjugated microspheres could not be visualized upon staining the SDS-PAGE gel with Coomassie Blue. Experiment 2 was designed to reduce the non-specific binding in SDS-gel bands corresponding to unconjugated microspheres by decreasing the amount of protein added, and to increase the specific binding in SDS-gel bands corresponding to SHetA2-microspheres by increasing the incubation temperature. Because a limited amount of microspheres was available, both parameters were altered simultaneously. In experiment 2, 50 μL of A2780 protein extract (1.8 mg) was incubated with the SHetA2- or unconjugated magnetic microspheres at 37°C for 30 min in the dark without agitation, followed by 5 min of slow agitation at room temperature in the dark. After one wash, bound proteins were eluted by incubating the microspheres with 1 mM SHetA2 in 20 μL of m-PER-PPI for 30 min at 37°C in the dark. Again, no differential bands were observed between the lanes corresponding to the SHetA2- and unconjugated microsphere eluents when the gel was stained with Coomassie blue; however, a differential band was observed upon staining of a repeat gel with the Plus One DNA Silver Staining Kit (Amersham Biosciences) (Fig. 2a).
Experiment 3 was designed to increase the yield of this specific band to levels that could be evaluated by mass spectrometry. Based on the presumption that the SHetA2 attached to the microspheres was the limiting factor, the ratio of protein to microspheres was altered to 1:100 by using a volume of 50 μL of A2780 protein extract containing 2.6 mg of protein and incubating with 500 μL of unconjugated microspheres or SHetA2-microspheres (25 μg, or 62.25 nmol, SHetA2). The extract was incubated and eluted as described for experiment 2; however, the lanes of the SDS-PAGE gel corresponding to the SHetA2- and unconjugated microspheres contained too many non-specifically bound proteins to reveal a differential band. Experiment 4 was designed to increase the yield by increasing the amount of protein added and to reduce the non-specific background binding by increasing the number of washes. Thus, 275 μL of SHetA2-microspheres (13.75 μg, or 34.24 nmol, SHetA2) or unconjugated microspheres were mixed with 200 μL of A2780 protein extract (approximately 10 mg), agitated for 5 min in the dark at room temperature and then incubated without agitation at 37°C for 25 min in the dark. Two washes with 300 μL of m-PER-PPI were performed, followed by an elution step with 30 μL of 1 mM SHetA2 in m-PER-PPI for 5 min at 37°C. A differential band was discerned between the lanes corresponding to the SHetA2- and unconjugated microspheres upon Coomassie blue staining of the SDS-PAGE gel; however, the non-specific binding was very high (Fig. 2b).
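The SHetA2 quantities quoted per experiment are internally consistent when read as molar amounts: pairing each mass with its numeric value (5 μg with 12.45, and so on) implies a molecular weight near 402 g/mol. The following sketch is illustrative arithmetic only, with the molecular weight inferred from the quoted pairs rather than taken from the study:

```python
# Sanity check on the SHetA2 quantities quoted for each experiment. Pairing
# the stated masses with the stated numbers (5 ug <-> 12.45, etc.) implies
# a molecular weight of ~402 g/mol, i.e. the figures are molar amounts
# (nmol) rather than concentrations. The MW below is inferred, not quoted.

MW_SHETA2 = 5.0 / 12.45 * 1000        # ug per nmol -> g/mol, ~401.6

for exp, ug in {"experiments 1/2": 5.0,
                "experiment 3": 25.0,
                "experiment 4": 13.75}.items():
    nmol = ug / MW_SHETA2 * 1000
    print(f"{exp}: {ug} ug SHetA2 -> {nmol:.2f} nmol")
# -> 12.45, 62.25 and 34.24 nmol, matching the values quoted in the text.
```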
Efforts to regenerate and re-use the SHetA2-microspheres were unsuccessful. The specific bands in experiments 2 and 4 were evaluated by QStar Mass spectrometry and frozen aliquots of the eluents were thawed and evaluated by Orbitrap Mass spectrometry.
Mass spectrometry analysis of bands excised from SDS-PAGE gels

Identification of proteins that bind to the SHetA2 affinity resin was determined by a standard proteomics analysis as briefly described below. The protein bands were excised and cut into small pieces and destained with 5 mM sodium thiosulfate/15 mM potassium ferricyanide in water. After rinsing with 25 mM ammonium bicarbonate in 50 % acetonitrile, the proteins in the gel were reduced in 55 mM tris[2-carboxyethyl]phosphine in 25 mM ammonium bicarbonate at 60°C for 10 min, followed by alkylation in 100 mM iodoacetamide/25 mM ammonium bicarbonate at room temperature for 60 min. After washing away the alkylation buffer, the gel pieces were washed in 50 % acetonitrile, followed by 100 % acetonitrile. In-gel tryptic digestion was conducted by adding 100 ng of trypsin (Sequencing Grade Modified Trypsin; Promega, Madison, WI) in 10 μL of 25 mM ammonium bicarbonate. The gel pieces were allowed to swell, and an additional 25 μL of 25 mM ammonium bicarbonate was added to submerge the gel pieces. The digestion reaction was incubated at 30°C for 16 hrs. Twenty-five microliters of 1 % trifluoroacetic acid was added to stop the digestion reaction, and the tryptic peptides were extracted into 50 % acetonitrile/0.5 % trifluoroacetic acid. The extract was evaporated by a SpeedVac concentrator (Thermo Electron Corporation, Waltham, MA). All water used was of ultra-pure grade. A Dionex UltiMate 3000 HPLC interfaced to an ABI Sciex QStar Elite Hybrid Quadrupole TOF mass spectrometer was used to analyze the tryptic digests by peptide mass fingerprinting. MASCOT (Matrix Science, Boston, MA) analysis of the MS/MS data against NCBInr 20100405 identified three candidates that show a Probability Based MOWSE Score higher than 41 (p < 0.05): gi|6470150 (Homo sapiens BiP protein; MOWSE Score, 1202), gi|5729877 (Homo sapiens heat shock cognate 71 kDa protein isoform 1; MOWSE Score, 1030), and gi|292059 (Homo sapiens MTHSP75; MOWSE Score, 423).
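For illustration, the significance filter applied to the MASCOT output can be sketched as follows; the sub-threshold entry is hypothetical, added only to show a rejected hit:

```python
# Minimal sketch of the MASCOT significance filter described above: hits
# are retained only if their probability-based MOWSE score exceeds the
# p < 0.05 threshold of 41 reported for this search. The sub-threshold
# entry is hypothetical, included only to show a rejected hit.

THRESHOLD = 41  # MOWSE score corresponding to p < 0.05 for this database

candidates = [
    ("gi|6470150 BiP protein (HSPA5)", 1202),
    ("gi|5729877 heat shock cognate 71 kDa isoform 1 (HSPA8)", 1030),
    ("gi|292059 MTHSP75 (mortalin/HSPA9)", 423),
    ("hypothetical sub-threshold hit", 38),
]

for name, score in candidates:
    if score > THRESHOLD:
        print(f"keep: {name} (MOWSE {score})")
```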
Shotgun-mass spectrometry analysis
Aliquots of eluents from the SHetA2-microspheres and unconjugated microspheres were submitted for mass spectrometry analysis without electrophoresing them into gels. MS-grade solvents were from Burdick and Jackson, or Baker. Sequencing grade trypsin was from Promega. Other solutions were the highest grade available from Sigma-Aldrich. Samples were dissolved in 8 M urea, 100 mM Tris-HCl (pH 8.5), 5 mM tris(2-carboxyethyl)phosphine, and reduced at room temperature for 20 min. After incubation, 1/20th volume of 200 mM iodoacetamide was added. The alkylation was allowed to proceed for 15 min in the dark at room temperature, after which the samples were diluted with four volumes of 100 mM Tris-HCl pH 8.5 and digested with 4 μg/mL trypsin overnight at 37°C. Digested samples were acidified with 1 % formic acid and purified by tip-based C18 chromatography (OMIX tips from Agilent). Samples were analyzed on a hybrid LTQ-Orbitrap XL mass spectrometer (Thermo Fisher Scientific) coupled to a New Objective PV-550 nanoelectrospray ion source and an Eksigent NanoLC-2D chromatography system.
Peptides were analyzed by trapping on a 2.5 cm precolumn, followed by analytical separation on a 15-20 cm, 75 μm ID fused silica column, both packed with Magic C18 AQ (Bruker). Columns were terminated with an integral fused silica emitter prepared in house. Peptides were eluted using a 5-40 % ACN/0.1 % formic acid gradient performed over 116 min at a flow rate of 250-300 nL/min. For each full-range Fourier transform mass spectrometry (FT-MS) scan (nominal resolution of 60,000), the six most intense ions were analyzed via data-dependent MS/MS in the linear ion trap, using dynamic exclusion for 150 % of the observed chromatographic peak width. MS/MS settings used a trigger threshold of 8,000 counts, monoisotopic precursor selection (MIPS), and rejection of parent ions that had unassigned charge states or were previously identified as contaminants. Centroided ion masses were extracted using the extract_msn.exe utility from Bioworks 3.3.1 and were used for database searching with MASCOT v2.2.04 (Matrix Science) and X! Tandem v2007.01.01.1 (www.thegpm.org).
Searches utilized a local database of human sequences, as well as sequences for 114 common adventitious laboratory contaminants. Trypsinolytic parent ions were searched with a parent ion tolerance of 10 ppm, and fragment ion masses were searched with a mass tolerance of 0.8 Da. Variable modifications included modification of cysteine by iodoacetamide or acrylamide, oxidation of methionine, N-terminal peptide cyclization via pyroglutamate or S-carbamoylmethylcysteine, and N-terminal protein modification by formylation or acetylation.
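The tolerance arithmetic behind these search settings is straightforward; the sketch below, with placeholder masses rather than values from this study, shows how a parent-ion ppm error and a fragment-ion Da error would be screened:

```python
# Tolerance arithmetic behind the search settings above: parent ions are
# matched within 10 ppm, fragment ions within 0.8 Da. The masses are
# placeholders, not values from this study.

def ppm_error(observed: float, theoretical: float) -> float:
    """Relative mass error in parts per million."""
    return (observed - theoretical) / theoretical * 1e6

PARENT_TOL_PPM = 10.0
FRAGMENT_TOL_DA = 0.8

obs, theo = 1234.5701, 1234.5650          # hypothetical parent m/z pair
err = ppm_error(obs, theo)
print(f"parent error = {err:.2f} ppm -> "
      f"{'accept' if abs(err) <= PARENT_TOL_PPM else 'reject'}")

frag_obs, frag_theo = 456.78, 456.31      # hypothetical fragment masses
d = abs(frag_obs - frag_theo)
print(f"fragment error = {d:.2f} Da -> "
      f"{'accept' if d <= FRAGMENT_TOL_DA else 'reject'}")
```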
Scaffold (Version 3, Proteome Software Inc., Portland, OR) was used to validate MS/MS based peptide and protein identifications. Peptide identifications were accepted if they could be established at greater than 95 % probability as specified by the Peptide Prophet algorithm [22]. Protein identifications were accepted if they could be established at greater than 99.0 % probability and contained at least 2 identified peptides. Protein probabilities were assigned by the Protein Prophet algorithm [23]. Proteins that contained similar peptides and could not be differentiated based on MS/MS analysis alone were grouped to satisfy the principles of parsimony. A t-test was performed to determine significant differences in proteins identified in the eluents from SHetA2-microspheres in comparison to the eluents from unconjugated microspheres. P values of less than 0.05 were considered to be statistically significant.
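A minimal sketch of these acceptance and significance rules, assuming hypothetical probabilities and spectral counts, is given below:

```python
# Sketch of the acceptance and significance rules above: peptides require
# >95% Peptide Prophet probability; proteins require >99% Protein Prophet
# probability and at least 2 accepted peptides; differential binding is
# then assessed with a two-sample t-test at p < 0.05. All numbers below
# are hypothetical placeholders.

from scipy.stats import ttest_ind

def accept_protein(protein_prob, peptide_probs):
    accepted_peptides = [p for p in peptide_probs if p > 0.95]
    return protein_prob > 0.99 and len(accepted_peptides) >= 2

sheta2_counts = [14, 16, 15]      # spectral counts, SHetA2-microsphere eluent
control_counts = [2, 3, 1]        # spectral counts, unconjugated eluent

if accept_protein(0.999, [0.99, 0.98, 0.97]):
    t, p = ttest_ind(sheta2_counts, control_counts)
    verdict = "differentially present" if p < 0.05 else "not significant"
    print(f"t = {t:.2f}, p = {p:.4f} -> {verdict}")
```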
Cell culture and co-immunoprecipitation experiments
The A2780 human ovarian cancer cell line (gift of Michael Birrer, Harvard Medical School, Boston, MA) was cultured in RPMI 1640 tissue culture medium supplemented with 10 % fetal bovine serum (FBS), antibiotic/antimycotic, 1 nM sodium pyruvate and 1 mM HEPES buffer. Whole cell protein extracts were prepared from cultures treated with SHetA2 or DMSO for various times using M-PER Mammalian Protein Extraction Reagent (Thermo Scientific) or Triton X-100 lysis buffer consisting of 1 % Triton X-100, 10 mM Tris pH 7.4, 5 mM EDTA pH 8.0, and 50 mM NaCl. Cells were incubated and agitated by pipette or vortex every 5 min for 30 min. After incubation, samples were spun down for 3 min at 10,000 g to remove debris. Cell lysates were further centrifuged for 5 min at 3,000 g to remove debris. A 1:100 ratio of anti-p66shc antibody (Santa Cruz, Cat # sc-967), anti-p53 antibody (Santa Cruz, Cat # sc-126) or anti-mortalin antibody (Cell Signaling, Cat # 3593) to lysate was added to each cleared lysate immediately and incubated at 4°C overnight. The next day, the samples were incubated with Protein G-PLUS Agarose microspheres (Santa Cruz) for 1 hr. Microspheres were then washed 3 times with 1 mL of 10 % NP-40/Tris pH 8.0 buffer and centrifuged at 3,000 ×g for 4 min. Twenty microliters of SDS loading buffer was then added to each sample, and the solution was boiled for 5 min before being electrophoresed into a 12-15 % sodium dodecyl sulfate-polyacrylamide (SDS-PAGE) gel, transferred to a polyvinylidene difluoride (PVDF) membrane, blocked in 5 % milk for 1 hr at room temperature, and then immunoblotted with the desired primary antibody (anti-mortalin antibody [Cell Signaling, Cat # 3593], anti-p53 antibody [Santa Cruz, Cat # sc-126] or anti-p66shc antibody [Santa Cruz, Cat # sc-967]) overnight at 4°C. The membranes were then washed 3 times with 10 mL of PBS-Tween 20 (0.1 %), incubated with the appropriate HRP-conjugated secondary antibody for 30 min at room temperature and washed 3 times with 10 mL of PBS-Tween 20 (0.1 %). Finally, specifically bound antibody was detected using Western Blotting Luminol Reagent (Santa Cruz Biotechnology) and exposure to X-ray negative film.
Identification of mortalin as an SHetA2-binding protein
To identify SHetA2 binding proteins, whole cell protein extracts isolated from the human A2780 ovarian cancer cell line were incubated with NanoLink Amino-Magnetic Microspheres conjugated to SHetA2. Unconjugated NanoLink Amino-Magnetic Microspheres were used as a control for non-specific binding. Solutions obtained through washing the microspheres and eluting with excess SHetA2 were evaluated on SDS-PAGE gels. Conditions were modified in 4 independent experiments to optimize the specificity and yield of proteins that bound to the SHetA2 microspheres (Table 1). In the second experiment, a band of approximately 75 kDa was observed in the lane corresponding to the SHetA2 microsphere eluent and not in the lane corresponding to the unconjugated microsphere eluent in a silver-stained SDS-PAGE gel (Fig. 2a). The amount of protein present in the gel, however, was insufficient for detection by the less-sensitive Coomassie blue stain or by QStar mass spectrometry. By increasing the microsphere to protein extract ratio and the number of washing steps, conditions were optimized so that a sufficient amount of protein bound differentially to the SHetA2-microspheres, compared with the unconjugated microspheres, could be detected in an SDS-gel stained with Coomassie blue (Fig. 2b). The areas of the dried gels for experiments 2 and 4 corresponding to the differential 75 kDa band in the lanes corresponding to the SHetA2- and unconjugated microsphere eluents were excised and subjected to QStar mass spectrometry analysis. Although no proteins could be detected in the bands from experiment 2, three related heat shock protein A (HSPA) family members, HSPA5, HSPA8 and HSPA9/mortalin, were identified to be present in the band from the SHetA2-microspheres and not in the band from the unconjugated microspheres from experiment 4 (Table 2).
To further validate the identification of these proteins, aliquots of the eluents from the SHetA2- and unconjugated microspheres from experiments 2 and 4 were evaluated by "shotgun" Orbitrap mass spectrometry analysis. Twenty-five individual proteins were identified to be present at significantly different levels in the SHetA2-microsphere eluent compared to the unconjugated-microsphere eluent in experiment 2, as determined by a t-test with p values below 0.05 being considered significant (Table 3). Among these proteins, all three of the HSPA5, HSPA8 and mortalin proteins, which were identified in the previous QStar analysis, also were found to be differentially present in the eluent from the SHetA2-microspheres in comparison to the eluent from the unconjugated microspheres. Because experiment 4 had a much higher background of non-specifically bound proteins, it was not surprising that a much higher number (224) of individual proteins were identified to be differentially present by using the identical analysis procedure (Supplemental Table 1). Among these proteins, mortalin (with a p value of 0.00031 in Supplemental Table 1) was among the significant proteins in both experiments 2 and 4, while HSPA5 and HSPA8 were found to be significant in experiment 2, but had p-values slightly above 0.05 in experiment 4 (listed at the bottom of Supplemental Table 1). Thus, in two independent experiments evaluated by two different proteomics approaches, mortalin was identified to be specifically bound by SHetA2.

Validation of SHetA2 effects on mortalin

Mortalin is a molecular chaperone that binds client proteins and supports their functional configuration and intracellular localization [24]. To determine if SHetA2 interferes with mortalin binding to client proteins in situ, the ability of mortalin protein to be co-immunoprecipitated with its client proteins was evaluated in protein extracts from the A2780 and SK-OV-3 human ovarian cancer cell lines treated with SHetA2 or control solvent for 4 hrs. Western blot analysis of co-immunoprecipitates of protein extracts from both cell lines demonstrated that an antibody to the mortalin client protein called p66shc can pull down mortalin protein in the untreated controls, but not in the SHetA2-treated cultures, indicating that SHetA2 disrupted the interaction of these two proteins inside the cell (Fig. 3a). Another mortalin client protein, p53, was found to co-immunoprecipitate with the anti-mortalin specific antibody in untreated control A2780 cultures, but not in the SHetA2-treated cultures, indicating that SHetA2 disrupts the interaction of these two proteins inside cells also (Fig. 3b). To further confirm SHetA2 disruption of this interaction, the A2780 protein extracts were co-immunoprecipitated with an anti-p53 specific antibody.
Western blot analysis confirmed that mortalin was co-immunoprecipitated with p53 in the control cultures, but not in the SHetA2-treated cultures (Fig. 3c). The SK-OV-3 cell line was not evaluated for mortalin/p53 interactions, because this cell line does not express p53. To verify that this effect was not due to a decrease in mortalin levels potentially caused by SHetA2 treatment, protein extracts from A2780 and SK-OV-3 cultures treated with SHetA2 or control solvent over a range of treatment times were evaluated by Western blot, which demonstrated similar levels of mortalin regardless of the treatment time (Fig. 3d).
Discussion
This work has demonstrated that SHetA2-conjugated magnetic microspheres in concert with mass spectrometry approaches can be used to identify SHetA2-binding proteins. In our review of the literature to date, there is no other report of using drug-conjugated microspheres to identify a drug-binding protein. The validity of the identified mortalin protein as an SHetA2-binding protein was supported by the identification of mortalin in two different approaches: QStar analysis of excised SDS-gel bands and Orbitrap analysis of aliquots of the whole microsphere eluents. Although QStar analysis could not detect proteins in SDS-gel bands that were too faint for visualization by Coomassie blue staining, the higher level of a non-specific unconjugated-microsphere band present when the experiment was scaled up to a Coomassie blue-detectable level did not interfere with the ability of mortalin to be identified as differentially present in the specific over the non-specific bands. Biological validation of mortalin was demonstrated by SHetA2 interference with mortalin binding to client proteins in human ovarian cancer cell lines. Inhibition of mortalin is a likely mechanism of the SHetA2 effects on mitochondria and apoptosis in cancer cells. Within 30 min of treating cancer cells, SHetA2 induces mitochondrial swelling and loss of mitochondrial membrane potential, leading to release of cytochrome c, generation of reactive oxygen species (ROS) and activation of the intrinsic apoptosis pathway [7,13,15,16,18,25,26]. Although mortalin can be found in the endoplasmic reticulum (ER), cytoplasmic vesicles and cytosol, the majority of the protein present in the cell is located within mitochondria [24]. Maintenance of mitochondrial membrane potential needed for electron transport chain function and ATP generation is dependent on mortalin interaction with a protein called p66shc [27].

Fig. 3 SHetA2 disrupts mortalin binding to client proteins p66shc and p53. a Protein extracts from cultures of the A2780 and SK-OV-3 human ovarian cancer cell lines that were either untreated (0), treated with 10 μM SHetA2 for 4 hrs (4) or treated with the same volume of DMSO solvent used to administer the SHetA2 for 4 hrs (4c) were incubated with an anti-p66shc antibody overnight. The next day, antibody/protein complexes were immunoprecipitated with protein G microspheres and non-specific binding was washed away with a Tris buffer containing NP-40 detergent. The microspheres were boiled to remove the antibody, and the samples were electrophoresed into an SDS-gel and transferred to a Western blot membrane, which was probed with an antibody to mortalin to detect co-immunoprecipitation and with an antibody to p66shc to detect input. b The experiment was performed as for a, except that the immunoprecipitation antibody used was against mortalin and the Western blot antibody used recognized p53. c The experiment was performed as for a, except that the immunoprecipitation antibody used was against p53 and the Western blot antibody used recognized mortalin. d Western blot of protein extracts from A2780 and SK-OV-3 cells treated with 10 μM SHetA2 for the number of hours indicated at the bottom of the gel or treated with the same volume of DMSO solvent for 24 hrs (24c) or 36 hrs (36c). The blots were stripped and re-probed with an antibody that recognizes GAPDH as a protein loading control. These results are representative of at least three separate experiments. IP = immunoprecipitating antibody; IB = immunoblotting antibody.
Release of p66shc from mortalin has been shown to cause opening of mitochondrial pores, release of cytochrome c and ROS generation [28]. Overexpression of mortalin can reduce ROS and protect against ischemic injury in vitro and in vivo [29]. Thus, we hypothesize that release of p66shc from mortalin repression mediates the mechanism by which SHetA2 induces intrinsic apoptosis. Current experiments are testing the validity of this hypothesis. Many of the other proteins identified in the two mass spectrometry approaches are known to be localized to the mitochondria and involved in metabolism. The fact that such proteins were identified in both approaches suggests that they may have attached indirectly to the magnetic microspheres through their affinity to mortalin attached to SHetA2.
SHetA2 disruption of mortalin binding to p53 is hypothesized to contribute to the mechanism by which SHetA2 regulates transcription factors and apoptosis. SHetA2 induces expression of the CHOP protein, leading to enhancement of the death receptor extrinsic apoptosis pathway [10,11,30]. Mortalin binds to and sequesters p53 from translocating to the nucleus, where it acts as a transcription factor to induce multiple genes that mediate apoptosis [31][32][33]. One of these p53-induced transcription factors is the SHetA2-induced CHOP [34,35]. CHOP expression could also be altered as a result of indirect effects of SHetA2 releasing p66shc, thereby allowing it to interact with signal transducer and activator of transcription 3 (STAT3) and Forkhead Box O3A (FOXO3) [36][37][38][39][40], both of which regulate CHOP expression [41,42]. Thus, we hypothesize that SHetA2-induced release of p53 from mortalin repression could augment SHetA2-mediated apoptosis. The ability of SHetA2 to induce apoptosis in p53-null cell lines, such as SK-OV-3, suggests that SHetA2-induced release of p53 from mortalin repression can augment but is not required for SHetA2 apoptosis. The observation that mortalin binds and inhibits p53 induction of apoptosis in stressed cells, but not in weakly stressed or unstressed cells, could explain the differential induction of apoptosis by SHetA2 in cancer over normal cells [43].
Mortalin and SHetA2 also conflictingly regulate the Bcl-2 family of proteins. Upon activation, the pro-apoptotic Bax protein of this family undergoes a conformational change and migrates to the mitochondria where it forms a pore that allows release of cytochrome c and other pro-apoptotic factors. The anti-apoptotic Bcl-2 protein binds to Bax and prevents this pore formation. The ratio of Bax to Bcl-2 has been shown to be a key determinant that can drive the initiation of apoptosis. SHetA2 reduces the expression of Bcl-2, but not Bax in cancer cells in vitro [7,13,26] and tumors in vivo [9] leading to induction of the intrinsic apoptosis pathway, while mortalin can prevent reduction of Bcl-2 and conformational changes in Bax leading to inhibition of the intrinsic apoptosis pathway [44][45][46].
Conclusions
SHetA2 binding proteins, such as mortalin, can be identified by SHetA2-affinity chromatography combined with mass spectroscopic analysis leading to testable hypotheses regarding the SHetA2 molecular mechanism of action. SHetA2 interferes with mortalin binding to client proteins in ovarian cancer cells. Inhibition of mortalin interaction with client proteins represents a logical mechanism for SHetA2-induced apoptosis, a theory that will be tested in future studies. | 2017-08-02T22:24:14.663Z | 2013-11-20T00:00:00.000 | {
"year": 2013,
"sha1": "dad889722cd94629d921be7366817a4813079e94",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10637-013-0041-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "dad889722cd94629d921be7366817a4813079e94",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
226731037 | pes2o/s2orc | v3-fos-license | Seroprevalence of hepatitis A virus, hepatitis B virus, hepatitis C virus, and syphilis among human immunodeficiency virus-infected people at a university hospital, Turkey
Introduction: Infections such as viral hepatitis and syphilis that share similar transmission routes with human immunodeficiency virus (HIV) may adversely affect the course of the disease. We aimed to determine the seroprevalence of viral hepatitis and syphilis among HIV-infected people at the initial stage of diagnosis. Material and methods: The medical records of 336 HIV-infected people aged 18 years and older, who were followed up between 2005 and 2018 at a university hospital in Samsun, Turkey, were evaluated retrospectively in terms of initial serological markers for viral hepatitis and syphilis. Results: Hepatitis B surface antigen (HBsAg) was positive in 13 (4.2%) of 303 patients, antibody to HBs antigen (anti-HBs) in 117 (39.2%) of 298 patients, antibody to hepatitis C virus (anti-HCV) in 3 (0.9%) of 301 patients, total antibody to hepatitis B core antigen (anti-HBc total) in 70 (29.2%) of 239 patients and total antibody to hepatitis A virus (anti-HAV total) in 224 (84.5%) of 265 patients. Hepatitis B virus (HBV) deoxyribonucleic acid (DNA) was detected in one (12.5%) of eight patients with isolated anti-HBc. Of 224 patients who were examined for syphilis, 34 (15.1%) were positive for Treponema pallidum hemagglutination (TPHA). Conclusions: In our study, a high seroprevalence of syphilis and low immunity to HBV were detected. Health care facilities that follow up HIV-infected people should determine their patients' serological profiles once the patients are diagnosed. It should be kept in mind that, due to behavioral risk factors among HIV-infected people, the prevalence of coinfections may be higher than the rate in the community.
Introduction
The life expectancy of human immunodeficiency virus (HIV)-infected people has significantly increased with highly active antiretroviral therapy [1]. However, among these individuals, there is a high likelihood of coinfections such as viral hepatitis and syphilis that share similar transmission routes with HIV, and these factors may adversely affect the course of the disease in the long term [2,3]. In co-infected patients with hepatitis B virus (HBV) or hepatitis C virus (HCV), antiretroviral therapy-related hepatotoxicity may lead to the progression of liver damage [4,5]. Therefore, chronic liver infections are among the most important causes of hospitalization and death among HIV-infected people today [6][7][8]. Syphilis, as another common infectious disease in the same group of people, serves as the gateway to HIV by causing ulcerative genital lesions. Syphilis may also facilitate HIV transmission by activating immune cells and increasing viral load. On the other hand, HIV infection may adversely affect the natural course of clinical manifestations and response to syphilis treatment [9][10][11]. Therefore, serological markers of viral hepatitis and syphilis should be investigated in terms of coinfections in HIV-infected people in order to select the appropriate treatment [12]. This study aimed to determine the seroprevalence of hepatitis A virus (HAV), HBV, HCV, and syphilis among HIV-infected people followed up at a university hospital at the initial stage of diagnosis.
Material and methods
An automated electrochemiluminescent immunoassay method (Cobas e411, Roche Diagnostics) was used to detect serologic markers of HIV, HAV, HBV and HCV infections during the study period. HIV-1 RNA, HBV DNA and HCV RNA levels were determined when necessary by real-time polymerase chain reaction (RT-PCR) using commercial kits (COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, COBAS AmpliPrep/COBAS TaqMan HBV Test, and COBAS AmpliPrep/COBAS TaqMan HCV Quantitative Test).
Definitive diagnosis of HIV infection was considered as reactive human immunodeficiency virus 1/2 antigen/antibodies (HIV 1/2 Ag/Ab) verified with a confirmatory method (Western blot, line immunoassay or indirect immunofluorescence) at a central public health laboratory. Serum RPR (Rapid labs, Great Britain) and TPHA (Plasmatec, United Kingdom) kits were used for syphilis testing, and TPHA positivity (with positive or negative RPR) was considered as exposure to Treponema pallidum. Isolated anti-HBc total positivity was defined as the presence of anti-HBc total in the absence of any other serological markers of HBV infection.
All procedures were performed according to the manufacturer's application instructions. The study was approved by the regional scientific ethics committee (B.30.2.ODM.0.20.08/398).
Results

Isolated anti-HBc total positivity was present in 12 (5%) of the patients with anti-HBc total positivity. Eight of the patients with isolated anti-HBc total positivity were studied for HBV deoxyribonucleic acid (DNA), and HBV DNA was detected (116 IU/ml) in 1 (12.5%) of these patients.
Of the 224 patients examined for syphilis by RPR, the results of 37 patients (16.5%) were reported as positive and those of 4 (1.7%) as in the gray zone. Of the 37 patients who were positive for RPR, TPHA was positive in 34 (91.8%), in the gray zone in 2 (4%) and negative in one (3.3%) patient. The TPHA results of the four patients whose RPR test was reported to be in the gray zone were positive in one patient, in the gray zone in two patients and negative in one patient. Overall, of the 224 patients screened for syphilis, 35 (15.6%) had a positive TPHA confirmation test at the initial screening.
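For context, the point prevalences reported above follow directly from the positive/tested counts; the sketch below also adds Wilson 95% confidence intervals, which are an illustrative addition and are not reported in the study:

```python
# Prevalence arithmetic for the counts reported above. The Wilson 95%
# confidence intervals are an illustrative addition; the paper reports
# point prevalences only.

from math import sqrt

def prevalence_wilson(pos, n, z=1.96):
    p = pos / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

for marker, pos, n in [("HBsAg", 13, 303), ("anti-HBs", 117, 298),
                       ("anti-HCV", 3, 301), ("anti-HAV total", 224, 265),
                       ("TPHA", 34, 224)]:
    p, lo, hi = prevalence_wilson(pos, n)
    print(f"{marker}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```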
Discussion
In our study, HBsAg seroprevalence was found to be 4.2%, while anti-HBc total and anti-HCV seroprevalences were 29.2% and 0.9%, respectively. Although the prevalence of coinfections varies according to the epidemiology of the disease, it is estimated that approximately 10% of HIV-infected people worldwide are co-infected with HBV, and the prevalence of coinfection may be as high as 25% in countries in Asia and Africa, which are endemic for chronic hepatitis B [13,14]. In terms of the prevalence of chronic HBV infection, Turkey is in the moderately endemic group, similar to the other Mediterranean and Middle Eastern countries, and HBsAg positivity is reported to be around 4% in the normal population [15][16][17]. Although studies have reported some regional differences, anti-HCV seroprevalence in the general population has been reported to be between 0.4% and 1.5% in Turkey [18][19][20][21]. In a global meta-analysis evaluating HCV co-infections among HIV-infected people, the mean HCV seroprevalence was found to be 6.2%, while the prevalence was 27% in eastern Europe and central Asia, where intravenous drug use is the main source of transmission for HIV [22]. Among 949 HIV-infected people from Istanbul, Turkey, HBsAg and anti-HCV seroprevalences were determined as 6.2% and 0.9%, respectively [23]. In another study evaluating serological data of 3,896 HIV-infected people across Turkey, HBV and HCV co-infections were reported as 3.2% and 0.5%, respectively [24]. The serological prevalence rates obtained in our study were similar to the chronic viral hepatitis infection rates among the general population and other HIV-infected people in Turkey. In countries where higher seroprevalence rates for chronic viral hepatitis have been reported among HIV-infected people, this situation is often attributed to higher rates of intravenous drug use. HIV is mainly transmitted sexually, and intravenous drug use rates are low among HIV-infected people, as in the general population, in Turkey [25][26][27]. In our study, 39.2% of HIV-infected people had antibodies to HBV and 84.5% to HAV. Thus, more than half of the diagnosed individuals were found to be susceptible to HBV. HAV seropositivity is up to 90% in adulthood in Turkey, which is similar to the seropositivity results of HIV-infected people in our study [28]. HIV-infected people should be vaccinated against HBV, as HBV co-infection causes a higher risk of cirrhosis and hepatocarcinoma. Also, the viral load is higher during HAV infection, and the duration of viremia is prolonged, with simultaneous fecal excretion [29,30]. While both the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) recommend vaccination against HAV if any other medical, behavioral, epidemiological or occupational condition is added to the HIV infection, the Turkish Ministry of Health recommends the implementation of the HAV vaccine in all HIV-infected people susceptible to HAV infection [31][32][33].
In our study, isolated anti-HBc total positivity was detected in 12 (5%) patients, of whom HBV DNA was detected in 1 (12.5%) of the 8 patients who were evaluated by a molecular test. The prevalence of isolated anti-HBc total positivity varies between 1% and 30% in different populations, depending on the endemicity of HBV [34][35][36]. Isolated anti-HBc total positivity was found to be between 1.8% and 5% in studies conducted in the general population in our country, which is similar to the HBsAg seropositivity rate in the population [37][38][39]. Isolated anti-HBc total positivity is a common serological pattern in HIV-infected people. Studies published in different regions have reported that the prevalence of isolated anti-HBc total in HIV-infected people is between 10.6% and 45%, and the prevalence of occult infection with intermittent viremia has been reported to range from 0% to 89.5% [40][41][42][43]. Two studies conducted among HIV-infected people from Turkey determined the isolated anti-HBc total positivity as 2.8% and 13.5%, and occult HBV infection as 100% and 10.3%, respectively [12,44]. Although the prevalence of isolated anti-HBc total positivity and the occult HBV infection rates obtained in our study were similar to the data obtained from Turkey and to other national prevalence study results, the reported rates generally span a very wide range because of the limited numbers of patients. Therefore, it is not appropriate to generalize these results, and studies with higher numbers of patients are needed.
Of the 224 patients who were examined for syphilis, 34 (15.1%) were positive for TPHA. Syphilis and HIV have similar routes of transmission and the same risk factors. Sex workers, intravenous drug users, men who have sex with men, those with a history of sexually transmitted diseases and people with multiple partners have a higher risk of acquiring HIV, syphilis and other sexually transmitted diseases [45]. Syphilis coinfection rates in HIV-infected people are in the range of 2-43% in Europe and 1-21% in North America [46]. While the syphilis prevalence in the general population in Turkey is between 0% and 0.46%, it has been reported to be between 8.7% and 31.6% among transgender people and sex workers [47][48][49][50]. The seroprevalence of syphilis in HIV-infected people from Turkey has been found in two studies to be 8% and 9.8% [51,52]. In our study, the prevalence of syphilis was significantly higher than the prevalence in the general population but was similar to the domestic and international prevalences in similar patient groups. As the risk factors (occupation, number of partners, sexual orientation) of HIV-infected people may differ between global and local studies, the prevalence of syphilis may be affected by these variables [53].
In conclusion, in 2018, a total of 6519 people were newly diagnosed with HIV in the 15 countries in the centre of the WHO European Region, giving a rate of 3.3 per 100 000 population. The highest rates (> 3.0) were reported by Cyprus (9.0), Bulgaria (4.4), Turkey (3.9), Montenegro (3.7), Albania (3.5), Romania (3.4) and Poland (3.1) [54]. With the increasing number of patients, surveillance of newly diagnosed patients for sexually transmitted diseases such as viral hepatitis and syphilis is important.
In the study conducted in our center, a high seroprevalence rate of syphilis and low rate of immunity to HBV were detected. Screening of infection agents such as HAV, HBV, HCV, and syphilis among HIV-infected people in the initial period of diagnosis is important in preventing the negative effects of coinfections on the course of the disease and provides the chance of protection in people who are susceptible to HAV and HBV by vaccination.
In addition, our findings have shown that HIV infection should be considered in patients newly diagnosed with syphilis, which is detected at a higher rate in HIV-infected people compared to the general population; in this way early detection and treatment can prevent HIV transmission as well as syphilis.
Therefore, health care facilities that follow up HIV-infected people should determine the serological profiles of their patients initially once the patients are diagnosed. | 2020-08-27T09:08:54.530Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "9adfd53e54da6b3d106d172f56cacc9ec2706860",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.archivesofmedicalscience.com/pdf-118939-59553?filename=Seroprevalence%20of.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "feca1f307c1298cc2f7292bc1cc54fad242bb33c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237366008 | pes2o/s2orc | v3-fos-license | TRPV1 Channel Activated by the PGE2/EP4 Pathway Mediates Spinal Hypersensitivity in a Mouse Model of Vertebral Endplate Degeneration
Low back pain (LBP) is the primary cause of disability globally. There is a close relationship between Modic changes or endplate defects and LBP. Endplates undergo ossification and become highly porous during intervertebral disc (IVD) degeneration. In our study, we used a mouse model of vertebral endplate degeneration generated by lumbar spine instability (LSI) surgery. Safranin O and fast green staining and μCT scanning showed that LSI surgery led to endplate ossification and porosity, whereas the endplates in the sham group were cartilaginous and homogenous. Immunofluorescent staining demonstrated the innervation of calcitonin gene-related peptide- (CGRP-) positive nerve fibers in the porous endplate of LSI mice. Behavioral tests showed increased spinal hypersensitivity in LSI mice. Moreover, we found increased cyclooxygenase 2 (COX2) expression and an elevated prostaglandin E2 (PGE2) concentration in the porous endplate of LSI mice. Immunofluorescent staining showed the colocalization of E-prostanoid 4 (EP4)/transient receptor potential vanilloid 1 (TRPV1) and CGRP in the nerve endings in the endplate and in the dorsal root ganglion (DRG) neurons, and western blotting analysis demonstrated that EP4 and TRPV1 expression significantly increased in the LSI group. Our patch clamp study further showed that LSI surgery significantly enhanced the current density of the TRPV1 channel in small-size DRG neurons. A selective EP4 receptor antagonist, L161982, reduced the spinal hypersensitivity of LSI mice by blocking the PGE2/EP4 pathway. In addition, TRPV1 current and neuronal excitability in DRG neurons were also significantly decreased by L161982 treatment. In summary, the PGE2/EP4 pathway in the porous endplate could activate the TRPV1 channel in DRG neurons to cause spinal hypersensitivity in LSI mice. L161982, a selective EP4 receptor antagonist, could turn down the TRPV1 current and decrease the neuronal excitability of DRG neurons to reduce spinal pain.
Introduction
Low back pain (LBP) is the primary cause of disability globally [1], with a 1-month prevalence of 23.2% [2]. Since LBP is generally a persistent symptom, about two-thirds of patients with LBP still complain of pain-related symptoms even after 12 months [3]. This persistent painful condition is associated with the development of multiple physical and psychosocial disabilities [4]. In 2017, a total of 577 million people experienced LBP, and more than 60 million healthy life years were lost worldwide, which resulted in a huge financial burden [5]. Unfortunately, we still do not understand the natural course of LBP, and there is no effective therapeutic approach to modify this disease of multiple causes.
To search for the main cause of LBP, many research groups have concentrated on the aneural [6,7] intervertebral disc (IVD). Since only sporadic nerve endings exist in the outermost layer of the annulus, the IVD as the main source of LBP remains debatable [8]. However, the endplate, which is rich in nerve endings in its ossified structure [7,9], has been overlooked. In patients with LBP, researchers have detected signal changes in the degenerative endplates by magnetic resonance imaging (MRI) [10,11]. Moreover, the close relationship between Modic changes or endplate defects and LBP has also been verified in some previous studies [12,13].
Endplates undergo ossification and become highly porous during IVD degeneration [14][15][16], and more nerve innervation occurs in degenerative endplates than in healthy endplates [17]. It has been reported that osteoclasts generated porous endplates with calcitonin gene-related peptide-(CGRP-) positive nerve ending innervation in the mice with lumbar spine instability (LSI) surgery [18]. As pain is generated by nociceptors, porous endplates with sensory nerve innervation should be the precondition for spinal pain in LSI mice.
Prostaglandin E2 (PGE2) is a lipid factor generated at damaged regions in diverse tissues, which can lead to inflammatory or neuropathic pain [19]. In the peripheral nervous system, PGE2 excites the primary sensory neurons of the dorsal root ganglion (DRG) through its E-prostanoid (EP) receptors. There are 4 types of G protein-coupled EP receptors (EP1, EP2, EP3, and EP4) mediating PGE2's functions. In previous studies, the EP4 receptor has been shown to participate in PGE2-induced inflammatory pain and sensory neuron excitability [20,21]. In addition, selective EP4 receptor antagonists can relieve PGE2-induced inflammatory pain. For instance, it has been reported that several EP4 receptor antagonists can suppress inflammatory pain caused by carrageenan or by complete Freund's adjuvant [22][23][24].
The PGE2/EP4 pathway could activate a series of painrelated ion channels, such as transient receptor potential vanilloid 1 (TRPV1) [25]. TRPV1 is made up of four subunits. It is a nonselective, outwardly rectifying cation channel [26], which is distributed not only in the DRG neurons but also in the peripheral terminals [27]. Various factors could activate the TRPV1 channel, such as ligand binding [28], voltage [29], or temperature [30]. The TRPV1 channel is considered to be an aggregator of nocuous chemical, mechanical, or thermal stimuli and is demonstrated to be one of the most important ion channels participating in inflammatory or neuropathic pain [31,32].
In this study, we found an elevated concentration of PGE2 in the porous endplate of LSI mice. This elevated PGE2 activated the TRPV1 channel in DRG neurons via the EP4 receptor on CGRP+ sensory nerves, causing spinal hypersensitivity. In particular, L161982, a selective EP4 receptor antagonist, turned down the TRPV1 current and decreased the neuronal excitability of DRG neurons to reduce spinal pain.
Mice and In Vivo Treatment.
All animal experiments in this study were approved by the Local Committee of Animal Use and Protection of the Third Hospital of Hebei Medical University (Hebei, China). The C57BL/6J male mice were obtained from Shanghai SLAC Laboratory Animal Co. Ltd. (Shanghai, China). We anesthetized the 2-month-old mice with ketamine (at a dosage of 100 mg/kg) and xylazine (at a dosage of 10 mg/kg). The spinous processes and the supraspinous and interspinous ligaments of the L3-L5 vertebrae were resected to create the LSI model that led to vertebral endplate degeneration. Correspondingly, the posterior paravertebral muscles of the L3-L5 vertebrae were detached in the sham group. At 8 weeks after operation, LSI mice received vehicle or L161982 (5 mg/kg/d) (Tocris, U.S.) by intraperitoneal injection for 2 weeks. To overactivate the TRPV1 channel, LSI mice received a capsaicin injection at the caudal endplates of L4-L5. Specifically, 2 μL capsaicin (2 mg/mL) was injected into the left part of the caudal endplates of L4-L5 using borosilicate glass capillaries after drilling a hole at the left part of the endplate. The drilling holes were sealed with bone wax immediately after injection to prevent tracer leakage. After capsaicin injection, the wound was sutured, and a heating pad was used to protect mice during recovery from anesthesia. Using an overdose of isoflurane, we euthanized the animals at 4 or 8 weeks after sham or LSI operation or at 2 weeks after L161982 or vehicle treatment.
Histomorphometry and Immunofluorescence. The lumbar spine or DRG samples were dissected from mice and then fixed in 10% buffered formalin (4°C, 24 h). The samples of the lumbar spine were decalcified with 0.5 M ethylenediamine tetraacetic acid at 4°C for 3 weeks, and the L2 DRGs were dehydrated in 30% sucrose at 4°C for 48 h. The spine samples were embedded in optimal cutting temperature compound (OCT) or paraffin. The DRG samples were embedded in OCT. We used 4 μm thick sections (lumbar spine) for safranin O and fast green staining. 40 μm thick sections of the spine samples were used for nerve fiber-related immunostaining. 10 μm thick sections of the spine or DRG samples were used for other immunostaining. For immunofluorescent staining, we incubated the sections (lumbar spine or DRG) with primary antibodies to CGRP (1 : 100, Abcam, U.S.), COX2 (1 : 100, Abcam, U.S.), EP4 (1 : 100, Abcam, U.S.), and TRPV1 (1 : 200, Abcam, U.S.) (4°C, overnight). Then, we incubated the sections (lumbar spine or DRG) with secondary antibodies (room temperature, 1 h, avoiding light). Fluorescence or confocal microscopes were used to capture the images of spine or DRG samples. ImageJ software (National Institutes of Health, U.S.) was used for the quantitative analysis.
Behavioral Testing. Pressure tolerance was measured by the vocalization threshold (as a nociceptive threshold) using a force gauge (Bioseb). Animals were gently restrained and received the pressure force from a sensor on the skin over the L4-L5 spine. A gradually increasing pressure force (50 g/s) was applied to the mice until the animals made an audible vocalization. To prevent tissue injury, the maximum force was limited to 500 g. Spontaneous activity was measured by several indicators (including distance traveled, active time, and maximum speed) using activity wheels (Bioseb). Animals were kept in cages similar to their home cages, and the wheels of the device could be rotated by the animals in both directions. The software of the device recorded real-time data on the animals' spontaneous activity.
The pain hypersensitivity in response to mechanical stimulation was measured by the hind paw withdrawal frequency (PWF) using the von Frey test with 0.07 g or 0.4 g filaments (Stoelting). Animals were restrained in a transparent plastic cage placed on a metal mesh grid. The midplantar surface of the animal's hind paw was stimulated with the 0.07 g or 0.4 g filament through the mesh grid. Each filament was applied with enough pressure to buckle it, and the mechanical stimulus was delivered 10 times at 1 s intervals. A withdrawal of the hind paw after stimulation by the von Frey filament was recorded as a response.
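As described above, PWF reduces to simple counting: the fraction of the 10 stimuli that evoke a withdrawal. A minimal sketch, with placeholder withdrawal counts:

```python
# PWF as described above reduces to simple counting: the percentage of the
# 10 stimuli with a given filament that evoke a hind paw withdrawal. The
# withdrawal counts here are placeholders, not data from this study.

N_STIMULI = 10

def pwf(withdrawals, n_stimuli=N_STIMULI):
    """Paw withdrawal frequency in percent."""
    return withdrawals / n_stimuli * 100

for filament, withdrawals in {"0.07 g": 4, "0.4 g": 7}.items():
    print(f"{filament} filament: PWF = {pwf(withdrawals):.0f}%")
```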
Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR). Total RNA of the L4-L5 caudal endplate was extracted using TRIzol reagent (Tiangen, Beijing, China). We assessed RNA purity by the 260/280 nm absorbance ratio. With the RevertAid™ First Strand cDNA Synthesis Kit (Thermo Fisher, U.S.), we reverse transcribed 1 μg RNA into cDNA. We then performed qRT-PCR using SuperReal PreMix Plus (Tiangen, Beijing, China). Relative expression of target genes was analyzed by the 2^−ΔΔCT method. The primers used in our study are listed in Table 1.
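For readers unfamiliar with the 2^−ΔΔCT calculation, the following minimal sketch shows the arithmetic. The CT values are invented for illustration only; they are not data from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ΔΔCT relative quantification.
    ΔCT = CT(target gene) - CT(reference gene), computed per group;
    ΔΔCT = ΔCT(treated) - ΔCT(control)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Illustrative CT values: a target gene in an LSI sample vs. a sham sample,
# normalized to a housekeeping gene with identical CT in both groups.
fold = relative_expression(ct_target=24.0, ct_ref=18.0,
                           ct_target_ctrl=26.6, ct_ref_ctrl=18.0)
print(f"fold change ~ {fold:.1f}")  # ~6.1-fold increase for these numbers
```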
Current Clamp Recording. The pipette solution contained (in mM): KCl 140, EGTA 0.5, HEPES 5, and Mg-ATP 3 (pH 7.3 with KOH). The bath solution for DRG neurons contained (in mM): NaCl 140, KCl 3, MgCl2 2, CaCl2 2, and HEPES 10 (pH 7.3 with NaOH). Cells were examined for action potential firing with a series of 1 s current steps from 50 pA to 500 pA in 50 pA increments or with a linear ramp of current from 0 pA to 1000 pA (500 ms duration). A −200 pA current step (200 ms) was injected to measure membrane input resistance (R_in).
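The input resistance estimate follows directly from Ohm's law applied to the steady-state voltage deflection produced by the −200 pA step. Below is a minimal sketch with assumed voltages (not recorded values from the study):

```python
# R_in = deltaV / deltaI on the steady-state deflection from a -200 pA step.
I_step = -200e-12    # injected current, -200 pA
V_rest = -60e-3      # assumed resting potential (V), illustrative only
V_steady = -84e-3    # assumed steady-state voltage during the step (V)

delta_V = V_steady - V_rest           # -24 mV deflection
R_in = delta_V / I_step               # ohms
print(f"R_in = {R_in / 1e6:.0f} MOhm")  # 120 MOhm for these example numbers
```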
2.11. Statistical Analysis. We conducted data analyses using SPSS 15.0 software. Data are shown as means ± standard deviations. We used the unpaired two-sample t-test to compare the means of two groups and one-way ANOVA with Bonferroni's post hoc test to compare the means of multiple groups. With two-way ANOVA with repeated measures, we analyzed the effects of LSI surgery on the animals' spinal hypersensitivity and movements at different time points. We established inclusion and exclusion criteria before each experiment and did not exclude any sample during data analysis. p < 0.05 was regarded as statistically significant for all experiments.
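A rough equivalent of this pipeline in open-source tooling might look like the sketch below, with SciPy standing in for SPSS; the group values are simulated, and the Bonferroni post hoc step is implemented as corrected pairwise t-tests. The two-way repeated-measures ANOVA would require a longer sketch (e.g., statsmodels' AnovaRM) and is omitted here.

```python
import numpy as np
from scipy import stats

# All group values are simulated; they are not data from the study.
rng = np.random.default_rng(0)
sham = rng.normal(300, 40, 10)   # e.g. vocalization thresholds (g)
lsi = rng.normal(220, 40, 10)

# Unpaired two-sample t-test for two groups.
t_stat, p_two_groups = stats.ttest_ind(sham, lsi)

# One-way ANOVA across multiple groups, followed by Bonferroni-corrected
# pairwise comparisons as the post hoc step.
vehicle = rng.normal(215, 40, 10)
l161982 = rng.normal(270, 40, 10)
f_stat, p_anova = stats.f_oneway(sham, vehicle, l161982)
pairs = [(sham, vehicle), (sham, l161982), (vehicle, l161982)]
p_bonf = [min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs)) for a, b in pairs]

print(p_two_groups, p_anova, p_bonf)
```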
Sensory Innervation in the Porous Endplate in LSI Mice.
To demonstrate endplate porosity in LSI mice, we examined the L4-L5 caudal endplates at 4 and 8 weeks after surgery using histological staining and 3-dimensional μCT. Safranin O and fast green staining revealed that bone marrow cavities appeared in the degenerative endplates of LSI mice, while the endplates in the sham group were cartilaginous and homogeneous (Figure 1(a)). Moreover, 3-dimensional μCT reconstructions also showed porous endplates in the LSI mice, while the microstructure of the endplates was intact in the sham group (Figures 1(b) and 1(c)). However, LSI surgery did not influence the bone mass of the lumbar vertebrae (Supplementary Figure 1A-E).
Immunofluorescent staining showed innervation by CGRP+ nerve fibers in the porous endplate at 4 and 8 weeks after LSI surgery, whereas CGRP+ nerve endings were absent from the homogeneous endplates of sham-operated mice (Figures 1(d) and 1(e)).
Spinal Hypersensitivity Increased in LSI Mice.
In the behavioral tests, the vocalization threshold was recorded as an indicator of pressure tolerance. We found that LSI surgery significantly decreased pressure tolerance at 4 and 8 weeks compared with sham-operated mice (Figure 2(a)).
We further examined the effects of LSI surgery on the animals' voluntary and spontaneous activity, including distance traveled, active time per 24 h, and maximum speed of movement. All three indicators decreased significantly in LSI mice compared with the sham group at 4 and 8 weeks. Finally, we performed the von Frey test to evaluate mechanical hypersensitivity of the hind paw, which indirectly reflects the severity of LBP. The PWF was significantly increased by LSI surgery at 4 and 8 weeks (Figures 2(e) and 2(f)).
PGE2 Concentration and EP4 Expression Increased in the Porous Endplate of LSI Mice. Since PGE2 is a cyclooxygenase 2 (COX2) product in the inflammatory environment, we examined COX2 expression, prostaglandin E synthase (PGES) expression, and PGE2 concentration in L4-L5 endplates at 8 weeks in the two groups. qRT-PCR and immunostaining showed an increase in COX2 expression at 8 weeks in the LSI group relative to the sham group (Figures 3(a)-3(c)). Similarly, PGES mRNA and PGE2 concentration were significantly increased at 8 weeks after LSI surgery, as measured by qRT-PCR and ELISA, respectively, relative to the sham group (Figures 3(d) and 3(e)).
Since four EP receptor subtypes (EP1-EP4) mediate PGE2's functions, we used qRT-PCR to evaluate changes in the mRNA levels of these four receptors after LSI surgery. Interestingly, we found a 6-fold increase in EP4 expression and a 2-fold increase in EP2 expression in the LSI group relative to the sham group. However, there was no significant difference in EP1 and EP3 expression between the LSI and sham groups (Figure 3(f)).
EP4 and TRPV1 Are Expressed in CGRP+ Nerves in the Porous Endplate and in CGRP+ Neurons of L2 DRG in LSI Mice. Immunofluorescent staining showed that EP4 was expressed in CGRP+ nerve fibers in degenerative endplates (Figure 4(a)). Moreover, immunofluorescent staining also revealed colocalization of TRPV1 and CGRP in the degenerative endplates (Figure 4(b)).
In a previous study, a retrograde tracing experiment conducted in LSI mice found that DiI was retrogradely transported mainly to L1-L2 DRG, especially L2 DRG [18]. Therefore, we performed costaining of EP4 and CGRP in L2 DRG. We found that the percentage of EP4+CGRP+/CGRP+ neurons was higher in the LSI group than in the sham group (Figures 4(c) and 4(d)). Meanwhile, we conducted costaining of TRPV1 and CGRP in L2 DRG. The percentage of TRPV1+CGRP+/CGRP+ neurons was also increased in the LSI group (Figures 4(e) and 4(f)). L2 DRG neurons were isolated from the mice at 8 weeks and cultured overnight. With the whole-cell patch clamp, we performed electrophysiological experiments on small-size neurons (Cm < 42 pF) taken from L2 DRGs [33]. The TRPV1 current amplitude (1 μM capsaicin) increased significantly in LSI mice (Figures 5(c) and 5(d)). Furthermore, the proportion of capsaicin-responsive neurons also increased in LSI mice relative to the sham group (Figure 5(e)).
L161982, a Selective EP4 Receptor Antagonist, Reduced Spinal Hypersensitivity in LSI Mice. We used L161982, an EP4 receptor antagonist, to investigate the effects of blocking PGE2/EP4 signaling on spinal hypersensitivity. In pressure tolerance and spontaneous activity tests, L161982 treatment increased the pressure tolerance and spontaneous activity of LSI mice compared to the vehicle group (Figures 6(a)-6(d)).
Similarly, the inhibitory effect of L161982 on hind paw mechanical hypersensitivity, as indicated by decreased PWF to 0.07 g or 0.4 g stimulation, was also demonstrated at 2 weeks after treatment (Figures 6(e) and 6(f)).
However, the EP4 receptor antagonist L161982 did not influence the endplate porosity of LSI mice (Supplementary Figure 1F, G).
Moreover, we injected capsaicin at the caudal endplate of L4-L5 of LSI mice to overactivate the TRPV1 channel; the corresponding behavioral results are shown in Supplementary Figure 2A-F.
3.7. L161982 Reduced TRPV1 Channel Current Density in L2 DRG Neurons. Western blotting analysis showed that EP4 and TRPV1 expression decreased in L2 DRG of mice with L161982 treatment relative to vehicle treatment (Figures 7(a) and 7(b)).
The TRPV1 current amplitude (1 μM capsaicin) decreased significantly in LSI mice with L161982 treatment relative to vehicle treatment (Figures 7(c) and 7(d)).
In addition, the capsaicin-responsive neuron percentage decreased in the L161982 group compared to the vehicle group (Figure 7(e)).
We found that TRPV1 overactivation by capsaicin injection increased the TRPV1 current measured by patch clamp. The TRPV1 current was markedly higher in the LSI+capsaicin+L161982 group than in the LSI+L161982 group (Supplementary Figure 3A, B).
L161982 Reduces the Excessive Neuronal Excitability of DRG Neurons Induced by LSI.
To determine whether LSI surgery increases DRG neuronal excitability and whether PGE2/EP4/TRPV1 pathway activation is responsible for DRG neuron hyperexcitability of LSI mice, evoked action potentials (APs) were studied by current clamp recording.
With step current injection, LSI surgery increased AP firing frequency compared to the sham group, and the AP firing frequency could be reduced by L161982 treatment (Figures 8(a) and 8(b) and Table 2). The minimal depolarizing current that could evoke APs was significantly decreased after LSI operation, which could also be reversed by L161982 (Figure 8(c) and Table 2).
In addition, we evaluated neuronal hyperexcitability by ramp current stimulation. LSI surgery significantly increased the firing of APs relative to the sham group, and the firing of APs was lowered by L161982 treatment (Figures 8(d) and 8(e) and Table 2). The percentage of neurons that fired APs under ramp current stimulation was also calculated. We found a higher responding rate in LSI mice compared with the sham group, and the responding rate was significantly lowered by L161982 treatment (Figure 8(f)).
Discussion
IVD degeneration is regarded as one of the most common diseases causing LBP [34]. In recent decades, Modic changes, manifested as endplate signal changes on MRI, have been demonstrated to be a specific cause of LBP [35]. Endplates undergo ossification and become porous during IVD degeneration, which leads to LBP [36,37]. It has been reported that degenerative endplates receive more nerve innervation than healthy endplates [17]. In our study, we used a mouse model of vertebral endplate degeneration induced by LSI surgery [14]. In the behavioral tests, pressure tolerance and spontaneous activity were significantly decreased in LSI mice, whereas hind paw mechanical hypersensitivity was significantly increased.
Consistent with a previous study [18], we demonstrated that CGRP+ nerves innervate the porous endplate of LSI mice. It has been reported that CGRP can be generated from peripheral or central nerve fibers in response to mechanical stimulation of the skin [38]. CGRP receptors are widely distributed in the pain-related pathway [39]. Acute or chronic nociception can prompt sensory nerves or central terminals to release more CGRP into the dorsal horn [40,41]. Thus, CGRP+ nerve innervation of the porous endplate was a precondition for spinal hypersensitivity in LSI mice.

In our study, we found that COX2 expression and PGE2 concentration were significantly increased in the porous endplate of LSI mice. Moreover, there was a 6-fold increase in EP4 expression and a 2-fold increase in EP2 expression in the endplate of LSI mice relative to sham mice, but no significant difference in EP1 and EP3 expression between the two groups. Thus, the PGE2/EP4 pathway might play a crucial role in the spinal hypersensitivity of this animal model. When tissue is damaged, inflammatory mediators such as PGE2 are released locally or in the spinal cord [42]. PGE2 induces pain sensitization and leads to CGRP release in sensory nerves in vivo [43], as well as in cultured DRG neurons in vitro [44]. PGE2 exerts its functions via its G protein-coupled receptors (EP1-EP4) [45]. The EP4 receptor is G protein-coupled and activates adenylate cyclase, which enhances the intracellular activation of cAMP-dependent protein kinases (e.g., PKA) [46]. PGE2 has been reported to promote capsaicin-evoked CGRP generation by DRG neurons via its G protein-coupled EP4 receptor [21]. In our study, we demonstrated colocalization of EP4 and CGRP in nerve endings both in porous endplates and in DRG neurons. We also found colocalization of TRPV1 and CGRP in nerve endings both in porous endplates and in DRG neurons by immunofluorescent staining.
The crucial role of TRPV1 activation in the spinal pain of LSI mice was also demonstrated in our study. We found higher expression of TRPV1 in L2 DRG, which innervates the L4-L5 endplates of LSI mice. The upregulated expression of TRPV1 in L2 DRG correlated well with the increase in spinal hypersensitivity. Furthermore, the patch clamp results showed that the LSI operation increased TRPV1 current density, suggesting that functional TRPV1 expression was increased by LSI surgery. Thus, the increased current density of the TRPV1 channel might participate in LSI-induced spinal hypersensitivity.
TRPV1, a member of the TRP ion channel family, has been recognized as "a molecular gateway" to nociceptive sensation. TRPV1 is mainly distributed in the dorsal root ganglion, trigeminal ganglion, spinal cord, and peripheral nerve endings. In addition, TRPV1 is also found in some nonneural tissues such as the lung and the gastrointestinal and respiratory tracts. In recent years, TRPV1 has been found to be important in mediating hypersensitivity induced by inflammation and by noxious chemical, mechanical, or thermal stimuli in the airway, skin, gastrointestinal tract, and other organs [47][48][49][50][51]. There is less evidence on TRPV1-mediated hypersensitivity in a vertebral endplate degeneration model. However, in the arthritis model, whose pathogenesis is similar to that of vertebral endplate degeneration, the importance of TRPV1 in mediating hypersensitivity has been proven. Thermal hyperalgesia and osteoarthritic pain are associated with activation of the TRPV1 channel [52]. TRPV1 may contribute to the pain hypersensitivity and inflammation of arthritis via an ERK-mediated pathway [53]. Polypeptide APHC3, a mode-selective TRPV1 antagonist, can significantly reverse mechanical hypersensitivity in the arthritis model [54]. This evidence shows that TRPV1 is important in mediating hypersensitivity in degenerative osteoarthritis.

TRPV1 also contributes to spinal hypersensitivity. Hypersensitivity induced by activation of spinal cord PAR2 receptors has been shown to be mediated by TRPV1 receptors [55]. TRPV1 is functionally expressed in GABAergic spinal interneurons, and activation of spinal TRPV1 results in long-term depression of excitatory inputs and a reduction of inhibitory signaling to spinothalamic tract projection neurons, eventually leading to central sensitization [56]. Evidence has also demonstrated that blocking TRPV1 can relieve spinal hypersensitivity. Thermal and mechanical hypersensitivity in the spine can be relieved by a TRPV1-selective antagonist [57]. Intrathecal administration of an antisense oligonucleotide against TRPV1 reduced mechanical hypersensitivity in rats with spinal nerve ligation [58]. The hypersensitivity induced by lumbar 4 spinal nerve ligation in mice was completely reversed by the TRPV1 antagonist A-425619 [59]. The heat sensitivity threshold in the L5 ipsilateral dorsal horn of the spinal cord was markedly increased in Trpv1−/− mice compared with WT mice [60]. Capsazepine, a TRPV1 blocker, greatly inhibited thermal hypersensitivity in a spinally sensitized state [61]. AMG9810, a specific antagonist of TRPV1, significantly attenuated the activation of bilateral spinal astrocytes and microglia [33]. Together, this evidence indicates that blocking TRPV1 can relieve spinal hypersensitivity.
There is, in fact, a close relationship between the PGE2/EP4 pathway and the TRPV1 channel. PGE2 has been shown to increase surface trafficking of EP4 and TRPV1 in vitro [62]. In a restraint stress rat model, overproduced PGE2 in injured nerves chronically increased EP4 and TRPV1 expression in primary sensory neurons, and EP4 antagonists relieved both inflammatory and neuropathic pain [25]. In our study, behavioral tests showed that L161982, an EP4 receptor antagonist, relieved spinal hypersensitivity by blocking the PGE2/EP4 pathway. PGE2 acts on target cells through its receptors EP1, EP2, EP3, and EP4, and interactions of PGE2/EP4 and TRPV1 in pain hypersensitivity have been proven. PGE2 enhanced capsaicin-induced currents in DRG neurons through EP4 [20] and EP4-PKA signaling cascades [63]. PGE2 potentiated pain evoked by the TRPV1 agonist [64]. The upregulation of TRPV1 in DRG neurons was suppressed by a selective COX2 inhibitor, suggesting that PGE2 stimulates TRPV1 synthesis in DRG neurons [65]. Furthermore,
PGE2-induced thermal hyperalgesia was abolished in TRPV1-knockout mice [63]. This evidence suggests that functional interactions between PGE2/EP4 and TRPV1 are crucial to PGE2-induced nociceptor sensitization. A recent study showed that PGE2/EP4 increases TRPV1 cell surface trafficking in DRG neurons via cAMP/PKA/ERK/MAPK signaling pathways; moreover, PGE2 induces TRPV1 externalization and enhances TRPV1 activity [62]. In our study, we showed that L2 DRG neurons exhibited increased excitability in the LSI model, and this hyperexcitability was decreased by inhibition of the PGE2/EP4 pathway with L161982. These results show that the TRPV1 channel, activated by the PGE2/EP4 pathway, participates in the enhanced excitability of DRG neurons in LSI mice. It has been reported that hyperexcitability of DRG neurons leads to central sensitization and chronic pain [66]. Therefore, the TRPV1 channel activated by the PGE2/EP4 pathway causes hyperexcitability of DRG neurons, which can drive spinal pain.
In conclusion, the PGE2/EP4 pathway in the porous endplate can activate the TRPV1 channel in DRG neurons to cause spinal hypersensitivity in LSI mice. L161982, a selective EP4 receptor antagonist, can reduce the TRPV1 current and decrease neuronal excitability in DRG neurons, thereby relieving spinal pain.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported in part by grant No. 20JCJQIC00230 and by the National Natural Science Foundation of China (No. 81971660). | 2021-09-01T05:38:07.214Z | 2021-08-21T00:00:00.000 | {
"year": 2021,
"sha1": "4f2ce2483dcbec7b994a43570bcc6054c36699e9",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/omcl/2021/9965737.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f2ce2483dcbec7b994a43570bcc6054c36699e9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
227250456 | pes2o/s2orc | v3-fos-license | Socioeconomic inequalities in the place of death in urban small areas of three Mediterranean cities
Background
Dying at home is the most frequent preference of patients with advanced chronic conditions, their caregivers, and the general population. However, most deaths continue to occur in hospitals. The objective of this study was to analyse the socioeconomic inequalities in the place of death in urban areas of Mediterranean cities during the period 2010–2015, and to assess whether such inequalities are related to palliative or non-palliative conditions.
Methods
This is a cross-sectional study of the population aged 15 years or over. The response variable was the place of death (home, hospital, residential care). The explanatory variables were: sex, age, marital status, country of birth, basic cause of death coded according to the International Classification of Diseases, 10th revision, and the deprivation level of each census tract, based on a deprivation index calculated from 5 socioeconomic indicators. Multinomial logistic regression models were fitted to analyse the association between the place of death and the explanatory variables.
Results
We analysed a total of 60,748 deaths: 58.5% occurred in hospitals, 32.4% at home, and 9.1% in residential care. Death in hospital was 80% more frequent than at home, while death in a nursing home was more than 70% less frequent than at home. All the variables considered were significantly associated with the place of death, except country of birth, which was not significantly associated with death in residential care. For death in hospital, the deprivation level of the census tract presented a significant association (p < 0.05), such that the probability of death in hospital vs. home increased as the deprivation level increased. The deprivation level was also significantly associated with death in residential care, but there was no clear trend, showing a more complex association pattern. No significant interaction of deprivation level with cause of death (palliative, not palliative) was detected.
Conclusions
The probability of dying in hospital, as compared to dying at home, increases as the socioeconomic deprivation of the urban area of residence rises, both for palliative and non-palliative causes. Further qualitative research is required to explore the needs and preferences of low-income families who have a terminally-ill family member and, in particular, their attitudes towards home-based and hospital-based death.
Background
Interest in the study of decision-making regarding the place of death has increased in recent decades [1,2]. Dying at home is the most frequent preference of patients with advanced chronic diseases [3][4][5][6], their caregivers [4], and the general population [4], and so it has become an indicator of the quality of palliative care [7]. However, to date, most deaths continue to occur in hospitals [6,8].
Place of death predictors are traditionally grouped into three categories: disease-related factors, individual factors, and environmental factors [2]. Research on these predictors through death certificates has usually examined a country's global information, or that of a region, a municipality, or a specific population subgroup in relation to individual factors such as age [6,7,9,10], sex [6,7,9,10], educational level [7,11], marital status [6,10], place of residence in the rural or urban context [6,7] or the patient's own preferences [2]. Similarly, the relationship with disease factors, like the diagnosis of the patient's pathology [6][7][8][12][13][14] and environmental factors such as the availability of family support and home care resources [15][16][17], as well as other more generally related to or derived from local laws or health policies, has also been examined [6,18].
In addition, recent studies have also explored the relationship between place of death and aggregate indicators of socioeconomic deprivation [11,19] with the areas of greatest deprivation being those with the highest hospital mortality rates. Furthermore, various studies seem to indicate that social differences have a lesser effect on the place of death when measured at the individual level (for example, educational level) than when measured at the aggregate level (commonly by means of deprivation at the area level) [19]. In this regard, an adequate instrument to measure inequalities is a deprivation index (DI), like those used in the MEDEA and INEQ-CITIES studies on social inequalities and risk of death [20][21][22], that combine a set of indicators such as the percentages of unemployed, temporary workers, low educational level of different population groups, or manual workers and allow classifying small geographic areas according to their level of socioeconomic deprivation [23].
In Spain, although some retrospective studies have included indicators of deprivation with respect to the place of death, they have only done so for some specific pathologies [24][25][26]. As far as this team is aware, no studies of the relationship between these deprivation indices and place of death have been carried out for general mortality and according to different causes of death. Knowledge of these aspects, especially in urban settings, where the majority of the population in Europe and Spain is concentrated, would allow the adoption of organizational and guidance measures for health services, particularly those related to palliative and end-of-life care. Therefore, the objective of this study was to analyse the socioeconomic inequalities in the place of death among deaths in the large cities of the Valencian Community (Alicante, Castellón, and Valencia) during the period 2010-2015, using levels of deprivation by small areas of the cities, and also to assess whether such inequalities differ depending on palliative (oncological, non-oncological) or non-palliative causes of death.
Study design
This is a cross-sectional study of the resident population in the cities of Alicante, Castellón, and Valencia aged 15 years or over whose death took place between 2010 and 2015. These cities are located on the Mediterranean coast, in the Valencian Community, with an average annual population during the study period of 333,198 inhabitants in Alicante, 177,784 in Castellón, and 794,874 in Valencia.
Data sources and variables
All deaths of residents in those cities during the study period were included in the analysis. Data obtained from the Valencian Community Mortality Registry, anonymised and included in the Medical Death Certificate -Statistical Death Bulletin (Spanish initials: CMD-BED), were used. For each city, deaths were geo-referenced and assigned to their resident census tract (CT) using the address included in the CMD-BED.
The response variable was the place of death. The Spanish registry of deaths includes five possible categories for this variable: home, hospital, residential care, place of work, or other, and the place may be left blank (not recorded). Deaths in the first three places were the focus of this analysis.
The explanatory variables included in the CMD-BED, were: sex (male, female); age (15-64, 65-74, 75-84, 85 and over); marital status (not declared, single, widowed, separated, married); place of birth (Spain, another country) and basic cause of death coded according to the International Classification of Diseases, 10th revision (ICD-10). As this research is based on retrospective anonymized administrative data, the approval of the ethical committee is not necessary for its implementation in Spain.
Causes of death were grouped into conditions needing palliative care (CNPC), further divided into oncological (OCNPC) and non-oncological (NOCNPC) conditions, and conditions not needing palliative care (CnotPC). Thus, the result is the variable 'type of cause' with 3 categories: OCNPC, NOCNPC, CnotPC.
Socioeconomic level
In each city, a deprivation index (DI) was calculated for each CT from the following indicators: unemployment, manual workers, temporary employees, insufficient education among young people (16 to 29 years old), and insufficient education in the general population, all expressed as percentages and obtained from the Spanish Population and Housing Census of 2011. These indicators have been used in the coordinated national project MEDEA to construct a deprivation index through a principal components analysis based on census data in the main Spanish cities [23]. The index used in this study was developed within the framework of the MEDEA III project (third edition of the coordinated MEDEA project).
The DI values of each CT, in each city, were classified by percentiles 10 (P10), 25 (P25), 75 (P75), and 90 (P90), following the methodology described in Oliva-Arocas et al. (2020) [28], which classifies the CTs into five levels of deprivation (DL) according to their DI value: DL1, DI values below P10; DL2, DI values between P10 and P25; DL3, DI values between P25 and P75; DL4, DI values between P75 and P90; and DL5, DI values above P90. This classification was defined, in line with the objective of the study, to identify and quantify the most extreme inequality: that between the most socioeconomically favoured areas (DL1) and the most deprived areas (DL5).
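A minimal sketch of this two-step procedure is given below, assuming (as one common reading of the MEDEA methodology) that the DI is the first principal component of the standardized indicators; the exact MEDEA weighting may differ, and the indicator values are simulated rather than census data.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Census tracts x 5 indicators (% unemployed, manual workers, temporary
# employees, insufficient education 16-29, insufficient education overall).
# Random values stand in for the 2011 census data used in the study.
rng = np.random.default_rng(1)
X = rng.uniform(0, 60, size=(500, 5))

# Deprivation index: first principal component of the standardized indicators.
di = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(X)).ravel()

# Classify tracts into the five deprivation levels by the stated percentiles.
p10, p25, p75, p90 = np.percentile(di, [10, 25, 75, 90])
dl = pd.cut(di, bins=[-np.inf, p10, p25, p75, p90, np.inf],
            labels=["DL1", "DL2", "DL3", "DL4", "DL5"])
print(pd.Series(dl).value_counts().sort_index())
```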
Statistical analysis
Frequencies and percentages were calculated for the following variables: place of death (home, hospital, and residential care), cause of death (CNPC, OCNPC, NOCNPC, CnotPC), and deprivation level (DL). To compare the frequencies of death between places of death, the excesses of probability of death ('odds') in hospital and residential care were calculated by dividing the percentage of deaths in each location by the percentage of deaths at home. The Nelson approximation was used to calculate the 95% confidence intervals (95% CI) [29].
Multinomial logistic regression models were fitted to analyse the association between the place of death and the DL, with the place of death (home, hospital, or residential care) as the response variable and the DL as the explanatory variable. Simple models and models adjusted for the rest of the sociodemographic variables were estimated. In addition, the existence of different socioeconomic inequalities according to the type of cause was assessed by including in the model an interaction term between the DL and the type of cause. In all the models, the reference category was domicile (home), and the 'odds ratio' (OR) with its corresponding 95% CI was estimated as the measure of association. The SPSS statistical program, version 25, was used, with a 0.05 level of statistical significance.
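As an illustration of this model specification, the sketch below fits a multinomial logit with home as the reference category using statsmodels; the data frame, column names, and covariate set are hypothetical stand-ins for the CMD-BED variables, not the real data. An interaction term such as DL × type of cause can be added by including the products of the corresponding dummy columns before fitting.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical individual-level records with illustrative column names.
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "place": rng.choice(["home", "hospital", "residential"], n, p=[0.32, 0.59, 0.09]),
    "dl": rng.integers(1, 6, n).astype(str),          # deprivation level 1-5
    "age_group": rng.choice(["15-64", "65-74", "75-84", "85+"], n),
    "sex": rng.choice(["male", "female"], n),
})

# Multinomial logit with home (code 0) as the reference category, so the
# exponentiated coefficients are ORs for hospital vs. home and for
# residential care vs. home.
y = pd.Categorical(df["place"], categories=["home", "hospital", "residential"])
X = pd.get_dummies(df[["dl", "age_group", "sex"]], drop_first=True).astype(float)
X = sm.add_constant(X)
fit = sm.MNLogit(y.codes, X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios relative to death at home
```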
Results
From 2010 to 2015, there were 67,521 deaths among the resident population of the cities under study, of which 67,200 (99.5% of the total) occurred in the population aged 15 or over. Of these, 876 (1.3%) could not be assigned to their census tract of residence because a valid residence address was unavailable or did not belong to the city. Of the remaining 66,324 deaths, 60,748 occurred at home, in hospital, or in residential care facilities, and these were used in the data analysis. Of these, 49,021 were related to CNPC.
Differences between death in hospital and residential care versus home
A total of 58.5% of deaths occurred in hospitals, 32.4% at home, and 9.1% in residential care (see Table 1).
The percentage of deaths in hospital ranged from a minimum of 47.3% in people aged 85 and over to a maximum of 81.2% in people aged 15 to 44 years. At home, it went from 16.5% in people aged 15-44 to 39.8% in people residing in the DL1 areas (the areas with the best socioeconomic status). For residential care, the death percentages ranged between 2.3% in subjects aged 15 to 44 years and 14.4% in those over 85 years of age. Death in hospital was 80% more frequent than at home (odds for hospital vs. home, O_H/D = 1.806), while death in residential care was more than 70% less frequent than at home (odds for residential care vs. home, O_R/D = 0.281).
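These two odds values follow directly from the overall percentages by the rule stated in the methods, as a one-line arithmetic check confirms:

```python
# Percentage dying in each setting divided by the percentage dying at home.
pct = {"hospital": 58.5, "home": 32.4, "residential": 9.1}

o_hospital_home = pct["hospital"] / pct["home"]        # 1.806 -> ~80% excess
o_residential_home = pct["residential"] / pct["home"]  # 0.281 -> >70% deficit
print(f"O_H/D = {o_hospital_home:.3f}, O_R/D = {o_residential_home:.3f}")
```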
When comparing hospital with domicile (home), the O_H/D values were significantly greater than 1 (p < 0.05) for every category of the explanatory variables, with the highest excesses of death in men, residents of the city of Alicante, people aged between 15 and 44 years, the divorced, and those born in another country. Death in hospital was significantly more frequent for CnotPC, followed by OCNPC and finally NOCNPC. Regarding the socioeconomic level, the excess of deaths in hospital increased with the deprivation level of the area of residence.

For residential care, the O_R/D was always significantly (p < 0.05) less than 1, with lower mortality in men, residents of Alicante, people aged between 45 and 64 years, the married, and those born in another country. Compared to domicile, death in residential care was lower for OCNPC, with NOCNPC and CnotPC presenting similar deficits. The socioeconomic level of the area of residence did not present a clear trend, with low values both at favoured levels such as DL1 and at high deprivation levels such as DL5.

Regarding overall mortality, Fig. 1 shows both a slight decrease in deaths in hospital and a slight increase in deaths in residential care homes from the beginning to the end of the period, with statistically significant differences (p < 0.001) for these two settings, as well as a slight increase in residential care deaths for CnotPC. Deaths at home remained stable over time.
Specific conditions needing palliative care
Regarding death in residential care, all causes presented O_R/D values significantly lower than 1 (p < 0.05). The low O_R/D value for malignant neoplasm stood out, followed by liver disease. The CnotPC reached an intermediate mortality deficit in residential care.
Association between the place of death and level of deprivation
Table 3 shows the crude and adjusted ORs that estimate the association between the place of death and the explanatory variables. All the variables considered were significantly associated with the place of death, in both the simple and the multivariate analysis models, except country of birth, which was not significantly associated with death in residential care.
The multivariate model with all variables had pseudo-R² indices of 0.1034 (Cox and Snell) and 0.1240 (Nagelkerke). The adjusted ORs showed that the deprivation level of the CT presented a significant association (p < 0.05) that increased from the lowest DL to the highest. That is, the probability of death in hospital vs. home increased as the deprivation level rose. In addition, the highest probability of death in hospital vs. home was associated with residing in the city of Alicante, male sex, younger age, a marital status other than married, and a place of birth outside Spain. The probability of death in hospital vs. home was higher for CnotPC, followed by OCNPC, vis-à-vis the NOCNPC.
The level of CT deprivation was also significantly associated with death in residential care, but with lower ORs than for death in hospital and with a diffuse pattern, since both highly deprived and less deprived CTs presented excesses of the probability of death over the reference level (DL1). Regarding the other variables, the highest probability of death in residential care vs. home occurred in Castellón, among older women of any marital status other than married, with no significant association with country of birth. Only OCNPC presented a significant deficit in deaths.
The estimated ORs of association between DL and place of death represent a measure of the level of inequality in the probability of death according to the deprivation level of the area of residence. To check whether these inequalities could differ depending on the cause of death (OCNPC, NOCNPC, CnotPC), an interaction term between DL and cause of death was added to the previous multivariate model. The resulting model presented pseudo-R² indices of 0.1037 (Cox and Snell) and 0.1244 (Nagelkerke), with little change relative to the model without interaction. Furthermore, the interaction term was not significant (p = 0.221), meaning that the estimated inequalities do not differ across these cause groupings.
The effect of sex on the relationship between DL and place of death was also checked by adding an interaction term between DL and sex to the main effects model. The interaction term was not significant (p = 0.322), and the model presented pseudo-R² indices of 0.1036 (Cox and Snell) and 0.1242 (Nagelkerke), with little change relative to the model without interaction. This suggests that sex does not substantially alter the deprivation effect.
To delve into the causes that presented the highest association between death in hospital and the level of deprivation, multinomial logistic regression models were fitted for each of the CNPC and also for the CnotPC. Table 4 presents the ORs between death in hospital vs. home and level of deprivation, adjusted for the rest of the explanatory variables (city, sex, age, marital status, and country of birth). Mortality due to respiratory disease, Alzheimer's disease, dementia and senility, and mortality from CnotPC stood out for their strong and significant association with the DL compared with mortality from the rest of the causes of death; this effect contributed to a greater extent to the global association. Mortality from malignant neoplasm and heart disease also presented a significant, but lower, association with DL than that of the group of all deaths. For the rest of the causes, no significant association with DL was detected.
Discussion
The objective of this study was to analyse the socioeconomic inequalities in the place of death in the large cities of the Valencian Community (Alicante, Castellón, and Valencia) during the period 2010-2015, using levels of socioeconomic deprivation by small areas of the cities. Part of our aim was to assess whether such inequalities differed depending on whether the deaths were due to conditions needing palliative care (oncological, non-oncological) or conditions not needing palliative care. The results highlight the existence of socioeconomic inequalities, in the sense that greater deprivation clearly increases the probability of death in hospital vs. home. This effect is not as evident in the case of death in a nursing home. There was no evidence that the estimated inequalities differ according to the cause of death groupings. As in previous research [13], other sociodemographic variables such as country of birth, sex, age, marital status, and cause of death were also associated with place of death. Importantly, the Survey of Care for Patients with Terminal Illness shows that in 2009 the Spanish population (aged 18+) preferred to be cared for at home (45.0%), followed by care in a specialized center (31.9%), and only 17.8% would choose a hospital in the case of an irreversible disease in its terminal phase [30]. This shows the high percentage of the population that prefers to spend their final days at home.
Regarding the excesses of deaths in hospital vs. home, significantly higher values were observed in the most deprived CTs, especially for respiratory disease, Alzheimer's disease, malignant neoplasm, heart disease, and non-palliative causes. In all cases, an association was observed between living in areas with greater deprivation and the probability of dying in hospital, as indicated by previous studies [11,19,31,32]. This is an important result, since death at home is usually considered an indicator of the quality of palliative care services [3,10]. The higher number of deaths in hospital may be related to the difficulty that people living in CTs with higher deprivation have in accessing health resources, which might not reach these patients adequately [11]. Likewise, it is possible that, in line with what other authors suggest, people who live in a place with greater deprivation prefer to die in hospital rather than at home or in a residential care home [33,34]; other contextual factors might also be at work, such as the difficult economic or labour situation associated with the places of greatest deprivation [19].
Another possible explanation for these results has to do with the care burden and the social support available to caregivers. A lower socioeconomic level is associated with a greater burden of care and a greater difficulty in receiving formal support. The burden of care has been associated, in many cases, with the need for help in daily life tasks rather than with the specific symptoms presented by the patient [35]. This burden of care can make caregivers prefer to have their family members die in hospital, where there are more resources and where they will receive more support to cope with the end of life.
In this regard, a recent qualitative study explored the relationships between social disparities and the burden of care in cancer patients and showed how social determinants of health such as low income, low education, precarious housing conditions, rurality (associated with difficulty in accessing palliative care), or lack of social support can exacerbate caregiver overload [36]. Likewise, social support is an important variable that can mediate and positively regulate the perceived care burden [37,38]. Neergaard et al. identified in their review a series of variables related to social support, such as living with other family members, having family support, being married, availability of space at home, and the region of residence, as well as the caregiver's sociodemographic variables (age, sex, and relationship with the patient) [19].

Regarding the diagnoses most associated with the different levels of deprivation, a very heterogeneous profile was found. This includes the three trajectories associated with the end-of-life process: advanced cancer (malignant neoplasm), advanced organ disease (heart and respiratory disease), and advanced dementia (Alzheimer's disease, dementia, and senility) [39]. In addition to these CNPC, diagnoses for CnotPC were also significant, and deprivation seems to play an important role. This great variability in the diagnoses found is consistent with studies indicating that the burden of care is similar between those diagnosed with an oncological process and cases of diseases not related to cancer [35,40].
Results regarding the risk of death in a residential care home vs. home showed a more complex association pattern, with both high and low deprivation CTs showing an excess probability of death in residential care vs. home. An increase in the number of deaths in residential care facilities can be observed, related to the ongoing aging of the population as well as the increase in pathologies such as dementia, which can mean that, regardless of the level of deprivation, many people end up dying in residential care homes. Also, nursing homes in Spain include both public and private institutions, and so, regardless of the deprivation level of a person's CT, it is possible to move to them. In this regard, various studies in countries such as the United States have associated low socioeconomic status with access to poorer-quality nursing homes [41,42]. It is important to highlight that the progressive breakdown of the Spanish health system, in particular primary care and public health, due to the long period of austerity and privatizations (particularly in some regions) [43,44], plus the overload of care resulting from budget cuts, has had a serious impact on the quality of primary care [45]. The consequences of this deterioration have differentially affected the most deprived populations. This may explain why the hospital is the main place of death for many people living in more deprived places.
This work presents a series of limitations, among which are those related to the use of data from the CMD-BED, since there may have been undetected errors in diagnosis or during encoding and transcription. The CMD-BED was not modified during the study period. On the other hand, the CMD-BED in Spain includes a limited number of variables, so it was not possible to consider some of them individually, e.g., the employment situation or the type of work. Instead, the deprivation level variable included information on such variables at the contextual level of the area of residence, and thus the excess probability of death in one or another location according to the level of deprivation could reflect both the effect of individual socioeconomic level and the contextual effect of the area of residence.
Another limitation comes from not having georeferenced all deaths. Nevertheless, only a very small percentage (1.3%), lower than usual in this type of study, was not included. These losses should have had little effect on the results obtained.
It should be borne in mind that the classification into DLs is not the only possible one. Nevertheless, it responds to the objective of preferentially evaluating the inequality between the population groups of greatest and least deprivation, with consistent results across the different categories used.
Finally, this work did not include preferences about the place of death; further research is needed to take such preferences into account.
Conclusions
The results of this study indicate that the probability of dying in hospital, as compared to dying at home, increases as the socioeconomic deprivation of the urban area of residence rises, and this holds both for palliative causes of death (oncological and non-oncological) and for non-palliative causes. However, when comparing death in residential care vs. home, the effect of the level of socioeconomic deprivation is very limited, since only the areas of least socioeconomic deprivation (the first level) are slightly associated with a lower probability of death in residential care. While socioeconomic differences in access to formal and informal care may explain the greater probability of death in hospital for people living in areas of greater deprivation, how these factors influence death in residential care vs. home is largely unknown. Further qualitative research is required to explore the needs and preferences of low-income families who have a terminally-ill family member and, in particular, their attitudes towards home-based and hospital-based death.
"year": 2020,
"sha1": "f678a9071e2e8f25e606ac28a684f6ebae8c252c",
"oa_license": "CCBY",
"oa_url": "https://equityhealthj.biomedcentral.com/track/pdf/10.1186/s12939-020-01324-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f678a9071e2e8f25e606ac28a684f6ebae8c252c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
897493 | pes2o/s2orc | v3-fos-license | Involvement of tyrosine-specific protein kinase and protein kinase C in J774A.1 macrophage functions activated by Tinospora cordifolia
Background
Macrophages are the first line of defense and are important participants in the bi-directional interaction between innate and specific immunity. Macrophages are quiescent and become activated when given a stimulus. In our previous studies we reported that guduchi or LPS treatment of macrophages enhanced production of nitric oxide (NO) and increased tumoricidal activity against L929 fibroblast cells.
Objective
In the present study, the effect of Tinospora cordifolia, commonly known as guduchi, on macrophage activation and its mechanism of action, i.e., the involvement of a protein kinase C inhibitor and a tyrosine-specific protein kinase inhibitor, was investigated.
Materials and Methods
The present study was undertaken to determine whether H-7 (an inhibitor of protein kinase C) and/or genistein (an inhibitor of tyrosine-specific protein kinase) could inhibit guduchi- or LPS-induced macrophage NO and TNF-α production or reduce the cytolysis of L929 fibroblast cells.
Results
In vitro incubation with H-7 and/or genistein completely inhibited guduchi- or LPS-induced NO and TNF-α production by macrophages (J774A.1).
Conclusion
The inhibitory effects of H-7 and/or genistein suggest that phosphorylation via these kinases may upregulate NO synthase activity in macrophages.
Introduction
Macrophages are quiescent cells, which become activated when stimulated. Different types of agents, such as antibiotics, anti-metabolites, and cytokines, may exert an immunomodulating action expressed as the augmentation and/or inhibition of different immune responses [15]. One of the most promising recent alternatives to classical antibiotic treatment is the use of immunomodulators for enhancing host defense responses [23]. A number of natural products and synthetic immunopotentiators, termed Biological Response Modifiers (BRMs), are becoming increasingly popular for their potential in augmenting immune responses. Among the natural BRMs, many herbs and medicinal plants have long been known for their immunoaugmentary potential; however, only recently have scientists recognized them for their possible BRM actions. The herb guduchi has attracted a great deal of attention in the biomedical arena because of its broad spectrum of therapeutic properties and relatively low toxicity. Our study investigates the BRM Tinospora cordifolia (guduchi) as an activating agent of macrophages in vitro, with the bacterial endotoxin LPS as a positive control.
In the present study, two different inhibitors, H-7 (an inhibitor of protein kinase C) and genistein (a tyrosine-specific protein kinase inhibitor), were used to analyze the regulation of guduchi- or LPS-mediated macrophage activation. Protein kinase C (PKC) has been shown to be a signal transducer during tumorigenesis, tumor cell invasion, and metastasis, and PKC inhibition has been reported to inhibit tumor cell invasion [22]. Genistein, an isoflavone, has been shown to inhibit cell proliferation and enhance apoptosis in cancer cells. Accumulating evidence suggests that genistein may synergistically promote the anti-proliferative effects of chemotherapeutic agents on neoplasia without toxicity [24]. In the present study, the involvement of these specific kinases in guduchi- or LPS-mediated macrophage activation was investigated.
Macrophage activation is known to occur through a series of stages ranging from a level equivalent to resident tissue macrophages and culminating at an activated state whereupon macrophages become competent to kill several pathogens and lyse tumor cells [1,14]. Although activated macrophages produce a number of physiologically active molecules with cytotoxic and/or cytostatic effects, only interleukin-1 (IL-1), tumor necrosis factor (TNF) and reactive nitrogen intermediates (RNI) have been clearly implicated in monocyte/macrophage mediated tumor cytotoxicity [2,5,8,9,13,21].
Nitric oxide (NO) is formed enzymatically from a terminal guanidine nitrogen of L-arginine by the so-called NO synthases (NOSs), which yield L-citrulline as a co-product [16,17]. Both in the NO-producing cell and in specific NO target cells, NO functions as the first messenger of nitrinergic signal transduction, activating GC-S [GTP pyrophosphate-lyase (cyclizing)] [3] and thereby increasing the intracellular concentration of the second messenger molecule cGMP [4,18]. Whether the complete pathway operates in macrophages has not been investigated. In guduchi-activated macrophages, NO biosynthesis could also be the mediator of macrophage-mediated tumor cytotoxicity [7,9-11,21]. While our understanding of the mechanism of action of this BRM is still developing, it appears that the primary mechanism involves induction of the immune system. Our previous studies suggest that the immunostimulatory, antitumor, bactericidal, and other therapeutic effects of guduchi occur via macrophage stimulation [10-12]. We have focused this study on the involvement of protein kinase C and tyrosine-specific protein kinase in BRM (guduchi)- or LPS-mediated macrophage functions. The purpose of the present study was to investigate whether H-7 (an inhibitor of protein kinase C) and/or genistein (an inhibitor of tyrosine-specific protein kinase) could decrease macrophage-derived NO and TNF-α production in the setting of an in vitro herbal (guduchi) treatment or endotoxin (LPS) challenge.
Materials and methods
Reagents: Dulbecco's Modified Eagle Medium (DMEM) with L-glutamine and 25 mM HEPES buffer was purchased from HiMedia Pvt. Ltd., India. Fetal bovine serum was purchased from Hyclone (Logan, USA) and heat-inactivated at 56 °C for 30 min. A whole-plant extract of T. cordifolia was used for the study. The plant was obtained from a medicinal plant nursery in Pune, Maharashtra, and was subjected to extraction with 200 ml methanol at 50 °C for 8 cycles in a Soxhlet apparatus. The extract was then concentrated with a rotary vacuum evaporator and used for further analysis. For enzymatic analysis, fresh crude extract in phosphate buffer (pH 7) was used. The guduchi extract prepared in incomplete DMEM was tested for endotoxin contamination by the limulus amebocyte lysate assay, which showed insignificant levels (0.0007 ng/mg). Necessary precautions were taken to avoid endotoxin contamination throughout the investigation by using endotoxin-free buffers, reagents, and sterile water. All other chemicals and solvents used in this study were obtained from Sigma Chemical Company (St. Louis, USA) and were of analytical grade or the highest grade available.
Cells: The macrophage J774A.1 and fibroblast L929 cell lines were obtained from the National Center for Cell Sciences (NCCS, Pune). The J774A.1 cell line (origin: BALB/c mouse; nature: mature) was used as the source of macrophages and was grown and maintained in Dulbecco's Modified Eagle Medium (DMEM, pH 7.5) enriched with 10% fetal bovine serum at 37 °C and 5% CO2.
Viability assay
Cell viability was determined by the trypan blue dye exclusion technique. Equal volumes of cell suspension were mixed with 0.4% trypan blue in PBS, and the unstained viable cells were counted. These cells were then used for the cytotoxicity assay at a density of 2 × 10^6 cells/ml in 96-well tissue culture plates.
Stimulation of macrophages: Macrophage cells (cell line J774A.1) from the late log phase of growth (subconfluent) were seeded in 96-well flat-bottom microtiter plates (Tarsons, India) in a volume of 100 μl under adequate culture conditions. Guduchi (80 μg/well) or LPS (10 μg/well) was added in a volume of 100 μl, with or without inhibitors, in triplicate. The cultures were incubated at 37 °C in a 5% CO2 environment. After 24 h and 48 h of incubation, percent viability was checked, and culture supernatants were collected and assayed for nitric oxide and TNF-α activity.
Inhibitor treatment of the macrophages
Along with guduchi or LPS, macrophage cells were treated with two different inhibitors, H-7 (an inhibitor of protein kinase C) and/or genistein (an inhibitor of tyrosine-specific protein kinase), at a concentration of 10 μM each. Supernatants collected from these cells were assayed for nitrite generation and for tumoricidal activity to check for inhibition.
Nitrite assay
The concentration of stable nitrite, an end product of nitric oxide, present in the supernatants of treated or untreated J774A.1 macrophage cell cultures (2 × 10^6 cells/ml) was measured by the method of Ding et al. [6] based on the Griess reaction [20]. Briefly, 50 μl of supernatant was incubated with an equal volume of Griess reagent (1% sulphanilamide in 2.5% H3PO4 and 0.1% naphthylethylenediamine dihydrochloride in distilled water, the two solutions mixed 1:1 at room temperature) for 10 min. The absorbance at 550 nm was then measured in a microtitre plate reader. The standard curve for nitrite was prepared using 10-100 μM sodium nitrite in distilled water.
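A minimal sketch of the quantification step: fit a linear standard curve to the nitrite standards and read sample concentrations off it. The absorbance values below are invented for illustration; they are not readings from the study.

```python
import numpy as np

std_conc = np.array([10, 25, 50, 75, 100])           # µM NaNO2 standards
std_a550 = np.array([0.05, 0.12, 0.24, 0.35, 0.47])  # hypothetical A550 values

# Fit concentration as a linear function of absorbance.
slope, intercept = np.polyfit(std_a550, std_conc, 1)

sample_a550 = np.array([0.215, 0.252])   # e.g. two treated wells (invented)
nitrite_uM = slope * sample_a550 + intercept
print(nitrite_uM.round(1))
```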
Assay for TNF activity
The activity of TNF-α in the culture supernatants of guduchi- or LPS- and inhibitor- (H-7 or genistein, 10 μg/ml each) treated and untreated macrophages was measured by a modification of the Mosmann method, based on the reduction of MTT (Sigma) to a colored formazan by living cells [18]. Briefly, 2 × 10^6 L929 cells in 100 μl complete medium were grown in the wells of a 96-well tissue culture plate in the presence of 1 μg/ml actinomycin D and 100 μl of test culture supernatant. Cell viability was assessed after 24 h of incubation. The supernatant was discarded, 10 μl of MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; 3 mg/ml] was added to each well, and the plates were further incubated for 2 h at 37 °C. The enzyme reaction was then stopped by the addition of 150 μl of dimethyl sulfoxide (DMSO). Plates were incubated for 10 min under agitation at room temperature, and colorimetric measurement of the formazan was performed on an enzyme-linked immunosorbent assay plate reader at 570 nm. Cells treated with culture supernatants of untreated macrophages were considered as the control. Percent viability and percent cytolysis were then calculated as follows.
Percent viability = (E/C) × 100, where E is the absorbance of cells treated with culture supernatants of guduchi- or LPS-treated (or untreated) macrophages and C is the absorbance of cells treated with medium alone.

Percent cytolysis = 100 − percent viability.
Three independent experiments in triplicate were performed for the determination of TNF-α in the supernatants. Mouse TNF-α at 10-100 pg was used as the standard, and TNF-α levels were calculated from the standard curve.
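The two calculations above, plus the standard-curve interpolation, reduce to a few lines of arithmetic; the absorbance and standard-curve values below are invented, and a linear standard curve relating cytolysis to TNF-α is assumed for illustration.

```python
import numpy as np

# E = A570 of L929 cells exposed to a macrophage supernatant;
# C = A570 of cells given medium alone. Values are hypothetical.
C = 0.80
E = 0.24
pct_viability = 100.0 * E / C            # 30%
pct_cytolysis = 100.0 - pct_viability    # 70%

# TNF-α is then read off a standard curve built from 10-100 pg standards.
std_pg = np.array([10, 25, 50, 75, 100])
std_cytolysis = np.array([12, 26, 48, 69, 88])   # hypothetical responses
slope, intercept = np.polyfit(std_cytolysis, std_pg, 1)
tnf_pg = slope * pct_cytolysis + intercept
print(f"{pct_cytolysis:.0f}% cytolysis ~ {tnf_pg:.0f} pg TNF-alpha")
```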
Statistical analysis
Statistical significance of differences between control and experimental samples was assessed by Student's t-test and one-way ANOVA. All experiments were performed with triplicate samples, and conclusions were drawn from three independent experiments.
Involvement of second messengers in macrophage activation
In our previous study we reported activation of macrophages by guduchi or LPS treatment: elevated TNF-α levels were found in guduchi- or LPS-treated macrophage cell supernatants after 24 h as compared with macrophages treated with medium alone [10-12]. Here, a significant decrease (p < 0.05) in nitrite levels was observed for macrophages treated with guduchi or LPS together with H-7 (an inhibitor of protein kinase C) and/or genistein (an inhibitor of tyrosine-specific protein kinase) (Fig. 1). Inhibitor treatment also led to increased tumor cell viability and decreased tumor cell cytolysis in vitro as compared with the earlier results (Fig. 2A and B). Macrophage activation was thus significantly reduced by inhibitor treatment.
Nitrite levels
To determine whether genistein and/or H-7 inhibit guduchi- or LPS-induced NO production, the nitrite assay was performed with macrophages (J774A.1) as described in the materials and methods. Fig. 1 shows the NO levels in the supernatants of macrophages pre-treated in vitro with guduchi or LPS and subsequently exposed to H-7 and/or genistein.
Macrophages treated with the BRM (guduchi) or endotoxin (LPS) showed significantly enhanced nitrite levels (45.65 µM or 52.91 µM, respectively) as compared to macrophages treated with medium alone (12 µM). When macrophages were treated with guduchi or LPS together with the inhibitors H-7 and/or genistein, some elevation in nitrite level over the medium control was still observed; overall, however, nitrite levels in the supernatants of inhibitor-treated macrophages were significantly lower. Guduchi incubated with macrophages together with H-7 or genistein gave nitrite levels of 23.91 µM and 24.34 µM, respectively, whereas LPS treatment together with H-7 or genistein gave 24.78 µM and 26.52 µM, respectively. A reduction in nitrite generation was also found in macrophages treated with medium alone plus the inhibitors H-7 or genistein (7.21 µM and 7.15 µM, respectively) (Fig. 1).
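One common way to express such results is percent inhibition of the stimulus-induced signal relative to the unstimulated baseline. The minimal Python sketch below applies this derived metric (not reported as such in the study) to the guduchi and guduchi + H-7 means above:

```python
# Percent inhibition of stimulus-induced nitrite by an inhibitor,
# computed from the group means reported above (units: uM).
baseline = 12.0        # medium alone
stimulated = 45.65     # guduchi
inhibited = 23.91      # guduchi + H-7

inhibition = (stimulated - inhibited) / (stimulated - baseline) * 100
print(f"~{inhibition:.0f}% inhibition of guduchi-induced nitrite by H-7")
```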
TNF-α activity
As per our previous results [9], guduchi- or LPS-treated macrophage cell supernatants showed enhanced TNF-α levels and, consequently, enhanced cytolysis of L929 fibroblast cells as compared to medium alone. When incubated with the fibroblast cell line (L929) for 24 h, guduchi- or LPS-treated macrophage cell supernatants produced 70–75% or 65–70% cytolysis, respectively, whereas only 20–25% cytolysis was observed in cells treated with medium alone [9]. In the present study a major difference in these parameters of macrophage activation was observed after inhibitor treatment. When macrophages were treated with guduchi together with the inhibitor H-7 or genistein for 24 h, only 27.91% and 15.26% cytolysis was seen, respectively. LPS- and H-7-treated macrophages showed 32.28% cytolysis and LPS- and genistein-treated macrophages showed 29.10% cytolysis, both significantly reduced (Fig. 2B). TNF-α levels were calculated from the cytotoxicity assay as described in the materials and methods. TNF-α levels after guduchi, LPS or medium-alone treatment were reported previously as 80.46, 71.26 or 24.70 pg/ml, respectively [10]. When macrophages were treated with guduchi together with the inhibitors H-7 or genistein, generation of TNF-α was significantly reduced, to 30.75 pg/ml and 16.95 pg/ml, respectively; macrophages treated with LPS together with H-7 or genistein showed TNF-α levels of 35.56 pg/ml and 32.43 pg/ml, respectively (Fig. 2C).
Discussion
Macrophages activated with various stimuli, such as bacterial endotoxin, lymphokines and BCG infection, elaborate nitrites/nitrates [19,20], and nitric oxide generated during the conversion of arginine to nitrites/nitrates is involved in macrophage-mediated cytotoxicity [9,21]. The present investigation was carried out to gain insight into several issues pertinent to the production and regulation of NO by murine macrophages in vitro upon guduchi or LPS treatment. In a previous study we reported guduchi-induced production of NO and TNF-α and tumoricidal activity of macrophages (J774A.1) against the fibroblast cell line (L929) [10]. The level of nitrite measured in our assay was indicative of nitric oxide production, which is thought to be one of the key molecules involved in tumoricidal activity. Our results suggest more than one mechanism of tumor cell killing by macrophages activated with guduchi or LPS, as part of the tumoricidal activity against L929 cells appears independent of the nitric oxide pathway. This was further strengthened by the observation that inhibition of the NO pathway had a different effect on culture supernatant-mediated tumor cell lysis. Earlier, we reported that TNF-α plays a crucial role in macrophage-mediated tumor cell killing [10]. Our results demonstrate that activated macrophages exert cytolytic mechanisms mediated by both NO and TNF-α. In the present study we examined the role of the specific inhibitors genistein (a tyrosine-specific protein kinase inhibitor) and H-7 (a protein kinase C inhibitor) in the tumoricidal functions of activated macrophages. Both inhibitors significantly inhibited guduchi/LPS-induced NO release, suggesting that phosphorylation via these kinases may upregulate NOS activity. Inhibition of the production of NO and TNF-α and of macrophage-mediated cytolysis was observed after treatment with these inhibitors, indicating that tyrosine-specific protein kinase and protein kinase C have a major role to play in tumoricidal function; inhibition of these kinases ultimately inhibits macrophage activation.
Conclusion
Treatment of macrophages with the protein kinase C inhibitor (H-7) or the tyrosine-specific protein kinase inhibitor (genistein) during incubation with the BRM inhibited the BRM-induced tumoricidal activity of macrophages as well as the production of TNF-α and NO. From this study it can be concluded that protein kinase C and tyrosine-specific protein kinase play an important role in the tumoricidal function mediated by guduchi- or LPS-activated macrophages.
"year": 2017,
"sha1": "c94835f0c74ac4e3c3b7903d31cfb56014b60f32",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jaim.2016.12.007",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c94835f0c74ac4e3c3b7903d31cfb56014b60f32",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Adaptive Immune Response against Hepatitis C Virus
A functional adaptive immune response is the major determinant for clearance of hepatitis C virus (HCV) infection. However, in the majority of patients, this response fails and persistent infection evolves. Here, we dissect the HCV-specific key players of adaptive immunity, namely B cells and T cells, and describe factors that affect infection outcome. Once chronic infection is established, continuous exposure to HCV antigens affects functionality, phenotype, transcriptional program, metabolism, and the epigenetics of the adaptive immune cells. In addition, viral escape mutations contribute to the failure of adaptive antiviral immunity. Direct-acting antivirals (DAA) can mediate HCV clearance in almost all patients with chronic HCV infection, however, defects in adaptive immune cell populations remain, only limited functional memory is obtained and reinfection of cured individuals is possible. Thus, to avoid potential reinfection and achieve global elimination of HCV infections, a prophylactic vaccine is needed. Recent vaccine trials could induce HCV-specific immunity but failed to protect from persistent infection. Thus, lessons from natural protection from persistent infection, DAA-mediated cure, and non-protective vaccination trials might lead the way to successful vaccination strategies in the future.
Introduction
Hepatitis C virus (HCV) has infected approximately 70 million people worldwide. Only a minority of individuals (20-30%) are able to clear the virus spontaneously in the acute phase of infection, while the majority of patients develop persistent infection. These patients are at substantial risk of developing liver inflammation, fibrosis, cirrhosis, and hepatocellular carcinoma (HCC) [1]. Direct-acting antiviral (DAA) treatment regimens revolutionized treatment of chronic HCV infection and now allow cure of nearly all patients treated [1]. Worldwide eradication of HCV infection, however, will most likely require a prophylactic vaccine against HCV, since antiviral treatment of chronically infected patients alone might not hold pace with the rate of new infections, and since re-infection of cured individuals is possible, especially in cohorts with high risk for infection [2]. Recent HCV vaccination trials have failed [3][4][5], and it is thus of utmost importance to define and understand the prerequisites and mechanisms of successful HCV-specific immune responses. In addition, from a more basic immunological perspective, HCV infection is an exciting immunological model, since HCV infection is the only human infection with a dichotomous outcome (viral clearance versus persistence) in a substantial proportion of patients, and the only human chronic infection that can be cured by a well-tolerated drug therapy. It is, thus, a perfect setting to better understand the immunological mechanisms of spontaneous viral clearance, as well as the effects of the loss of antigen on virus-specific immunity in a chronic human viral infection. In the following, we will first address the roles of HCV-specific B cells/neutralizing antibodies, as well as CD4+ and CD8+ T cells, since all of these were demonstrated to have important roles in infection outcome (Figure 1). We will then summarize lessons from successful natural clearance of acute HCV infection, DAA-mediated clearance of chronic HCV infection, and also failed vaccination trials.

Figure 1. (A) In acute-resolving HCV infection, multi-specific and vigorous HCV-specific CD4+ and CD8+ T cells are primed, and plasma cells produce broadly neutralizing antibodies (bnAbs). After viral clearance, memory cells (expressing, e.g., CD127) are maintained. (B) In acute-persistent HCV infection, the initial HCV-specific adaptive immune response is similar to acute-resolving infection; however, CD4+ T cells are rapidly lost, CD8+ T cells exhaust (expressing, e.g., PD-1), and viral escape mutations abrogate recognition by HCV-specific CD8+ T cells and nAbs. Host genetic background, including HLA class I and II alleles as well as ERAP allotypes, might impact the dichotomous outcome. Graphic elements were taken and modified from a Servier Medical Art template licensed under a Creative Commons Attribution 3.0 Unported License (CC BY 3.0) (https://smart.servier.com).
Antibody Response
Early in vitro neutralization studies using immunoglobulin from chronically infected patients, as well as active immunization studies using recombinant E1E2 protein, generated clear evidence that HCV-specific antibodies can protect chimpanzees from challenge with homologous HCV strains [6,7]. Despite these early findings, the importance of antibodies in HCV infection was underestimated for a long time, for a number of reasons [8]. First, reports of successful viral clearance in agammaglobulinemic patients raised doubts regarding the importance of antibodies in protection from persistent HCV infection [9]. Second, HCV infection is associated with the occurrence of several specific and unspecific antibody classes, including antibodies detected by clinical routine serological tests, autoantibodies such as rheumatoid factor involved in extrahepatic manifestations of HCV, and neutralizing antibodies (nAbs). Antibodies detected in clinical routine serology, mostly targeting the core and the non-structural proteins, can be detected in (immunocompetent) patients approximately 5-8 weeks post infection (coinciding with the peak of liver enzymes and HCV-specific T cells), and do not correlate with the outcome of infection. Autoantibodies are present in a majority of HCV-infected patients, are a result of B cell dysregulation, and might contribute to extrahepatic manifestations of HCV infection such as mixed cryoglobulinemia. The autoantibody most frequently detected in patients with HCV infection is the anti-immunoglobulin autoantibody rheumatoid factor (RF), which is detectable in approximately 50% of patients with chronic HCV infection and contributes to the production of cryoglobulin [10,11]. Only a small fraction of HCV-specific antibodies has the capability to neutralize viral particles in vitro. The vast majority of these nAbs target the hypervariable region 1 (HVR1) of the HCV glycoprotein E2. These nAbs are mostly strain-specific, and due to the high mutation rate in the HVR1, resistance against these antibodies develops rapidly, abolishing a protective role of these HVR1-specific nAbs [12][13][14]. Viral diversity is also the third reason for the previous underestimation of the role of antibodies in HCV infection. Indeed, earlier studies on the impact of nAbs in the natural course of HCV infection used viral reference strains as readout in the neutralization assays and did not find a clear correlation between nAbs and viral clearance [15,16].
However, by using autologous viral sequences for such studies, a clear association between early nAb responses and viral clearance could be shown [17][18][19]. These studies also indicated that a distinct subset of nAbs, specifically broadly neutralizing antibodies (bnAbs), correlates with viral clearance. bnAbs have a broad capability to cross-recognize viral quasispecies and strains, even from different HCV genotypes. While immunoglobulins from patients with chronic HCV infection can protect animals (humanized mice and chimpanzees, respectively) against homologous but not heterologous HCV challenge [20][21][22], bnAbs could protect animals against both homologous and heterologous HCV challenge in a large number of studies with a variety of antibodies [23][24][25][26]. A mixture of different bnAbs was even able to abrogate established HCV infection in human liver chimeric mice [27]. bnAbs are, therefore, a current research focus in HCV vaccine development. bnAbs target conformational, discontinuous epitopes on E2 (antigenic regions AR1, AR2, and AR3 including the CD81 binding site involved in HCV cell entry) and the E1E2 heterodimer interface (AR4 and AR5), as well as linear, continuous epitopes on E2 (antigenic sites AS; e.g., AS412 corresponding to E2 amino acid residues 412-423) [8]. The exact binding epitopes of many bnAbs were identified by global alanine scanning of the E1E2 protein, followed by antibody binding assays [28,29]. The immunodominance of bnAbs was analyzed very recently in natural acute infection [30], as well as in samples from a (historic) E1E2 vaccination study in healthy volunteers [31].
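To make the alanine-scanning logic concrete, the toy Python sketch below reduces hypothetical mutant-binding data to a set of epitope-critical residues. The residue numbers (loosely modeled on the AS412 region mentioned above) and binding fractions are invented, and real analyses additionally control for mutants with impaired E1E2 expression or folding.

```python
# Toy sketch of reducing alanine-scan binding data to a binding epitope:
# a residue is called epitope-critical when mutating it to alanine drops
# antibody binding below a cutoff relative to wild type.

# Fraction of wild-type binding retained by each E2 alanine mutant
# (residue positions and values are invented for illustration).
binding = {412: 0.05, 413: 0.90, 414: 0.12, 415: 0.85,
           416: 0.08, 417: 0.95, 418: 0.70, 419: 0.10}

CUTOFF = 0.25  # <25% of wild-type binding => critical residue

epitope = sorted(pos for pos, frac in binding.items() if frac < CUTOFF)
print("epitope-critical residues:", epitope)   # [412, 414, 416, 419]
```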
Despite these recent advances in the understanding of HCV-specific antibody responses, it needs to be kept in mind that antibodies are only the effector molecules of B cells, and that little is known regarding HCV-specific B cells to date. Indeed, the reasons why HCV-specific B cells fail to produce substantial quantities of bnAbs in most patients remain obscure. For example, it was proposed that B cell intrinsic mechanisms, as well as reduced or dysregulated help from CD4+ T cells, especially T follicular helper (Tfh) cells, might contribute to this scenario [32,33]. Recently, the detection of hepatitis B virus (HBV)-specific B cells through flow cytometry was made possible by the use of fluorochrome-coupled HBsAg and HBV core molecules as 'baits' [34][35][36][37]. A similar methodological development for HCV-specific B cells, however, is hindered by the lack of knowledge regarding E1E2 structural biology.
In sum, bnAbs are likely an important component in natural HCV clearance and might represent an attractive target for vaccine development. However, new methodological approaches might be necessary to fully understand protective action as well as mechanisms of failure of bnAbs.
HCV-Specific T Cell Response
For HCV, as for many other viral infections, a functional multi-specific T cell response is essential for viral clearance and prevention of chronicity. Several lines of evidence support the mandatory role of both CD4+ and CD8+ T cell responses in viral clearance. First, HCV-specific CD4+ and CD8+ T cell responses are temporally tightly linked to the onset of liver disease (increase of liver enzymes, clinical symptoms including jaundice in icteric cases) as well as a sharp decline in viremia [38][39][40]. Second, antibody-mediated depletion of CD4+ as well as CD8+ T cells interfered with viral clearance in the chimpanzee model, the only animal next to humans that, despite not being a natural host, could be infected with HCV [41,42]. Of note, after the depletion of CD8+ T cells, viremia was prolonged compared to control chimpanzees, and viremia only declined and the infection finally resolved when the CD8+ T cells reappeared and HCV-specific CD8+ T cells were detectable [41]. In contrast, CD4+ depletion resulted in persistent viremia that was mechanistically linked to the evolution of viral escape mutations in HCV-specific CD8+ T cell epitopes [42]. These results support the concept that CD8+ T cells are the major antiviral effector cells, while CD4+ T cells provide important help and are thus equally mandatory for viral clearance. Third, there is also strong immunogenetic evidence for the role of both CD4+ and CD8+ T cells in HCV clearance, since specific HLA class I and II alleles, restricting CD8+ and CD4+ T cells, respectively, were linked to viral clearance or persistence [43][44][45][46]. Despite these shared key roles in the course of HCV infection, HCV-specific CD4+ and CD8+ T cell responses differ greatly in their nature and magnitude. HCV-specific CD8+ T cell responses are multi-functional and long-lasting in acute infection and are maintained even during persistent infection, although they eventually might lose their functionality and change their phenotype. HCV-specific CD4+ T cell responses are initially primed and detectable in all infected individuals; however, they rapidly decline in patients with persistent infection and are hardly detectable once chronicity is established [47,48].
CD8+ T Cell Response in Acute HCV Infection
Following HCV infection, a specific and multifunctional CD8+ T cell response is induced in almost all patients. Primed HCV-specific CD8+ T cells first appear in the blood and infiltrate the liver 6-8 weeks post infection [38][39][40]. The reasons for this delay are not known; however, kinetics are similar for T cells targeting other hepatotropic viruses such as hepatitis B virus (HBV). During these initial weeks of HCV infection, viremia is controlled at relatively high levels by the innate immune response, including, e.g., NK cells and type I (IFNα) and III (IFNλ) interferons. The important impact of the innate immune response on viral control is underlined by the association of specific polymorphisms, e.g., in the IFNλ system, with spontaneous resolution of acute HCV infection [49]. The appearance of the CD8+ T cell response, however, coincides with the onset of liver disease and a drop in viral titers [38][39][40]. HCV-specific CD8+ T cells display an activated phenotype (CD38+) with high expression of PD-1, indicating activation rather than exhaustion in this infection phase [39,50]. Of note, HCV-specific CD8+ T cells do not produce antiviral cytokines such as interferon-gamma in the early phase of acute infection, irrespective of infection outcome, a phenotype referred to as "stunned" [40].
Early expression of IL-7 receptor alpha (CD127) and T-bet [51] on HCV-specific CD8+ T cells is linked to successful immune responses, resulting in viral clearance. It is important to note, however, that it has so far not been understood why after an initial priming of HCV-specific CD8+ T cell responses of similar strength and with similar functional and phenotypic characteristics, one individual will clear infection while another progresses to chronic infection. A recent analysis of the early transcriptional differences between HCV-specific CD8+ T cells from patients with acute-resolving versus acute-persistent HCV infection performed by the group of Georg Lauer found a dysregulation of metabolic processes, linked to changes in the expression of genes related to nucleosomal regulation of transcription, T cell differentiation, and the inflammatory response [52]. While the field is far from understanding the complex transcriptional regulation networks that determine the fate of virus-specific T cells, it is intriguing that one of the genes strongly upregulated in resolvers was TCF7 encoding for the TCF1 protein. High expression of TCF1 is also found on HCV-specific CD8+ T cells that are maintained after successful antiviral treatment of chronic HCV infection (see below, 'Lessons from DAA therapy'). A gene that was upregulated in patients with viral persistence, however, was p53 [52]. Along with its role in metabolism and carcinogenesis, p53 also has an immune-regulatory role that has recently gained increasing attention. These results were confirmed and extended by the group of Carlo Ferrari, demonstrating that targeting of p53 can rescue impaired glycolytic and mitochondrial functions during early persistent infection [53].
CD8+ T cells also rely on help from CD4+ T cells to perform their full effector function. Thus, absence of CD4+ T cell help might be an important mechanism contributing to viral persistence. Indeed, a weak or impaired HCV-specific CD4+ T cell response with decreased production of IL-2 and IL-21 correlates with a diminished early-phase HCV-specific CD8+ T cell response and viral persistence.
Once HCV is cleared by an effective immune response, CD8+ T cell populations are no longer triggered by ongoing antigen stimulation; they start to express high levels of the memory marker CD127, which is needed for homeostatic proliferation, and decline in frequency. However, a robust memory CD8+ T cell response is maintained and will rapidly re-expand during reinfection, which might accelerate viral clearance [54]. Despite this memory formation, viral persistence is possible upon reinfection and is almost always associated with the appearance of escape mutations.
CD4+ T Cell Response in Acute HCV Infection
During acute infection, HCV-specific CD4+ T cells are primed and initially expand to form a multispecific and multifunctional CD4+ T cell response, irrespective of the outcome of infection. In acute-resolving infection, these CD4+ T cell responses are maintained. In acute-persistent infection, however, these CD4+ cells are rapidly deleted [47,48]. Similar to HCV-specific CD8+ T cells, HCV-specific CD4+ T cells proceed from an activated phenotype with expression of PD-1, CTLA4, and CD38, during acute infection to a memory state, defined by upregulation of CD127 and downregulation of activation markers [55,56], after viral clearance.
Failure of HCV-Specific T Cell Responses in Chronic HCV Infection
The majority of patients are not able to clear acute HCV infection and proceed to chronic HCV infection. The main mechanisms of HCV-specific T cell failure contributing to viral persistence are viral escape and T cell exhaustion. Lack of CD4+ T cell help and production of immunomodulatory cytokines by regulatory T cells (Tregs) [57][58][59][60][61] might further contribute to HCV-specific T cell failure. In addition, impaired function of dendritic cells (DCs) in persistent infection was described very early [62][63][64], however, the precise impact of DC dysfunction on HCV-specific T cell failure remains elusive to date [65].
Viral escape from HCV-specific CD8+ T cell responses typically occurs during the early phase of infection [66,67], with mutations detectable in about 50% of epitopes [67,68], and is associated with viral persistence [67,[69][70][71]. Mutations might develop at the HLA class I binding anchors of the epitope, thus abolishing or lowering the binding affinity of the epitope for the restricting HLA class I molecule; at positions responsible for T cell receptor recognition [72]; or at the flanking sites of the epitope, influencing proteasomal processing [70,73,74]. In cases where the evolution of escape mutations is associated with a viral fitness cost [72,75,76], the virus might revert to wild-type upon transmission to an individual negative for the restricting HLA class I allele [70]. In addition, compensatory mutations might be required to allow the development of mutations in regions that would otherwise impair viral replication [77,78]. At the population level, viral escape might lead to HLA class I-associated viral sequence polymorphisms (also called HLA class I footprints), since patients positive for the restricting HLA class I allele frequently display the respective mutation in their autologous viral sequences, while patients negative for the restricting HLA class I allele do not [79][80][81][82][83][84][85]. In cases with low viral fitness cost, escape variants might even replace prototype sequences and become the new consensus sequence in a population, resulting in the loss of an HCV-specific CD8+ T cell epitope in the population [86]. Loss of recognition by viral escape might or might not be complete, but priming of de novo T cell responses against mutated epitopes does not occur in persistent HCV infection, possibly due to the lack of CD4+ T cell help and the high antigen load during the later stages of persistent infection. Some reports show that HCV-specific CD8+ T cells targeting escaped epitopes still exert some viral control and effector function. This is supported by observations showing that if T cell pressure is attenuated, for example during pregnancy, the virus mutates back to its original wild-type sequence; after pregnancy, the CD8+ T cell response is reinvigorated and there is evidence for renewed CD8+ T cell pressure on HLA class I-restricted epitopes [87]. However, since CD8+ T cells targeting escaped epitopes are no longer exposed to constant T cell receptor triggering, they acquire a memory-like state rather than an exhaustion phenotype, with expression of CD127 and sustained proliferative potential [88,89].
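As a toy illustration of how such HLA class I footprints can be detected at the population level, the Python sketch below applies Fisher's exact test to invented patient counts for a single epitope position, comparing mutation frequencies between carriers and non-carriers of the restricting HLA allele:

```python
# Illustrative sketch of an HLA class I "footprint" test: for one epitope
# position, compare how often the autologous viral sequence is mutated in
# patients carrying the restricting HLA allele versus patients lacking it.
# The counts are invented for illustration.
from scipy.stats import fisher_exact

# rows: HLA-positive / HLA-negative; columns: mutated / wild-type virus
table = [[18, 4],    # HLA+ patients: 18 mutated, 4 wild-type
         [3, 25]]    # HLA- patients: 3 mutated, 25 wild-type

odds, p = fisher_exact(table)
print(f"odds ratio {odds:.1f}, p = {p:.2e}")  # small p suggests a footprint
```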
In sharp contrast to viral escape, T cells that are exposed to constant cognate antigen stimulation undergo dramatic changes in their phenotype, function, and epigenetic and transcriptional profile [52,90,91], a process termed T cell exhaustion. T cell exhaustion leads to a gradual loss of effector functions, ranging from loss of proliferative capacity and cytokine secretion to loss of cytotoxicity, accompanied by an upregulation of inhibitory receptors [92][93][94]. Recent studies showed that exhausted T cells consist of heterogeneous populations, namely a less exhausted memory-like population, defined by expression of CD127 and PD-1 and co-expression of TCF1, which retains proliferative capacity, and a severely exhausted CD127-negative PD-1high population [95,96]. This terminally exhausted population shows transcriptional and epigenetic changes that cannot be reversed by antigen removal or anti-PD-1 therapy, whereas the less exhausted cell population can respond to anti-PD-1 therapy [95]. Anti-PD-1 therapy has been examined in the context of chronic HCV infection but showed limited efficacy in chimpanzee and human studies. Indeed, when assessed in a cohort of 54 human patients chronically infected with HCV, anti-PD-1 therapy resulted in transiently reduced viremia in a subset of patients, with two patients becoming HCV RNA negative [97]. Out of three chimpanzees experimentally infected with HCV, a transient drop in viremia after anti-PD-1 therapy was observed in one animal [98]. Barili et al. could show that exhaustion of HCV-specific CD8+ T cells during chronicity is dominated by broad gene downregulation, coupled with deterioration of metabolic and antiviral functions. The authors succeeded in rescuing the effector functions of these cells by applying histone methyltransferase inhibitors [53].
Terminal T cell exhaustion is also characterized by high expression of the transcription factor Eomes, which is in tight balance with its homologue T-bet expressed on progenitor cells of terminally exhausted T cells [99]. In the setting of HCV infection, acute-persistent and chronic infections are characterized by a low frequency of T-bet+Eomes- HCV-specific CD8+ T cells compared to acute-resolving infection [51]. Recent advances in the field identified the HMG-box transcription factor TOX as crucial for the formation of exhausted T cells. Different groups showed that TOX translates persistent antigen stimulation into a distinct transcriptional and epigenetic program, and that in the absence of TOX exhausted T cells do not form [100][101][102][103][104][105]. Of note, deletion of the DNA binding domain of TOX reduced PD-1 expression and increased the effector function of T cells, but ultimately these T cells were deleted, indicating that T cell dysfunction and exhaustion is a natural program needed to maintain cell populations that are subject to constant antigen triggering [101]. One of these studies also examined TOX expression on HCV-specific CD8+ T cells. Of note, TOX expression was high on HCV-specific CD8+ T cells in patients with chronic HCV infection (even after DAA-mediated cure of chronic infection), and these T cells co-expressed CD127, PD-1, and TCF1. HCV-specific CD8+ T cells from patients who spontaneously resolved acute HCV infection, however, displayed low TOX expression, comparable to naïve and influenza-specific CD8+ T cells [101].
Data on viral escape and T cell exhaustion regarding CD4+ T cells in chronic HCV infection is limited, since these cells are readily deleted in persistent infection [47,48]. There is some evidence for viral escape within CD4+ T cell epitopes, but this seems to be rather uncommon overall [106][107][108].
Recent studies using enrichment strategies with antigen specific HLA class II tetramers to overcome low cell numbers, showed that HCV-specific CD4+ T cells indeed express multiple inhibitory receptors like PD-1, TIGIT, and CTLA-4, during chronic infection [47,109]. In in vitro culture, CD4+ T cell functionality could be restored by anti-PD-1 antibody administration, but whether inhibitory receptor expression alone accounts for the deletion of HCV-specific CD4+ T cells is currently unclear. Interestingly, Coss et al. could show an increase of CD4+ T cell functionality and number in a cohort of women after childbirth. This increase correlated with viral control, compared to women in their last trimester and women experiencing no viral control [110]. Previously the same group reported that the reduced viremia was associated with revived CD8+ T cell selection pressure on targeted epitopes [87]. Therefore, the drop in viremia can likely be ascribed to an improved CD4+ T cell functionality, providing CD4+ T cell help and thereby increasing CD8+ T cell effector function.
Treg frequency is enhanced in chronic HCV infection [57,58,60,61]. Important issues regarding Tregs in HCV infection such as antigen-specificity and impact on infection outcome, however, remain elusive to date [59]. Tregs were shown to expand and produce regulatory cytokines such as IL-10 and TGF-β, thereby, potentially interfering with CD4+ and CD8+ T cell immunity, by counteracting inflammatory and activation signals [59]. However, Treg cell number and function in acute infection could not be related to infection outcome [111].
Lessons from Successful Natural Clearance of Acute HCV Infection
Several HLA class I and II types are associated with spontaneous clearance of acute HCV infection [43][44][45][46]. For all four protective HLA class I types, immunodominant HCV-specific CD8+ T cell epitopes, located in E2, the NS3 protease or the NS5B polymerase, were identified that are targeted in the vast majority of patients with acute HCV infection expressing the respective HLA type [44,112,113,118]. In patients who develop persistent infection despite expressing the respective protective HLA types, a complex pattern of viral evolution occurs in the immunodominant HCV-specific CD8+ T cell epitopes [76,112]. Indeed, autologous viral sequences from these patients display multiple amino acid mutations within the epitopes, interfering with the recognition of these epitopes by the virus-specific CD8+ T cell responses. Compared to viral epitopes restricted by non-protective HLA types, a single amino acid mutation within the protective epitopes is not sufficient for viral escape from the virus-specific CD8+ T cell response. Rather, several mutations need to occur to (nearly) completely abolish cross-recognition by the epitope-specific T cell responses. Mutations at some positions in the viral epitope, such as the main HLA binding anchor at position two of the epitope (in the case of HLA-B*27-restricted epitopes, an arginine), cannot occur, since the resulting viral variants are not able to replicate at comparable levels, a phenomenon termed 'viral fitness cost'. In some cases, mutations within a protective HCV-specific CD8+ T cell epitope even have to be compensated for by an amino acid mutation outside of the epitope, up to 30 amino acids up- or downstream of the epitope, in order to maintain replication levels [77,78,113]. In the setting of acute HCV infection, the HCV-specific CD8+ T cell response targeting these protective epitopes might clear the virus before this complex pathway of viral escape, comprising mutations at several amino acid residues, can occur, thus explaining the high rate of viral clearance associated with these HLA class I types. Next to the functional constraints on the targeted viral epitopes, rapid antigen processing, and thus early priming of dominant virus-specific CD8+ T cell responses, might be an additional characteristic of protective HLA class I types such as HLA-B*27 [116]. Targeting such protective HCV-specific CD8+ T cell epitopes might thus be an important aim of HCV-specific prophylactic vaccines. It is important to note that protection by these HLA class I alleles is highly restricted to specific HCV genotypes, subtypes, or even specific strains, as well as specific HLA class I subtypes (alleles). Indeed, HLA-B*27 seems to protect against HCV genotype 1 (1a and 1b), but not genotype 3, a finding that can be explained by the conservation of the immunodominant HLA-B*27-restricted HCV-specific epitope across genotypes 1a and 1b, but not other genotypes, including genotype 3a [115]. Similarly, the HLA-B*57-restricted epitope is present in genotype 1a, but not genotype 1b [44]. Even more strikingly, specific infecting strains of the same HCV subtype (1b) display sequence differences in some of these protective HCV-specific CD8+ T cell epitopes, explaining why protective effects of the respective HLA class I types could be demonstrated in one single-source outbreak cohort but not another [117].
To add complexity at yet another dimension, immunodominance of HCV-specific CD8+ T cell responses restricted by HLA-B*27 can also be influenced by precise host genetics, since the HLA-B*27 subtypes (alleles) B*27:05 (representing the ancestral subtype that is also most prominent at the global level) and B*27:02 (a subtype frequently found in the Mediterranean region) do not completely overlap in epitope restriction [119]. In addition, components of the antigen processing/presentation machinery that have so far received little attention, such as the endoplasmic reticulum aminopeptidase 1 (ERAP-1), might have a previously underestimated impact on immunodominance, as well as on protection in viral infections. ERAP-1 is involved in the fine-trimming of antigens to 8-10-mer epitopes that are then ready for presentation by HLA class I molecules. So far, ERAP-1 was mostly known for the link between ERAP-1 allotypes and HLA-associated autoinflammatory diseases, such as HLA-B*27-associated ankylosing spondylitis. Of note, however, we could demonstrate that ERAP-1 allotypes with hyporeactive trimming activity might lead to the production and targeting of longer (10- and 11-mer) HCV-specific HLA-B*27-restricted CD8+ T cell epitopes, skewing the usual immunodominance pattern of HLA-B*27-restricted HCV-specific CD8+ T cell epitopes and thus likely contributing to the failure of this otherwise protective CD8+ T cell response [114]. In sum, targeting of HCV-specific CD8+ T cell epitopes that have similar characteristics to the immunodominant epitopes restricted by the HLA class I types that protect from viral persistence in the natural course of infection might be an important goal for prophylactic HCV vaccines. However, these epitopes need either to be cross-reactive between different HCV genotypes, or genotype-specific epitopes for each prevalent HCV genotype need to be included in a vaccine.
Lessons from DAA Therapy
The development of direct-acting antiviral (DAA) therapy revolutionized treatment of chronic HCV infection. Current treatment regimens have durations of 8-12 weeks, reach cure rates of 95-100% and are associated with few adverse events. It remains important to monitor the long-term effectiveness of DAA therapy, since very low levels of (intrahepatic) viral replication might lead to recurrence of HCV infection even after several months, especially in the case of rare HCV subtypes or DAA-resistant strains [120]. Next to the great clinical advancement, however, the introduction of DAA therapy has further increased the role of HCV infection as a unique human infection model, since it is the only chronic infection that can be cured by a well-tolerated standard therapy. Thus, it allows the study of the impact of antigen removal in patients who were chronically infected for decades [121]. A first study by our laboratory showed an increase of the ex vivo frequency of HCV-specific CD8+ T cells as well as a restored proliferative capacity of these CD8+ T cells [122]. Further analysis demonstrated that this partial functional restoration of the HCV-specific CD8+ T cell response during and after DAA-mediated viral clearance was due to the maintenance of a memory-like T cell subset that co-expressed the memory marker CD127 as well as the exhaustion/activation marker PD-1 and was further characterized by expression of the transcription factor TCF1. In contrast, terminally exhausted CD127-negative PD-1high TCF1-negative HCV-specific CD8+ T cells disappeared after HCV elimination. Upon re-challenge with HCV, memory-like CD127+PD-1+TCF1+ HCV-specific CD8+ T cells expand and give rise to the re-emergence of terminally exhausted CD127-negative PD-1high TCF1-negative HCV-specific CD8+ T cells [96]. Interestingly, the memory-like phenotype of HCV-specific CD8+ T cells is observed not only in the case of DAA-mediated viral clearance, but also in the case of viral escape interfering with antigen recognition by the epitope-specific CD8+ T cells [96]. It is important to note that memory-like HCV-specific CD8+ T cells that are maintained and enriched after DAA-mediated cure are different from "conventional" memory HCV-specific CD8+ T cells observed in patients after spontaneous resolution of acute HCV infection, indicated by a substantially higher co-expression of CD127 and PD-1, a higher expression of Eomes, and a lower expression of TCF1 [96]. In line with this partial phenotypic recovery, HCV-specific CD8+ T cells partially recover functions such as IFNγ and TNF production, but are not fully restored to the level of conventional memory CD8+ T cells found in patients with spontaneously resolved acute HCV infection [96]. Of note, the phenotypic and functional impairments that remain after DAA-mediated cure of chronic HCV infection are also associated with sustained metabolic impairments such as mitochondrial dysfunction [123] and might be more severe in patients with advanced liver disease as well as in male patients [123]. These data collectively indicate that HCV-specific CD8+ T cells develop defects during chronic infection that cannot simply be restored by antigen removal. This concept is supported by the recent finding that expression of TOX, a central transcription factor regulating T cell exhaustion, remains high after DAA-mediated cure of chronic infection, while TOX is not expressed by HCV-specific CD8+ T cells after spontaneous clearance of acute HCV infection [101].
These findings mimic the situation in the LCMV mouse model, where chronic infection leads to irreversible TOX expression, probably due to epigenetic programming [100,101]. This concept is also in line with the finding that restoration of HCV-specific CD8+ T cells is possible by antiviral treatment early in infection (e.g., acute HCV infection) [124], and the observation in mice that virus-specific CD8+ T cells can be rescued from differentiation to exhausted T cells by antigen removal, early but not late in LCMV infection [125]. This persistent defect of HCV-specific CD8+ T cells might contribute to the lack of protection against re-infection after DAA-mediated cure of chronic HCV infection. Indeed, in the chimpanzee model, viral persistence developed after re-infection, despite the intrahepatic presence of HCV-specific CD8+ T cells primed during the primary infection. This finding could be explained by the persistence of phenotypical alterations (low CD127 expression, high PD-1 expression) found on intrahepatic HCV-specific CD8+ T cells, even two years after DAA-mediated viral clearance [126]. It is thus a major research priority to further define the epigenetic regulation of sustained defects in HCV-specific CD8+ T cells, after DAA-mediated cure. Novel targets might be needed to overcome HCV-specific CD8+ T cell failure after HCV cure and thus protect individuals at continued risk from re-infection.
HCV-specific CD4+ T cells remain at a very low frequency and with a dysfunctional phenotype after DAA-mediated HCV cure [127]. In addition, frequencies of regulatory T cells remained at increased levels after cure [128]. Of note, however, an HCV-specific CD4+ T cell subset with follicular T helper (Tfh) cell signature was maintained during and in the long-term, after DAA-mediated viral clearance. This Tfh cell subset was also responsible for a temporary increase of the CD4+ T cell frequency at week two of DAA therapy, which was most likely due to the efflux of liver infiltrating Tfh cells into the peripheral blood, following virus elimination [32]. HCV-specific Tfh cells might represent an important target population for preventive vaccination strategies.
Lessons from Vaccine Trials
There are many challenges to HCV vaccine design [3,4,129]. Although HCV infection can be cleared in about 30% of patients in the acute phase of infection, the exact correlates of viral persistence versus resolution remain obscure. As discussed in detail above, many studies demonstrated that viral clearance is associated with an early, vigorous, broadly directed, functional, and sustained CD4+ and CD8+ T cell response, as well as with the early appearance of broadly neutralizing antibodies, but the exact mechanisms that lead to the induction of this favorable immune response remain unknown. Additionally, the genetic heterogeneity of HCV is hard to address. Seven HCV genotypes circulate worldwide and each can be further separated into numerous subtypes. In addition, the error-prone RNA-dependent RNA polymerase activity leads to the generation of innumerable quasispecies within a single host, allowing viral escape from the host immune response and further complicating vaccine development. HCV cannot easily be kept in cell culture, making the generation of live attenuated or killed modified virus vaccines extremely difficult. Next to humans, only chimpanzees can be persistently infected with HCV. Other animals such as tree shrews (Tupaia belangeri) can be infected with HCV, but develop only transient viremia, indicating that their use in vaccination studies requires further optimization of the model [130]. Immune-competent small animal models of HCV infection were only recently established [131,132]. HCV-naïve individuals at high risk for infection, such as people who inject drugs (PWID), are optimal candidates for HCV vaccine efficacy trials, but such cohorts are rare and require intensive effort to establish and maintain [133,134].
Current vaccine strategies do not aim to prevent HCV infection (sterilizing immunity), but rather have the goal of preventing viral persistence upon infection (protective immunity). Two different vaccination strategies are under evaluation for inducing protective immunity: vaccines aiming to induce broadly neutralizing antibodies (bnAbs) and vaccines aiming to induce protective CD4+ and CD8+ T cell responses.
Many different vaccination strategies were evaluated in order to induce bnAb responses; however, the large majority of these vaccines did not advance to a clinical stage. The most promising results from pre-clinical studies were obtained for the recombinant full-length E1E2 protein from a single genotype 1a strain, with an oil-in-water adjuvant. This vaccine led to the formation of bnAbs and reduced rates of viral persistence in rodents, primates, and chimpanzees [6,135]. However, it failed to induce antibodies in the majority of patients in a phase 1a human trial [136,137]. Vaccination strategies based on bnAbs might thus have a long road ahead before they show promising clinical effects. Indeed, based on the recent advances in the definition of bnAb epitopes discussed above, more targeted antigens than a full-length E1E2 protein from just a single HCV strain might be more effective; in addition, novel strategies to adjuvant the HCV antigen are likely to enhance the chance of inducing substantial bnAb levels. Last but not least, an overwhelming amount of data from the natural course of HCV infection indicates that a humoral immune response alone is unlikely to achieve viral clearance in a substantial proportion of infected individuals, suggesting that vaccines designed to induce bnAbs should be used in combination with vaccines designed to induce a protective CD4+ and CD8+ T cell response.
A variety of strategies were used to induce HCV-specific CD4+ and CD8+ T cells in animal models [3,4,129]. Most of these studies focused on the non-structural HCV proteins (NS3, NS4A, NS4B, NS5A, NS5B), since these proteins are more conserved and more often targeted by HCV-specific T cells compared to the envelope glycoproteins. While most vaccines were able to induce HCV-specific CD4+ and CD8+ T cell responses of variable functionality, at least in a subset of animals, only a few vaccines were further tested for their ability to protect chimpanzees from persistent HCV infection. Chimpanzees are the only primates next to humans that can be chronically infected with HCV and served as an HCV infection model until these experiments were abandoned due to ethical concerns. Most, but not all, of these chimpanzee vaccine studies demonstrated reduced rates of HCV persistence in vaccinated versus control animals [138]. Encouraging results were obtained for a vaccination strategy based on replication-defective adenoviral vectors encoding the non-structural HCV proteins (NS3-NS5B) [139]. In the initial chimpanzee study, human adenovirus (Ad) serotypes 6 and 24 were used as vectors, since neutralizing antibodies against these two adenovirus serotypes have a low seroprevalence in humans, and an HCV genotype 1b sequence was used. After priming with Ad6 and boosting with Ad24, an additional boost was performed with electroporated plasmid DNA. Upon challenge with HCV genotype 1b, all five vaccinated chimpanzees displayed substantially lower viral titers compared to the control animals, and four of five chimpanzees cleared the infection after a significantly shorter duration of viremia compared to the control animals, while one vaccinated chimpanzee developed persistent infection [139]. Further immunological analysis demonstrated an early expansion of CD8+ T cells with high CD127 expression, low PD-1 expression, and increased effector function compared to the control animals developing persistent infection [140]. Strikingly, early expansion of CD8+ T cells with high expression of CD127 and high functionality was also identified as a hallmark of spontaneous clearance of acute HCV infection in the chimpanzee model [141]. Based on these results, the vaccination strategy was further adapted and tested in healthy volunteers not at risk for HCV infection [142,143]. In order to further minimize problems related to preexisting or primed Ad-specific neutralizing antibodies, an Ad6 prime, chimpanzee adenovirus 3 (ChAd3) boost regimen was used in the first human trial [142], and this was further optimized by the use of a ChAd3 prime, modified vaccinia Ankara (MVA) boost regimen with improved boosting capacity [143]. This vaccination strategy induced vigorous, multispecific, and polyfunctional CD8+ T cells, mostly central and effector memory T cells, that expressed CD127 but not PD-1 and were sustained for at least one year. Based on these promising data, the first and so far only clinical trial in individuals at high risk for HCV infection was performed in the US. This double-blind, randomized, placebo-controlled phase I/II study assessed the efficacy of the ChAd3-HCV1b-NS prime and MVA-HCV-1b-NS boost vaccination regimen in a large PWID cohort of 548 HCV-naïve individuals and was completed in 2019.
Unfortunately, the vaccine could not prevent chronic HCV infection when compared to the unvaccinated control cohort, with 14/273 individuals developing chronic HCV infection in the vaccine group versus 14/275 individuals in the placebo group [5]. While these results are overall disappointing, it is important to point out that 78% of vaccinated trial participants generated T cell responses to one or more vaccine antigen pools. In addition, individuals who were vaccinated and infected displayed significantly lower peak viral titers (approximately 5-fold) than those who received placebo. These results allow the interpretation that the vaccine induced T cell responses that were able to control viremia, at least partially. The long-term failure of these vaccine-induced T cell responses might indicate either that the T cell responses were not vigorous enough, calling, for example, for a more effective adjuvant, or that cross-recognition of HCV genotypes, subtypes, or even quasispecies circulating in the US by the vaccine-induced T cell response was not sufficient. Indeed, CD8+ T cells that were induced by this vaccination regimen and targeted immunodominant HCV-specific epitopes displayed a limited capacity to cross-recognize viral variants circulating in the population [144]. This interpretation is also in line with the finding that the epitope repertoires of HCV genotype 1 and genotypes 3 or 4, respectively, display little overlap [145,146]. Thus, current research addresses novel adjuvant formulations, such as the use of MHC class II invariant chain-adjuvanted viral vectors, enhancing the peak magnitude, breadth, and proliferative capacity of HCV-specific T cells induced by the ChAd3-HCV1b-NS prime/MVA-HCV-1b-NS boost vaccine in healthy volunteers [147]. In addition, the team of Eleanor Barnes further optimized the vaccine strategy to generate pan-genotypic T cell responses to conserved subdominant epitopes [148]. For this purpose, only viral sequence regions with a high grade of conservation between the major HCV genotypes (1 and 3, or 1-6, respectively) were included in the vaccine, and this vaccine was also adjuvanted by the MHC class II invariant chain [149]. In a mouse model, this strategy clearly enhanced the magnitude, breadth, and cross-reactivity of vaccine-induced T cell responses [149]. So far, however, it is not clear whether this advantage will also translate into protective immunity in individuals at risk. Indeed, immunodominant HCV-specific CD8+ T cell epitopes restricted by protective HLA class I types such as HLA-A*03, B*27, and B*57 show little conservation between HCV genotypes or even subtypes [44,112,115,117]. These 'protective' epitopes are thus excluded from a vaccine that is engineered to cover only highly conserved HCV sequence regions.
In conclusion, a future efficacious vaccine will likely have to induce cell-mediated as well as humoral immunity. For achievement of this goal, further research on cross-reactive epitopes, conserved regions within the HCV genome, correlates of protective immunity, and the role of bnAbs is urgently needed.
Conclusions
Global elimination of HCV infection will most likely depend on a prophylactic HCV vaccine. During the last few years, great progress was made in the understanding of successful HCV-specific immunity in acute-resolving HCV infection, as well as of the mechanisms of HCV-specific CD8+ T cell failure in persistent infection. In addition, the great clinical advance of DAA therapy, allowing cure of nearly all patients with chronic HCV infection, enabled the analysis of partial restoration of HCV-specific immunity after clearance of the chronic infection. These new insights into HCV immunobiology, together with lessons from recently failed HCV vaccine trials, might lead the way to successful vaccination strategies both for individuals at risk for primary infection and for re-infection after DAA-mediated cure.
Author Contributions: All authors have contributed to the writing of this manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This work was funded in part by the Deutsche Forschungsgemeinschaft (SFB1160 "Immune-mediated pathology as a consequence of impaired immune reactions [IMPATH]", project 256073931, project A02 awarded to R.T., and project A06 awarded to C.N.H.; SFB TRR-179 "Determinants and dynamics of elimination versus persistence of hepatitis virus infection", project 272983813, project 01 awarded to R.T. and project 02 awarded to C.N.H.).
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the writing of the manuscript.
"year": 2020,
"sha1": "bf4115c6140d317cff3a1c2e5b80333ad6d9b505",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/21/16/5644/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76e08bec889ca29415e94c5595d019347c011a5e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Synthesis hydroxyapatite/collagen/chitosan composite for tissue engineering
A hydroxyapatite/collagen/chitosan (HA/Coll/Chi) composite has been synthesized by an ex-situ method. The purpose of this research is to study the effect of the composite composition on characteristics such as functional groups and phases, as well as physical characteristics such as crystallinity, compressive strength and surface morphology. HA was synthesized from a calcium precursor derived from duck eggshell reacted with H3PO4, and the HA was sintered at 900 °C. HA was then combined with collagen and chitosan solutions at HA/Coll/Chi mass ratios of 7:2:1, 7:1.5:1.5 and 7:1:2. FTIR analysis showed the PO4^3− and OH− absorption bands of HA, as well as shifts of the C=O and NH2 vibration bands from chitosan and collagen, indicating that bonds have formed between collagen, chitosan and HA. XRD data showed the HA and chitosan phases in the HA/Coll/Chi composite. Composites with a high collagen content have low crystallinity.
Introduction
Bone performs vital functions in the human body, so a fracture poses a serious health problem and requires healing techniques to repair the bone. One technique for bone healing is the use of synthetic bone graft biomaterials. A good synthetic bone graft has to be biocompatible, bioactive, nontoxic, osteoconductive, and osteoinductive [1][2]; besides that, the graft should have a composition and structure similar to bone, which contains >69% calcium phosphate (mainly hydroxyapatite), 21% collagen, 9% water, and 1% other components [3]. Hydroxyapatite as the main component has unfavorable mechanical strength, being brittle and breakable, so in synthesizing a bone graft it has to be combined with other materials into a composite. Polymer-matrix composites have several advantages, among others avoiding the problem of stress shielding and eliminating the second surgical procedure needed to remove an implant. The mineral phase, especially hydroxyapatite, provides stiffness, while the organic matrix provides tensile strength and flexibility [8]. Collagen has good biocompatibility and can be degraded and absorbed by the body. In the composite material, collagen and hydroxyapatite play the same roles as in natural bone [4]. Collagen can be isolated from various sources such as ox bone, goat bone, chicken claw, and fishbone. In this research the collagen was obtained from chicken claw. The osteoconductive and osteointegrative characteristics of a bone graft relate to its porosity level and pore size. Forming pores in a bone graft requires added materials that function as porogens. The porogen used must be biocompatible, biodegradable, and nontoxic; one such material is chitosan. Chitosan is produced by deacetylation of chitin. This material can be used as a filler in composite production. Chitosan is biocompatible, biodegradable, non-toxic, non-antigenic, and osteoconductive [5][6][7]. The osteoconductive character of chitosan means it can accelerate osteoblast growth in HA-chitosan composites and thus accelerate bone formation. In this research the bone graft was made as a composite of three materials, hydroxyapatite/collagen/chitosan, with mass compositions varied to resemble the composition of bone. Two methods can be used to synthesize the bone graft: ex-situ and in-situ. In the ex-situ method the polymers (collagen, chitosan) are added after the main material, hydroxyapatite, has been formed, while in the in-situ method collagen and chitosan are added during the hydroxyapatite synthesis. Some research indicates that the ex-situ method yields products of higher purity than the other method.
Material and Method
Calcium oxide was isolated from eggshell (Gallus gallus), and collagen was synthesized from chicken claw (Gallus gallus) obtained from Mojosari, Indonesia. Chitosan (85% DD), acetic acid, nitric acid, and sodium hydroxide of p.a. quality were purchased from Merck.
Synthesis of hydroxyapatite
Hydroxyapatite was synthesized by reacting calcium and phosphate precursors at a Ca/P concentration ratio of 1.67. CaO powder was added to demineralized water to yield calcium hydroxide. Phosphoric acid was then added with stirring until homogeneous, and the solution was adjusted with sodium hydroxide to pH 10. The mixture was aged at room temperature for 24 hours. The resulting suspension was filtered, and the sediment was washed with demineralized water and dried in an oven at 110 °C for 2 hours. It was then treated with nitric acid and sintered at 900 °C for 2 hours. The resulting crystals were cooled in the furnace to produce hydroxyapatite.
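The Ca/P = 1.67 ratio corresponds to the hydroxyapatite formula Ca10(PO4)6(OH)2 (10 Ca : 6 P per formula unit). The sketch below is only an illustration of the implied reagent arithmetic; the 0.01 mol batch size is a hypothetical example, not a quantity from the paper.

```python
# A minimal sketch (illustration only, not the authors' procedure): reagent
# amounts implied by the Ca/P = 1.67 stoichiometry of hydroxyapatite,
# Ca10(PO4)6(OH)2 -- 10 mol Ca and 6 mol P per formula unit.
M_CAO, M_H3PO4 = 56.08, 98.00   # molar masses in g/mol

def reagent_masses(mol_hap):
    """Masses of CaO and H3PO4 needed for mol_hap moles of hydroxyapatite."""
    n_ca, n_p = 10.0 * mol_hap, 6.0 * mol_hap
    assert abs(n_ca / n_p - 1.67) < 0.01    # the Ca/P ratio used in the paper
    return n_ca * M_CAO, n_p * M_H3PO4

m_cao, m_acid = reagent_masses(0.01)        # e.g. a hypothetical 0.01 mol batch
print(f"CaO: {m_cao:.2f} g, H3PO4: {m_acid:.2f} g")  # 5.61 g and 5.88 g
```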
Synthesis of hydroxyapatite-Chitosan-collagen composite
A quantity of chitosan was dissolved in acetic acid solution and stirred until homogeneous. Hydroxyapatite powder that had been dispersed in water was added dropwise to the chitosan solution. Collagen solution was then added. The resulting composite was freeze-dried and then coagulated in sodium hydroxide solution, after which the composite was freeze-dried again. The process was repeated for composites with HA/Coll/Chi mass compositions of 7:1.5:1.5 and 7:1:2. The HA/Coll/Chi composites were characterized for functional groups and crystallinity.
Results and discussion
In the hydroxyapatite synthesis, CaO powder was dissolved in demineralized water to form a Ca(OH)₂ solution. Phosphoric acid solution was added drop by drop so that the pH did not drop drastically. The rate of phosphoric acid addition strongly affects the pH obtained at the end of the synthesis. A drop in pH below 7 causes incomplete dissociation of phosphoric acid, yielding β-Ca₃(PO₄)₂ and CaO. Adding the phosphoric acid solution slowly increases the homogeneity of the solution [8,9]. In this research, the hydroxyapatite synthesis was carried out at 60 °C to maximize crystal formation and reduce the formation of the monoclinic crystal structure, which can form at temperatures below 60 °C [10]. The synthesized hydroxyapatite is expected to have the same structure as bone mineral, namely a hexagonal structure. When phosphoric acid is added to the calcium hydroxide solution, the solution slowly becomes acidic, while crystallization proceeds effectively under basic conditions [11]; therefore, sodium hydroxide must be added until pH 10, at which hydroxyapatite crystals are stable [12]. The hydroxyapatite synthesized at various sintering temperatures was compared with standard hydroxyapatite from Bank Jaringan. Figure 2 shows that all the bone grafts are solid and white in color. All the bone grafts contain hollow fibers, so they are not too hard when pressed. Based on visual inspection, the bone grafts appear to be homogeneously mixed. The HA/Coll/Chi 7:1:2 bone graft, with the highest chitosan content, is brass-white in color; the higher the chitosan content, the more the color tends toward brass-white. The HA/Coll/Chi 7:1:2 composite is also more solid and resilient than the 7:1.5:1.5 and 7:2:1 grafts; with increasing chitosan content, the graft appears more resilient. Table 1 shows that the crystallinity of the HA/Coll/Chi bone grafts decreases. The degree of crystallinity expresses the proportion of crystalline content in a material (Samsiah, 2009); the higher the degree of crystallinity, the more crystalline the material. The HA/Coll/Chi bone grafts have a lower degree of crystallinity than hydroxyapatite because of the addition of organic material, which makes the grafts more amorphous. This is because collagen and chitosan are dispersed throughout and have bonded with the apatite compound. The HA/Coll/Chi 7:2:1 bone graft has the lowest crystallinity among the grafts, indicating that the organic material affects the crystallinity of the bone graft.
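The degree of crystallinity discussed above is conventionally computed from integrated XRD intensities. The sketch below follows that convention only as an illustration; the peak shapes are synthetic placeholders, not the paper's data.

```python
# A minimal sketch (assumption, not the authors' analysis): degree of
# crystallinity from an XRD pattern, taken as the crystalline fraction of
# the total diffracted intensity after separating the amorphous halo.
import numpy as np

def degree_of_crystallinity(two_theta, i_total, i_amorphous):
    """DC (%) = (total area - amorphous area) / total area."""
    a_total = np.trapz(i_total, two_theta)
    a_amorphous = np.trapz(i_amorphous, two_theta)
    return 100.0 * (a_total - a_amorphous) / a_total

two_theta = np.linspace(20.0, 60.0, 400)                        # degrees 2-theta
i_amorphous = 50.0 * np.exp(-((two_theta - 32.0) / 10.0) ** 2)  # broad halo
i_total = i_amorphous + 400.0 * np.exp(-((two_theta - 31.8) / 0.2) ** 2)
print(f"DC = {degree_of_crystallinity(two_theta, i_total, i_amorphous):.1f}%")
```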
Conclusion
Hydroxyapatite produced from eggshell by the wet precipitation method has functional groups similar to those of hydroxyapatite from Bank Jaringan (HAp-BJ), namely the characteristic CO₃²⁻, PO₄³⁻, and OH⁻ functional groups. The HA/Coll/Chi bone graft with composition 7:2:1 has the lowest crystallinity among the grafts, indicating that collagen affects the bone graft's characteristics. | 2019-11-14T17:10:13.082Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "c26036d6cb678d0d70b10d531ca013dc138201c5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1317/1/012037",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "61a391aba3e7206b58d61d648baf62a73fa1646b",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
251648473 | pes2o/s2orc | v3-fos-license | Characterization of human epithelial resident memory regulatory T cells
Human resident memory regulatory T cells (Tregs) exist in normal, noninflamed skin. Except for one, all previous studies analyzed skin Tregs using full-thickness human skin. Considering that the thick dermis contains more Tregs than the thin epidermis, the current understanding of skin Tregs might be biased toward dermal Tregs. Therefore, we sought to determine the phenotype and function of human epidermal and epithelial Tregs. Human epidermis and epithelium were allowed to float on a medium, without adding any exogenous cytokines or stimulations, for two days, and then emigrants from the explants were analyzed. Foxp3 was selectively expressed in CD4+CD103− T cells in the various human epithelia, as it is highly demethylated. CD4+CD103−Foxp3+ cells suppressed the proliferation of other resident memory T cells. The generation and maintenance of epithelial Tregs were independent of hair density and Langerhans cells. Collectively, immune-suppressive CD4+CD103−Foxp3+ Tregs are present in the normal, noninflamed human epidermis and mucosal epithelia.
Introduction
The human skin is composed of three layers, namely, epidermis, dermis, and subcutaneous tissue. The epidermis is the outermost layer of the skin. Meanwhile, epithelium is the tissue that covers the internal and external surfaces of the body. In this context, the epidermis is a type of epithelial tissue. The dermis is basically made up of the structural protein known as collagen. It is much thicker than the epidermis and contains a variety of CD45+ immune cells that are derived from hematopoietic lineage cells (HLCs) (e.g., resident memory T (TRM) cells, dendritic cells, macrophages, natural killer cells, and mast cells). Meanwhile, the epidermis functions primarily as a barrier against various external assailants. It is believed to be composed primarily of CD45− keratinocytes, with CD45− melanocytes, Merkel cells, and CD45+ Langerhans cells (LCs) as minor components. The low number of CD45+ HLC-derived immune cells in the epidermis resulted in the concept that the epidermis is immunologically undynamic. Therefore, a recent finding on the presence of human TRM cells in the epidermis was a milestone (1).
Proliferation of human dermal Tregs is mediated by dermal fibroblasts and IL-15 through cell−cell contact, without antigen presentation or costimulation; these Tregs exert their suppressive function also through cell−cell contact, independent of IL-10 and TGF-β production (2). On the other hand, proliferation of human epidermal Tregs is mediated by LCs, the epidermal and epithelial antigen-presenting cells, indicating LCs' tolerogenic features in the steady state. These epidermal Tregs suppress the proliferation of autologous other skin TRM cells via cell−cell contact, dependent on MHC-II and CD80/CD86, suggesting the involvement of antigen presentation. Interestingly, the percentage of Foxp3+ and Ki-67+ cells among CD3+ cells is significantly higher in the epidermis than in the dermis, suggesting active in situ proliferation of human epidermal Tregs (5).
These previous elegant works elucidated the existence of immune-suppressive resident memory Tregs in the normal and noninflamed human skin (2)(3)(4)(5)(6)(7)(8). However, all previous studies except one (5) analyzed skin Tregs using full-thickness human skin. Considering that the thick dermis contains more TRM cells than the thin epidermis (14, 15), the current understanding of skin Tregs might be biased toward dermal Tregs. Therefore, this study focused on epidermal and epithelial Tregs. In addition, human epidermal and dermal TRM cells express CD69, a TRM marker, being further segregated into CD103+ and CD103− cells (1,15,16). Thus, differential Foxp3 expression and function between CD103+ and CD103− cells were also examined.
Sources of tissues
Healthy, noninflamed human samples were obtained from routinely discarded tissue following plastic and gynecological surgeries. Unless otherwise noted, healthy, noninflamed human skin from female mamma was used in the experiments. All lesional skin samples of treatment-naive atopic dermatitis (AD) and psoriasis were taken from the trunk of male patients.
Tissue processing
RPMI 1640 (Invitrogen Life Technologies, Carlsbad, CA) containing 10% FBS (Biowest, Nuaillé, France) and Anti-Anti (1:100; Gibco, Dublin, Ireland), without additional exogenous cytokines, was utilized as the culture medium. This study analyzed emigrants from the epidermis and mucosal epithelium recovered via the spontaneous migration method. In the indicated experiments, samples were enzymatically digested. In the spontaneous migration method, the skin or mucosa was washed with sterile cold PBS (Gibco, Dublin, Ireland) immediately after surgery at day 0. To obtain epidermal or epithelial sheets, the subcutaneous fat and deep dermis or lamina propria were thoroughly removed using a pair of scissors. The parts of skin composed of the epidermis and upper dermis, or of mucosa composed of epithelium and upper lamina propria, were cut into approximately 20 × 10 mm pieces and then incubated with Dispase II (2.5 mg/mL for mamma, scrotum, and glans penis; 1.5 mg/mL for urethra; and 1.25 mg/mL for vagina; Roche Diagnostics, Indianapolis, IN) dissolved in PBS (Gibco, Dublin, Ireland) overnight at 4°C. In the enzymatic digestion method, the epidermal and epithelial sheets were incubated with collagenase type IV (200 U/mL; Worthington, Lakewood, NJ) for 30 min at 37°C at day 1. After incubation, the sheets were divided into small pieces using a pair of forceps. To generate single-cell suspensions, these pieces were aspirated up and down 5 times with a 50-cc syringe. The suspension was then filtered thrice through a sterile mesh, with subsequent analysis on the same day (day 1). To obtain emigrants, the epidermal or epithelial sheets were floated in the culture medium for 2 days at 37°C. At day 3, these emigrants were analyzed. For cytokine expression, dead cells were eliminated from epidermal emigrants using the Dead Cell Removal Kit from Miltenyi Biotec GmbH (Bergisch Gladbach, Germany). Cells were cultured at 5 × 10^5 cells per well of a 48-well culture plate in 500 μL of culture medium supplemented with eBioscience Cell Stimulation Cocktail (500×) from Invitrogen (Carlsbad, CA) and eBioscience Protein Transport Inhibitor Cocktail (500×) from Invitrogen (Carlsbad, CA) for 6 h at 37°C. For intracellular staining, surface antigens were stained as described above, followed by cell fixation and permeabilization using the eBioscience™ Foxp3/Transcription Factor Staining Buffer Set (Invitrogen™, Carlsbad, CA) according to the manufacturer's instructions. Cells were then incubated with anti-Foxp3 mAbs together with mAbs for cytokines for 30 min at 4°C and washed twice in the buffer. Data were analyzed using FlowJo software (FlowJo, LLC, Ashland, OR).
Quantitative PCR
Total RNA was extracted from human noninflamed and inflamed epidermis using QIAzol ® Lysis Reagent (Qiagen, Hilden, Germany) and RNeasy ® Plus Universal Mini kit (Qiagen, Hilden, Germany) per the manufacturer's instructions. Reverse transcription was performed using ReverTra Ace ® qPCR RT Kit (Toyobo, Ohtsu, Japan) per the manufacturer's instructions. mRNA levels were determined using commercially available primer/probe sets (TaqMan ® Gene Expression Assay: Applied Biosystems, Foster City, CA) and the AB7500 real-time PCR system (Applied Biosystems, Foster City, CA). The amount of target gene mRNA obtained using real-time PCR was normalized against the amount of housekeeping control gene (ACTB) mRNA. Human Foxp3 primer was designed by Takara (Kyoto, Japan).
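The normalization against ACTB described above is the standard 2^(−ΔCt) relative-quantification arithmetic. The sketch below illustrates that arithmetic only; the Ct values are hypothetical, not from the paper.

```python
# A minimal sketch (assumption, not the authors' script): normalizing target
# Ct values to the ACTB housekeeping gene with the standard 2^(-dCt) method.
def relative_expression(ct_target, ct_actb):
    """Target mRNA level relative to ACTB for one sample."""
    d_ct = ct_target - ct_actb
    return 2.0 ** (-d_ct)

# e.g. FOXP3 Ct = 28.1, ACTB Ct = 18.4 (hypothetical values)
print(f"FOXP3/ACTB = {relative_expression(28.1, 18.4):.2e}")
```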
T cell isolation
Dead cells were eliminated from the epidermal emigrants using the Dead Cell Removal Kit (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany). CD8+ T cells were positively isolated from the epidermal emigrants using CD8 microbeads (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany). CD4+CD25− and CD4+CD25+ T cells were isolated from both the epidermal emigrants and the PBMCs using the CD4+CD25+ Regulatory T cell Isolation Kit (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany). To enhance the CD4+CD25+ T-cell purity, cells were passed through two consecutive columns. In the Treg suppression assay, isolated CD8+ T cells and CD4+CD25− T cells used as responders were labeled with 2.5 μM CFSE (Invitrogen Life Technologies, Carlsbad, CA) for 10 min at 37°C. These responders were cocultured with or without isolated CD4+CD25+ T cells at a 20:1 ratio in 96-well round-bottom plates that had been pre-immobilized with anti-CD3 (1 μg/mL; clone UCHT1; BioLegend, San Diego, CA) and anti-CD28 (3 μg/mL; clone CD28.2; BioLegend, San Diego, CA) mAbs overnight at 4°C. After culturing for 5 days at 37°C, CFSE expression in the responders was determined by flow cytometry.
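The readout of such a CFSE-based assay is commonly summarized as percent suppression of responder division. A minimal sketch of that calculation follows; the input fractions are hypothetical, not measured values from this study.

```python
# A minimal sketch (assumption): percent suppression from the fraction of
# CFSE-diluted (divided) responder cells with and without Tregs added.
def percent_suppression(divided_without_treg, divided_with_treg):
    """Standard readout: 100 x (1 - divided(+Treg) / divided(-Treg))."""
    return 100.0 * (1.0 - divided_with_treg / divided_without_treg)

print(percent_suppression(0.80, 0.25))  # hypothetical fractions -> ~69%
```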
Genomic DNA demethylation in the FOXP3 gene
Isolated T cells were resuspended in 400 μL of lysis buffer (100 mM NaCl, 100 mM Tris-HCl, 50 mM EDTA, 0.5% SDS). 2 μL of Proteinase K (20 mg/mL; Invitrogen, Waltham, MA) was added to the sample, which was incubated at 55°C for 24 h at 1,300 rpm. 400 μL of PCI (phenol/chloroform/isoamyl alcohol (25:24:1)) was used for the phenol-chloroform extraction (twice). 400 μL of CIA (chloroform/isoamyl alcohol (24:1)) was added for the chloroform extraction. 40 μL of 3 M sodium acetate, 1 μL of Ethachinmate (Nippon Gene, Tokyo, Japan), and 1 mL of 100% ethanol were used for the ethanol precipitation. The genomic DNA was eluted in 25 μL of TE buffer. Bisulfite treatment was performed using MethylEasy™ Xceed (Human Genetic Signatures, North Ryde, Australia) according to the manufacturer's instructions. The target locus was amplified by TaKaRa Ex Taq® Hot Start Version (Takara, Shiga, Japan) polymerase with the bisulfite sequencing primers previously reported (human FOXP3 CNS2 forward primer: TTGGGTTAAGTTTGTTGTAGGATAG and reverse primer: ATCTAAACCCTATTATCACAACCCC) (17). The PCR product was subjected to gel electrophoresis and extracted using the QIAEX II Gel Extraction Kit® (Qiagen, Venlo, Netherlands). TA cloning was performed using the DynaExpress TA PCR Cloning kit (Funakoshi, Tokyo, Japan). DH5α transformation was performed using E. coli DH5α Competent Cells (Takara, Shiga, Japan) and incubated overnight at 37°C.
White colonies were picked, and the denaturation reaction and rolling-circle amplification were performed using the illustra™ TempliPhi™ DNA Amplification Kit (Cytiva, Tokyo, Japan). Sequencing was performed using BigDye™ Terminator v3.1 (Applied Biosystems, Foster City, CA), the M13 Reverse primer (Invitrogen Life Technologies, Carlsbad, CA), and a 3500 Genetic Analyzer (Applied Biosystems, Foster City, CA).
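From the sequenced clones, the TSDR demethylation rate is simply the fraction of demethylated CpG calls. The sketch below illustrates this bookkeeping with hypothetical clone data (not the authors' pipeline); it also shows why, in female samples, X-chromosome inactivation caps the expected rate near 50% even for a pure Treg population.

```python
# A minimal sketch (assumption, not the authors' pipeline): demethylation rate
# of the FOXP3 TSDR from bisulfite-sequenced clones. Each clone is a list of
# CpG calls, True = demethylated (read as T after bisulfite conversion).
def demethylation_rate(clones):
    calls = [cpg for clone in clones for cpg in clone]
    return 100.0 * sum(calls) / len(calls)

# Hypothetical female Treg sample: one X chromosome is inactivated, so the
# rate saturates near 50% even when every cell is a bona fide Treg.
clones = [[True] * 9, [False] * 9, [True] * 9, [False] * 9]
print(f"TSDR demethylation: {demethylation_rate(clones):.0f}%")  # 50%
```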
Results
Higher IL-2Rα (CD25) and IL-2Rβ expression in the epidermal CD4+CD103− cells
Unless otherwise noted, healthy, noninflamed human skin from female mamma was used in the experiments. For epidermal T cell analysis, the skin needs to be incubated with Dispase II to separate the epidermis from the underlying dermis (Figure 1A). Laminin 332, a major component of the lamina densa, was stained on the dermal apical side (Figure 1B), indicating a lack of dermal components in the separated epidermis. In the spontaneous migration method, the epidermis was floated on a medium, without adding any exogenous cytokines and stimulations, for two days at 37°C (15). Then, emigrants from the epidermis were analyzed, followed by detection of CD1a+CD3− LCs and CD1a−CD3+ T cells. The latter were divided into CD4+ and CD8+ T cells (Figure 1C), both of which expressed CD69, a TRM marker, with or without CD103 (another TRM marker) expression (Figure 1D), as previously reported (1,15). Thereafter, epidermal CD4+ T cells were focused upon. Both the CD103+ and CD103− fractions equivalently demonstrated a memory phenotype and skin tropism (Figure 1E). Common γ-chain and IL-7Rα expressions were comparable between the CD103+ and CD103− fractions, whereas IL-2Rα (CD25) and IL-2Rβ expressions were significantly higher in the CD103− fraction (Figures 1F, G), suggesting higher IL-2 responsiveness.
Predominant Foxp3 expression in the epidermal and epithelial CD4+CD103− cells
As expected from the high CD25 expression in the CD103− fraction, Foxp3 was predominantly expressed in this fraction (Figures 2A, B). Additionally, Foxp3+CD103− cells comprised ~15% of epidermal CD4+ T cells (Figure 2C), and approximately 40% of the CD103− fraction consistently expressed Foxp3 (Figure 2D). However, the spontaneous migration method induces T cell activation (15) and Foxp3 is a T-cell activation marker (18). Thus, the epidermis was enzymatically digested with collagenase type IV, which confirmed Foxp3 expression in the CD103− fraction (Figures 2E-H), suggesting that Foxp3 expression in the CD103− fraction was not induced by the spontaneous migration method. Consistent with the flow cytometry-based data, the human epidermis from healthy volunteers contained small CD4+Foxp3+ cell populations (Figure 2I), and the expression of Foxp3 and CD103 was mutually exclusive (Figure 2J). Interestingly, Foxp3+CD103− cells consistently comprised approximately 15% of epithelial CD4+ T cells in several female and male organs, except in the scrotal epidermis (Figures 3A, B). Based on the preferential localization of epidermal Tregs in the hair follicles (5,6), hairy areas such as the scalp and face contain more Tregs (6). However, the scrotal epidermis, a hairy area, has fewer Tregs. In addition, epithelial Tregs existed consistently in tissues with LCs (e.g., skin (mamma) and type II mucosa (vagina and glans penis)) and without LCs (e.g., type I mucosa (urethra)). Collectively, Foxp3 is expressed in CD103− cells. In addition, epithelial Treg density might differ between parts of the body. Moreover, epithelial Treg density does not depend only on hair density, and LCs are dispensable for epithelial Treg generation and maintenance.
Tregs in the epidermis of inflammatory skin conditions
Tregs have been shown to increase in number and proliferative ability in the skin of patients with inflammatory skin diseases (e.g., psoriasis) (6). Thus, we aimed to determine whether Tregs increased specifically in the epidermis of patients with inflammatory skin diseases. To this end, epidermal FOXP3 mRNA expression was examined using noninflamed mamma skin from healthy female volunteers and inflamed lesional trunk skin from male patients with treatment-naive atopic dermatitis and psoriasis. As a result, FOXP3 mRNA expression (Figure 4A) and the number of Tregs marginally increased (Figure 4B). Therefore, Foxp3-expressing epidermal CD4+ TRM cells increased in the epidermis under inflamed skin conditions.
Expression of Treg signatures in the epidermal Foxp3 + CD4 + T cells and Foxp3 − CD4 + T cells
In human CD4+CD25+Foxp3+ Tregs of peripheral blood and full-thickness skin, IL-7Rα (CD127) expression inversely correlates with suppressive function (6,19). In addition, expression of Treg signatures like CTLA-4, PD-1, CD27, ICOS, TIGIT, and CD39 is associated with suppressive function and lineage stability. Thus, the expressions of these molecules were examined. Epidermal Foxp3+CD4+ T cells expressed higher CD25 and lower CD127 than Foxp3−CD4+ cells. CD28 expression was comparable between both fractions. However, the expressions of CTLA-4 (intracellular), PD-1, CD27, ICOS, TIGIT, and CD39 were significantly higher in Foxp3+CD4+ T cells than in Foxp3−CD4+ cells. In contrast, CD226 expression showed the opposite pattern. Of note, CD25, CD27, and CD39 were predominantly expressed in the Foxp3+CD4+ T cells (Figures 5A, B). These data indicate an expected suppressive capacity of epidermal Foxp3+CD4+ T cells.
Less inflammatory cytokine production and activated phenotype of the epidermal Foxp3+CD4+ T cells
Human Tregs in the full-thickness skin reportedly produce less IFN-γ and IL-17A compared with non-Tregs (6). Thus, cytokine production by epidermal Foxp3+CD4+ T cells was examined. Epidermal Foxp3+CD4+ cells expressed significantly less IFN-γ and tended to express less TNF-α, IL-4, and IL-17A, but showed comparable TGF-β expression, compared to epidermal Foxp3−CD4+ T cells (Figures 6A, B). IL-10 production was undetected regardless of Foxp3 expression in this system (data not shown).
Highly demethylated FOXP3 in the epidermal CD4+CD103− cells
To validate the suppressive function of epidermal Foxp3+CD4+ T cells, which mostly consisted of the CD103− fraction, the Treg population was magnetically enriched using the CD4+CD25+ Regulatory T cell Isolation Kit (Miltenyi Biotec). The Foxp3+ cell purity was enhanced to approximately 90% in the PBMCs but only up to 50% in the epidermis (Figure 7A).
Nevertheless, this incompletely enriched epidermal Treg population strongly suppressed the proliferation of both allogeneic epidermal CD4+CD25− T cells and CD8+ T cells (Figure 7B). Weak cytokine production and the suppressive function of peripheral blood Tregs strongly correlate with genomic DNA demethylation in intron 1 of FOXP3 (the TSDR locus) (20-22). Therefore, FOXP3 in CD4+CD103− Tregs with high CD25 expression is expected to be highly demethylated. Hence, the methylation patterns of upstream FOXP3 CpG islands in CD4+CD25− and CD4+CD25+ T cells in the PBMCs and epidermis from female subjects were analyzed. The FOXP3 demethylation rate was higher in CD4+CD25+ T cells of the PBMCs and the epidermis than in CD4+CD25− T cells (Figure 7C). Because female samples were used in these experiments and FOXP3 is on the X chromosome, one allele is subject to X-chromosome inactivation; the uppermost limit of FOXP3 demethylation is therefore 50% (23). Given this limit, FOXP3 in the epidermal CD4+CD25+ Tregs, which mostly consisted of CD103− cells, appears to be highly demethylated. Moreover, the Foxp3+ cell purity in the epidermal CD4+CD25+ fraction yielded by the CD4+CD25+ Regulatory T cell Isolation Kit was up to 50% (Figure 7A). These data suggest that the FOXP3 gene in the epidermal CD4+CD25+CD103− cells is highly demethylated and that these epidermal Tregs have a potent suppressive function.
Discussion
The presence of resident memory Tregs in the human epidermis has been previously reported (5). They are abundant in hair follicles (5,6). Furthermore, epidermal LCs regulate their proliferation (5). However, their precise phenotype was unknown. Therefore, the current study reports that Foxp3 was predominantly expressed in CD4+CD103− T cells of several human epithelial types, and that FOXP3 DNA in CD4+CD103− T cells was highly demethylated compared to that in CD4+CD103+ T cells. Moreover, epidermal CD4+CD103−Foxp3+ cells from healthy, noninflamed human skin inhibited the proliferation of both allogeneic epidermal CD4+CD103+ and CD8+ T cells, which indicated a bona fide Treg identity. Foxp3-expressing epidermal CD4+ TRM cells increased marginally in the epidermis of inflamed skin. However, given that Foxp3 is a T-cell activation marker (18), it was uncertain whether these cells were "true" Tregs.
Human epidermal and dermal T cells are divided into CD103+ and CD103− cells (1,15). Since CD103 (integrin αE) binds to E-cadherin expressed on keratinocytes (1, 24), CD103 on T cells is likely required for tissue retention. However, the migratory behavior of human epidermal CD8+ TRM cells was comparable irrespective of CD103 expression (25), suggesting that epidermal retention of human T cells is not mediated only by CD103. Human epidermal and dermal TRM cells, including Tregs, strongly express the TRM marker CD69 irrespective of CD103 expression (1,15). CD69 has been shown to facilitate T-cell tissue retention (26,27). Therefore, skin CD103− T cells might maintain their local retention via putative adhesion molecules (e.g., CD69).
Foxp3 was expressed predominantly in the CD4+CD103− T cells of several human epithelial types. Consistently, human CD25highFoxP3+ Tregs barely express CD103 in various tissues such as blood (28). Conversely, Foxp3 is expressed preferentially in CD4+CD103+ T cells of murine skin and other murine organs and tissues (28). The underlying mechanisms by which CD103 is expressed differently by Foxp3+ Tregs in mice and humans remain unknown. Why is Foxp3 expressed in the human CD103− fraction? First, human thymic CD4 single-positive T cells are unable to express CD103 regardless of Foxp3 expression (29). The percentage of human skin Tregs among CD4+ T cells reaches a peak during the second trimester via both continued thymic egress and local proliferation (8). Moreover, FOXP3 DNA in CD4+CD103− T cells was significantly demethylated. These data suggest that the epidermal CD4+CD103−Foxp3+ cells examined in the current study are thymus-derived Tregs.
The CD103− fraction of human epidermal CD4+ TRM cells strongly expressed IL-2Rα (CD25) and IL-2Rβ. Additionally, Foxp3 was expressed predominantly in the CD103− fraction. These data suggest that IL-2 is involved in the regulation of human epidermal Tregs. In murine Tregs of secondary lymphoid organs and skin, IL-2 is indispensable for Treg generation (30), Foxp3 induction (31, 32), and the stability of Foxp3 expression (33). Moreover, IL-2 induces the expression of CD25, CTLA-4, and CD39 in Tregs, thereby enhancing suppressive function (34). However, the maintenance of murine Tregs appears to be mediated by IL-7 rather than IL-2 (30,32). Meanwhile, the regulation of human Tregs by common γ-chain cytokines is less well known. In human peripheral blood and skin Tregs, IL-7Rα expression correlates inversely with suppressive function (6,19). Additionally, the proliferation of human skin Tregs is mediated by IL-2 and IL-15 (5). Further analyses are required to elucidate the regulation of human Tregs by common γ-chain cytokines.
Furthermore, Treg signatures (e.g., CTLA-4, PD-1, CD27, ICOS, TIGIT, and CD39) were found predominantly in the CD4+Foxp3+ T cells. Murine Treg-specific CTLA-4 depletion leads to the spontaneous development of systemic lymphoproliferation and T-cell-mediated autoimmune diseases, and an impaired suppressive function (35). Conversely, murine Treg-specific PD-1 depletion leads to an enhanced activated phenotype and suppressive function (36). Both human and murine skin Tregs strongly express CD27, which inhibits Treg/Th17 plasticity (7). ICOS is dispensable for the induction of murine Foxp3; however, it labels Tregs with superior suppressive capacity (37) and promotes Treg survival (38). The coinhibitory receptor TIGIT and the costimulatory receptor CD226 both bind the common ligand CD155. TIGIT is upregulated on activated human peripheral blood Tregs, facilitating lineage stability and suppressive capacity (39). CD39 is expressed primarily by immune-suppressive Tregs in both humans and mice, thereby suppressing the development of inflammatory autoimmune diseases (40). These findings suggest that CD4+CD103−Foxp3+ cells play an important role in epithelial immune homeostasis. For example, depigmentation in vitiligo is mediated by IFN-γ-producing epidermal CD49a+CD8+ TRM cells (41). We found that epidermal CD4+CD103−Foxp3+ cells inhibited the proliferation of both allogeneic epidermal CD4+CD103+ and CD8+ T cells. As a result, an immunological imbalance between CD4+CD103−Foxp3+ cells and other effector cells may initiate or worsen epidermal skin diseases. Furthermore, the abundance of TRM cells in the human epidermis, including CD4+CD103−Foxp3+ cells, raises the possibility of autonomous immunity in the epidermis, independent of the effect of dermal immunity.
As mentioned, epidermal Tregs are assumed to be regulated by LCs in the hair follicles. However, the reduced number of CD4+CD103−Foxp3+ cells in the scrotal epidermis, a hairy area, and the existence of CD4+CD103−Foxp3+ cells in the urethral epithelium, devoid of hair follicles and LCs, suggest that the generation and maintenance of epithelial Tregs are independent of hair density and LCs. The underlying mechanisms by which epithelial Tregs are generated and maintained in such organs remain open.
There is no definitively superior method for analyzing epidermal and dermal T cells whose antigen expression is unmodulated, because Dispase II incubation alone can cleave T cell and TRM markers (15). However, in terms of T cell antigen protection, the spontaneous migration method used in this study outperforms the enzymatic digestion method (15). Furthermore, we examined epidermal emigrants produced using the spontaneous migration method without the addition of any exogenous cytokines or stimulations. Thus, the current data in this study originate from relatively natural human epidermal T cells.
A limitation in this study was the use of mamma skin from female volunteers. The information might vary if samples from male volunteers and/or other skin regions (e.g., scrotum) were used.
Lastly, further studies are needed to uncover the modulation of the phenotype and function of epithelial Tregs in the inflamed condition and the involvement of epithelial Tregs in the pathomechanisms of autoimmune skin diseases.
Data availability statement
The data that support the findings of this study are available from the corresponding author, YO, upon reasonable request.
Ethics statement
The studies involving human participants were reviewed and approved by The Institutional Review Board of the University Hospital. The patients/participants provided their written informed consent to participate in this study. | 2022-08-19T13:26:24.764Z | 2022-08-19T00:00:00.000 | {
"year": 2022,
"sha1": "392fd0194a8c234acca97a36690a3ab47c349e4e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "392fd0194a8c234acca97a36690a3ab47c349e4e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56411351 | pes2o/s2orc | v3-fos-license | Relative energetics and structural properties of zirconia using a self-consistent tight-binding model
We describe an empirical, self-consistent, orthogonal tight-binding model for zirconia, which allows for the polarizability of the anions at dipole and quadrupole levels and for crystal field splitting of the cation d orbitals. This is achieved by mixing the orbitals of different symmetry on a site with coupling coefficients driven by the Coulomb potentials up to octapole level. The additional forces on atoms due to the self-consistency and polarizabilities are exactly obtained by straightforward electrostatics, by analogy with the Hellmann-Feynman theorem as applied in first-principles calculations. The model correctly orders the zero temperature energies of all zirconia polymorphs. The Zr-O matrix elements of the Hamiltonian, which measure covalency, make a greater contribution than the polarizability to the energy differences between phases. Results for elastic constants of the cubic and tetragonal phases and phonon frequencies of the cubic phase are also presented and compared with some experimental data and first-principles calculations. We suggest that the model will be useful for studying finite temperature effects by means of molecular dynamics.
I. INTRODUCTION
The classical empirical models of zirconia are based on the a priori assumption of its ionicity. Empirical approaches like the Shell Model (SM) or the Rigid Ion Model (RIM) described the structural, 14 dynamical 15, 16 and transport [17][18][19] properties of the phases on which they were parameterized, but failed to predict the absolute stability of the m structure.
The most detailed of such models was developed by Wilson et al. 20 However, further calculations 21 carried out with this model revealed that even though it predicted the correct energy ordering of the c, t and m phases, it predicted that the rutile structure should be even more stable, and this phase is never observed experimentally in zirconia.
The experience gained with the CIM-DQ model suggests that a successful empirical model of zirconia should describe the effects of the atomic polarization, but should also go beyond a purely ionic description of the bonding. The partial covalent character of zirconia has already been postulated 22 and is evident from electronic structure calculations based on density functional theory. In this paper we further investigate the recently proposed polarizable self-consistent tight binding (SC-TB) model [23][24][25] which combines the physical concepts of covalency, ionicity and polarizability. Using the SC-TB model we are drawn to the conclusion that the covalent character of the Zr-O bond makes a significant contribution to the relative energetics of different structures, which would explain the limited predictive power of the previous ionic models.
There have been several previous approaches to analyzing the structural and electronic properties of zirconia. Boyer and Klein 26 used the APW method to derive pair potentials with which to investigate the equation of state of the c phase. Cohen et al. 27 calculated the relative energetics and the elasticity using the Potential Induced Breathing (PIB) method based on the Gordon-Kim approach. Zandiehnadem et al. 28 studied the electronic structure with a first principles LCAO method. The FLAPW calculations of Jansen 29 predicted for the first time the correct energetic ordering between the c and t structures at zero absolute temperature, identifying the double well in the potential energy that governs their relative stability. The double well was subsequently confirmed by ab initio Hartree-Fock (HF) calculations, 30,31 but these did not predict the stability of the m structure over the t one. Only the very recent Density Functional Theory (DFT) calculations 32-34 consistently reproduce the relative energetics of the three zirconia polymorphs at 0 K.
The plan of the present paper is as follows. In Section II we describe the model used in the calculations, the inclusion of the atomic polarizability in the TB framework, and the parameterization procedure. A preliminary account of this work has been published. 24 We have made DFT calculations of band structures of the simple structures for this purpose, using a new full-potential, linear muffin-tin orbital method (NFP-LMTO). The predictive power of the new model is tested against the DFT calculations in Section III A, where we study the relative energetics of zirconia. Section III B focuses on the relationship between the c and t structures: the Landau theory of phase transformation is used to interpret the results of the static calculations. In Section IV, we explore the elastic and the vibrational properties of the high-symmetry phases. The results are summarized in the concluding Section.
A. Including polarizabilities in TB
In the TB approximation the crystal wave function can be expressed as a linear combination of atom-centered orbitals, which we denote |RL⟩:
|ψ_nk⟩ = Σ_RL c^nk_RL |RL⟩,   (1)
where L is a composite angular momentum index L = (ℓ, m) of the atomic orbital centered on the site whose position is R, and n and k are the band and k-vector indices of the single-particle wave function. For the purpose of derivation, we express the local orbitals as a product of a radial function and a real spherical harmonic,
⟨r|RL⟩ = f_ℓ(|r − R|) Y_L(θ, φ),   (2)
although in our empirical TB scheme the explicit functional forms of the radial wave functions are not required. To simplify the notation, we will frequently suppress the site index R, in which case one can take it we are referring to an atom at the origin and r is a small vector in its neighborhood. If we assume the on-site charge distribution to be localized, then its total multipole moment Q_L has a monopole contribution from the ionic core charge and a multipole (including monopole) contribution from the valence charge:
Q_RL = Z_R δ_ℓ0 + Q^e_RL.   (3)
As Stone 37 points out, the electronic multipole moment on a site is the expectation value of the operator
Q̂_L = e r^ℓ Y_L(θ, φ),   (4)
where e is the charge of the electron. Neglecting inter-site terms like ⟨R′L′|Q̂_RL|R″L″⟩ for R′, R″ ≠ R, the definition of the on-site multipole moment is therefore:
Q^e_RL = Σ_nk^occ Σ_L′L″ c^nk*_RL′ c^nk_RL″ ⟨RL′|Q̂_L|RL″⟩.   (5)
By invoking equations (2) and (4), the last factor of Eq.(5) can be expressed as a product of two quantities, the Gaunt coefficients C_L′L″L, which dictate the selection rules, and the integrals ∆_ℓ′ℓ″ℓ, which will be new parameters of the model:
⟨RL′|Q̂_L|RL″⟩ = e C_L′L″L ∆_ℓ′ℓ″ℓ,   (6)
C_L′L″L = ∫ Y_L′ Y_L″ Y_L dΩ,   (7)
∆_ℓ′ℓ″ℓ = ∫ f_ℓ′(r) r^ℓ f_ℓ″(r) r² dr,   (8)
where dΩ stands for the element of solid angle sin θ dθ dφ. The rôle of the Gaunt coefficients, which depend on the angular part of the wave function only, is to select the term with symmetry L arising from the coupling of the on-site orbitals L′ and L″. The ∆ parameters, depending on the radial part of the wave function, determine the magnitude of the coupling.
The substitution of Eq.(6) in Eq.(5) defines the multipole moment of symmetry L on the site R.
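Because the selection rules above are purely angular, they can be checked independently of the empirical ∆ parameters. The sketch below uses SymPy's gaunt(), which is defined for complex spherical harmonics (the model uses real ones), but the ℓ selection rules — |ℓ′ − ℓ″| ≤ ℓ ≤ ℓ′ + ℓ″ with ℓ′ + ℓ″ + ℓ even — are identical.

```python
# A minimal sketch of the selection rules enforced by the Gaunt coefficients.
from sympy.physics.wigner import gaunt

# s-p coupling (l'=0, l''=1) can generate a dipole (l=1) moment ...
print(gaunt(0, 1, 1, 0, 0, 0))        # nonzero
# ... while p-p coupling (l'=l''=1) generates monopole (l=0) and
# quadrupole (l=2) moments but no dipole: the l=1 coefficient vanishes.
for l in (0, 1, 2):
    print(l, gaunt(1, 1, l, 0, 0, 0))
```

This is consistent with the roles of the parameters described later: ∆_spp (s-p coupling) controls the anion dipoles, while ∆_ppd (p-p coupling) controls their quadrupoles.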
Having defined the on-site multipole moments, we can calculate the fields which they generate on all the lattice sites. The derivation uses standard results from classical electrostatics. The electrostatic potential is expanded in partial waves about the site:
V(r) = Σ_L V_RL r^ℓ Y_L(θ, φ),   (9)
where, using the Poisson equation,
V_RL = Σ_{R′≠R} Σ_L′ B̃_{RL,R′L′} Q_{R′L′},   (10)
and
B̃_{RL,R′L′} ∝ Σ_L″ C_LL′L″ Y_L″(R − R′) / |R − R′|^(ℓ″+1).   (11)
The sum over L″ is restricted to the values for which ℓ″ = ℓ + ℓ′; the B̃_LL′ are proportional to the well-known LMTO-ASA structure constants. 38 The component of electrostatic potential V_L couples different orbitals on a site, giving the matrix elements:
H′_{RL′,RL″} = Σ_L e C_L′L″L ∆_ℓ′ℓ″ℓ V_RL.   (12)
The diagonal elements of the Hamiltonian are adjusted by using a single Hubbard U in the standard way, which adds a term U δN_Rℓ to each diagonal matrix element. The quantities δN_Rℓ are the changes in the electronic charge projected onto a site and orbital compared to the input, non-self-consistent charge. We use the standard Mulliken projection.
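The self-consistency cycle described in the next paragraph can be reduced to a toy model: shift the diagonal of the Hamiltonian by U·δN from Mulliken charges, rediagonalize, and mix the charges. The sketch below is a hypothetical minimal implementation, not the authors' code; the matrix, mixing parameter, and occupations in the usage line are placeholders (the monopole-only case, omitting the multipole terms of Eqs. (9)-(12)).

```python
# A schematic sketch (assumption) of a self-consistent orthogonal-TB cycle
# with a Hubbard-U on-site shift, Mulliken charges, and linear charge mixing.
import numpy as np

def scf_loop(h0, n_occ, n0, u_hubbard, mix=0.3, tol=1e-8):
    """h0: non-self-consistent Hamiltonian; n0: input orbital charges."""
    n = n0.copy()
    for _ in range(200):
        h = h0 + np.diag(u_hubbard * (n - n0))       # on-site shift U*dN
        eps, c = np.linalg.eigh(h)
        # Mulliken populations of the n_occ lowest (doubly occupied) states
        n_new = 2.0 * (np.abs(c[:, :n_occ]) ** 2).sum(axis=1)
        if np.max(np.abs(n_new - n)) < tol:
            break
        n = (1.0 - mix) * n + mix * n_new            # linear charge mixing
    return eps, c, n

h0 = np.array([[0.0, -1.0], [-1.0, 2.0]])            # hypothetical 2-orbital model
eps, c, n = scf_loop(h0, n_occ=1, n0=np.array([1.0, 1.0]), u_hubbard=2.0)
print(eps, n)
```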
Finally, the Schrödinger equation is solved using a self-consistent iterative procedure with charge mixing to obtain the coefficients c^nk_RL and hence the multipoles. It is useful to step back at this point and compare the above model with the Hohenberg-Kohn-Sham (HKS) one, whose exchange and correlation energy functional U_xc[n] has been expanded to second order in the electron density n(r): 39
E = Σ_nk^occ ⟨ψ_nk| T̂_S + V^i_0 + V^H_0 + V^xc_0 |ψ_nk⟩ − U_H[n_0] + U_xc[n_0] − ∫ V^xc_0 n_0 dr + U_ii
  + U_H[δn] + ½ ∫∫ (δ²U_xc/δn δn′)|_{n_0} δn δn′ dr dr′.   (13)
Here n_0 denotes a reference electron density, which we will consider as a superposition of spherical ionic charges; T̂_S is the kinetic energy operator of the non-interacting electron gas; V^xc_0, V^H_0 and V^i_0 are the exchange and correlation, Hartree and ionic potentials calculated at the reference charge n_0; δn denotes the deviation from that reference (δn = n − n_0), and n′ refers to the electron density at r′. U_H and U_ii are respectively the Hartree and the ion-ion electrostatic energies.
Without the last line, this is simply the Harris-Foulkes functional. It generates a non-self-consistent TB model in which the first term is the sum of the eigenvalues while the second is a sum of pair potentials. 40 If the last line is included, the total energy must be minimized iteratively, and it then provides the self-consistency correction to the Kohn-Sham Hamiltonian.
The last line of Eq.(13) represents the Hartree energy of the deviation from the reference charge, U_H[δn], and the second-order term of the U_xc Taylor expansion. We can identify this term in our SC-TB model as follows:
E^(2) = ½ Σ_{RL≠R′L′} Q_RL B̃_{RL,R′L′} Q_{R′L′} + ½ U Σ_Rℓ (δN_Rℓ)².   (14)
Our total energy in the SC-TB model is therefore
E = Σ_nk^occ ⟨ψ_nk|Ĥ^0|ψ_nk⟩ + U_pair + E^(2).   (15)
It can be verified that, by minimizing the above expression with respect to the expansion coefficients in the wave functions, we recover the Schrödinger equation with the SC-TB Hamiltonian.
Calculation of the forces on the ions is very straightforward once we have the self-consistent wave functions and multipoles. For if an ion is moved a small distance δR, there is no change in the total electronic energy to first order in the δc^nk_RL. Therefore we can calculate the force due to the change in the first term of (15) by the conventional formulae, using the derivatives of the non-self-consistent Hamiltonian matrix elements (see following section). In calculating the forces due to the last term of (15) we can hold the multipoles fixed and use standard electrostatics. There is no contribution to the forces from the on-site energy containing U. The simple form of these results for the forces in TB is directly analogous to the application of the Hellmann-Feynman theorem in DFT.
B. Parameterization
Each parameter of the model has been adjusted to the results of NFP-LMTO calculations, details of which are specified in the previous work on zirconia. 24 Our TB description of zirconia uses a minimal basis of atomic orbitals. The oxygen atoms are modelled with 2p and 3s orbitals and with a fixed core charge of +4, while on the zirconium atoms there are 4d orbitals and a core charge of +4. The purpose of the 3s orbital on the oxygen is twofold: to allow an extra degree of freedom for polarization, which is otherwise restricted to charge transfer between its 2p orbitals, and to better reproduce the structure of the conduction bands.
A repulsive Born-Mayer pair potential U_pair has been chosen in order to reproduce the lattice parameter and the bulk modulus of the c phase. Only the first Zr-O coordination shell has been included in this interaction.
The Hamiltonian H^0 has been adjusted to the ab initio electronic structure of the c phase shown in Figure 2; the resulting parameters are collected in Table I. The basis set chosen reduces the number of symmetry-allowed ∆ parameters to 4: ∆_spp, ∆_ppd, ∆_ddd and ∆_ddg. The first two refer to the s and p orbitals of the oxygen ions, the last two to the d orbitals on the zirconium.
In the highly symmetric c structure the first non-spherical terms of the potential V_L on the cation and anion sites have g and f symmetry, respectively. The latter cannot interact with the oxygen orbitals, while the former splits the energy levels of the zirconium d orbitals, and ∆_ddg determines the magnitude of the energy splitting δε. Cubic crystal field theory 42 predicts the proportionality between δε and the radial distribution of charge ⟨r⁴⟩, which is the definition of ∆_ddg given in Eq. (8) (Figure 2). Less symmetric structures are necessary to parameterize the remaining ∆'s. In the rutile phase, the ℓ = 3 component of the crystal field acting on the oxygen ions splits the p levels.
Consequently, it contributes to the width of the 2p band: this effect is controlled by ∆ ppd which we adjust to match the ab initio band structure of the rutile phase. The last term ∆ spp has been chosen in order to reproduce the depth of the double well in the potential energy of the t structure. The c and the t phases were used in the parameterization procedure, therefore there is automatic agreement of the two methods for these crystal structures. The true prediction of the model is the absolute stability of the monoclinic phase. This indicates the transferability of the parameters between the phases.
The rutile phase, which is not experimentally observed, has been included in the study because further calculations with the CIM-DQ 21,43 model predicted the rutile phase to be more stable than the monoclinic one. Figure 3 shows that the SC-TB model does not suffer from this problem, although the relative energy of the rutile phase is less than with the DFT. To our knowledge, the SC-TB is the first semi-empirical model which reproduces the correct ordering of these polymorphs at zero temperature, including the stability of the m phase. Table II summarizes the structural properties calculated with the NFP-LMTO method and with the polarizable SC-TB model, comparing them with other theoretical and experimental works. The c and m lattice parameters are referred to the 12-atoms unit cell, while the t ones are given in terms of the 6-atoms unit cell. A comparison of the energy differences between the phases of zirconia calculated with different methods is given in Table III.
High-Pressure phases
Under pressure, the low-temperature m phase transforms to an orthorhombic structure, known as ortho I (o I ), whose crystallography is still controversial. X-ray diffraction analysis 44,45 suggests it belongs to the Pbcm space group while neutron diffraction studies 46,47 propose the Pbca space group. We carried out the calculations using the latter structure.
The phase transition pressure strongly depends on the state of the sample and is believed to be between 3 and 6 GPa. 48-50 A second pressure-induced phase transition is observed around 15 GPa, 50 where the o I transforms to the orthorhombic phase termed ortho II (o II ).
The latter is isostructural to cotunnite (PbCl₂) and belongs to the Pnam space group. 51 The pressure increases the coordination number of the zirconium atoms from 7 to 9.
A comprehensive first-principles study of the two orthorhombic phases has apparently not yet been made: Stapper et al. 33 studied the o I structure only, while Jomard et al. 34 focused on the o II phase.
The atomic environment of the high-pressure phases is completely different from that of the c and t phases used in the parameterization of the TB model; therefore, these orthorhombic structures provide a severe benchmark for the transferability of the TB parameters.
The energy ordering of the phases predicted by the TB model is the same as that obtained by combining the results of Refs. 33 and 34. The numerical values of the energy differences are summarized in Table III and compare reasonably well with the ab initio results. The energy-volume curves of the orthorhombic phases are shown in Figure 4: all the degrees of freedom were fully relaxed and their values are collected in Table IV.
Although the TB model predicts the correct relative energetics of the phases, it is not capable of describing the subtle pressure-induced phase transformation m ↔ o I . Figure 4 shows the common tangent between the m and the o II phases. As the pressure is increased, the model misses the correct sequence of the phases, predicting an m ↔ o II pressure-induced phase transformation at 5 GPa.
Static calculations
The relationship between the cubic and the tetragonal phases is governed by a volume-dependent double well in the potential energy. Since the FLAPW calculations of Jansen, 52,29 who predicted it first, the double well has been confirmed by several other ab initio calculations and it is now well established.
In this section we analyze the nature of the 0 K energy surface by combining the information gained using two very different approaches: the NFP-LMTO method and the polarizable TB model. The qualitative and quantitative agreement between the results of the two calculations, shown in the previous section, entitles us to use the physical picture provided by the simpler model to interpret the ab initio results.
Starting from the c phase, the t structure can be obtained by continuously stretching the unit cell along the c crystallographic direction and by displacing the oxygen columns by δ along the tetragonal axis according to the X₂⁻ mode of vibration (Figure 1). We calculated the total energy of the crystal using the two methods, for different values of (δ, c/a) at several volumes.
The energy curve exhibits a single-well or a double-well structure depending on the specific volume. At small volumes, V₁, the tetragonal distortion is energetically unfavored and the equilibrium structure is cubic (Figure 6). When the cubic phase is stable, there is no distinct metastable tetragonal phase with which to compare its energy, so the energies of the two phases merge. At larger volumes, V₂, a structural instability appears and the c structure spontaneously distorts to the t one (Figure 6).
The curvature of the energy surfaces is related to the phase transition mechanism. It is clear from Figure 6 that ∂²E/∂η² is positive, while ∂²E/∂δ² is negative: this suggests that the phase transition is driven by the δ instability and that the adjustment of the c/a ratio is a secondary effect. The coupling between these two order parameters will be further discussed when we interpret the double well using Landau theory.
Our LDA and TB results for the depth of the double well at the t-phase equilibrium volume, V₂, are consistent with the recent LDA values of ≈ 7 mRy. 33,34 This energy barrier for the 6-atom unit cell corresponds to a temperature of ≈ 1100 K. The same result was obtained by Jansen 52 with the FLAPW method, who proposed a value of ≈ 1200 K. It is natural that these temperatures, extrapolated from the 0 K potential energy, underestimate the experimental phase transition temperature of 2570 K. 6 The experimentally observed phase transition temperature can be considered as the sum of the kinetic contributions of all the activated eigenmodes, while the calculated energy barrier refers to the kinetic contribution of the X₂⁻ eigenmode only. Even though it is reasonable to expect that at the phase transition the soft mode in the phonon spectra (Figure 11) will be highly weighted in the total density of states, the kinetic energy kT associated with all the other modes of vibration will still contribute to the measured phase transition temperature. By adjusting the various parameters describing ionicity, covalency and polarizability of the TB model we can select and isolate the effects that induce the double well, but before doing so it is instructive to understand how a simple RIM answers the same question. It has been shown 20 that it is possible to reproduce the double well with a RIM in which there are two contributions: a repulsive short-ranged pairwise interaction U_pair and a long-ranged electrostatic term U_ii.
Physical interpretation of the double well
The electrostatic term is the Coulomb sum U_ii = ½ Σ_{i≠j} z_i z_j e²/r_ij, where z is the ionic charge and r_ij is the interatomic distance between the ions i and j. The Zr-O bonds increase and decrease in length in a symmetric way. As a net result, the centrosymmetric position of the oxygen atoms is a relative maximum of the Coulomb energy U_ii. The change in the Madelung potential caused by the tetragonal distortion is shown in Figure 7. It can be noticed that analogous terms are present in the TB model, and a similar interpretation is tempting. However, we now have the additional effects due to polarization, covalency, and charge redistribution.
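To make the RIM energetics concrete, a toy evaluation of U_pair + U_ii for a finite cluster of point charges is sketched below. The Born-Mayer parameters and charges are hypothetical placeholders, not the fitted values of Ref. 20, and a real crystal calculation would require an Ewald sum over the periodic lattice rather than a direct pair sum.

```python
# A toy sketch (assumption): the two rigid-ion-model terms, a Born-Mayer
# repulsion plus the Coulomb energy, summed directly over a finite cluster.
import numpy as np

E2_OVER_4PIEPS0 = 14.3996  # e^2/(4*pi*eps0) in eV*Angstrom

def rim_energy(positions, charges, A=1000.0, rho=0.3):
    e = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = np.linalg.norm(positions[i] - positions[j])
            e += A * np.exp(-r / rho)                            # U_pair
            e += E2_OVER_4PIEPS0 * charges[i] * charges[j] / r   # U_ii
    return e

# Hypothetical Zr-O pair, 2.2 Angstrom apart, with formal charges +4 and -2
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.2]])
print(f"{rim_energy(pos, [4.0, -2.0]):.2f} eV")
```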
The Zr-O bonds increase and decrease in length in a symmetric way. As a net result, the centrosymmetric position of the oxygen atoms is a relative maximum of the Coulomb energy U ii . The change in the Madelung potential caused by the tetragonal distortion is shown in Figure 7 It can be noticed that analogous terms are present in the TB model and a similar interpretation is tempting. However, we now have the additional effects due to polarization, covalency, and charge redistribution. Figure 7 (b) shows that the absolute value of the self-consistent equilibrium charge Q decreases on both species. Consequently, in this approximation, the on-site energy We can be more specific about the nature of the polarization. In the c structure, the first non-zero components of the electrostatic potential are V 0 and V 3 . The latter could, in principle, induce an octapole moment Q 3 on the anions. We truncated the multipolar expansion of the atomic multipole moments to the quadrupoles Q 2 therefore, within this approximation, the ions in the c structure are not polarized. Higher order terms can be included in the expansion, but the overall agreement of the results with both experiments and first-principle calculations demonstrates that the model is already capturing the important physics of the system.
As the anion sublattice is distorted, the symmetry lowering induces the ℓ = 1 and ℓ = 2 components of the potential, which couple the s and p oxygen atomic orbitals. The magnitude of the coupling, and therefore of the multipole moments, is controlled by the parameters ∆_spp and ∆_ppd. The latter, fixed in order to reproduce the electronic structure of the rutile phase, produces very weak quadrupole moments, whose contribution to the double well is negligible. The former controls the size of the dipole moments, whose symmetric distribution further minimizes the electrostatic energy [Figure 7(d)]. The total effect on the double well is shown in Figure 8.
Landau theory
The c ↔ t phase transition can be interpreted in terms of Landau theory. 53 In a subsequent paper we plan to explore the free energy surface at T > 0 with this formalism, so it is convenient to introduce it here to discuss the T = 0 results. Experimentally, the mechanism of this phase transition has been very controversial and a clear description is still missing. [54][55][56][57][58][59][60][61] Theoretically, Chan 62 suggested that a partial softening of an elastic constant is the driving force of this phase transition and, after symmetry considerations based on the elastic strains only, concluded that the phase transition must be of first order. We show here that the inclusion of the order parameter δ gives a second-order phase transition. A similar discussion has been given by Ishibashi and Dvořák. 63 According to Landau theory, the appropriate thermodynamic potential, which describes the relationship between the two phases of interest, is expanded in a Taylor series in one or more order parameters, in which the expansion coefficients are temperature dependent. The order parameters are non-zero in the low-symmetry phase and vanish in the high-symmetry one, thereby providing a unique way to differentiate the two phases. The terms involved in the Taylor expansion are invariants under the symmetry operations of the high-symmetry phase and can be identified using group theory.
In the case of zirconia, the c structure is unstable along the three crystallographic directions; therefore, the distortions along x, y and z have to be treated explicitly in the energy expansion. This suggests nine order parameters, defined in terms of the strain tensor ε and the sublattice distortions along the three axes, grouped into four symmetry-adapted bases which span the corresponding irreducible representations. A complete analysis involving all the order parameters will be done in a separate paper; here we simplify the total energy expansion by selecting one of the three possible directions of the tetragonal axis. Under this hypothesis, three order parameters are necessary to describe the c ↔ t phase transition of zirconia: δ, η and η_0. The high-temperature c phase has the full cubic symmetry m3m and the only degree of freedom is the hydrostatic strain η_0 = ε_xx + ε_yy + ε_zz. The low-symmetry t phase is defined by the distortion of the anionic sublattice δ, which we define as the amplitude of the X_2^- mode of vibration, and by the tetragonal strain η = 2ε_zz − ε_xx − ε_yy.
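As a small illustration of these definitions, the following Python sketch computes the hydrostatic and tetragonal order parameters from a given strain tensor (the example tensor is hypothetical):

    import numpy as np

    # Order parameters from a strain tensor, tetragonal axis along z.
    def order_parameters(eps):
        eta0 = eps[0, 0] + eps[1, 1] + eps[2, 2]       # hydrostatic strain
        eta = 2.0 * eps[2, 2] - eps[0, 0] - eps[1, 1]  # tetragonal strain
        return eta0, eta

    eps = np.diag([0.001, 0.001, 0.004])  # hypothetical small tetragonal strain
    print(order_parameters(eps))          # (0.006, 0.006)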
The three order parameters can be hierarchically classified according to the amount of symmetry breaking that they involve. The hydrostatic strain η_0 preserves the cubic symmetry of the crystal. The tetragonal strain η maintains the number of atoms in the primitive cell and lowers the symmetry to the point group 4/mmm, which still has the mirror symmetry operation perpendicular to the tetragonal axis. The tetragonal distortion δ breaks this symmetry operation and involves cell doubling. Therefore, according to Landau theory, δ is the primary order parameter, η is the secondary and η_0 is the tertiary one.
The potential energy is expanded as a power series in these order parameters around the equilibrium volume of the cubic phase V_0 (Figure 5): U(δ, η, η_0) = a_2 δ^2 + a_4 δ^4 + b_1 η δ^2 + b_0 η_0 δ^2 + c_1 η^2 + c_0 η_0^2. (18) The elastic constants c_0 and c_1 are proportional, respectively, to the bulk modulus and to C′ = (1/2)(c_11 − c_12) in the c phase described in the next section. The third-order term δ^3 is forbidden by symmetry; therefore this transition is of second order if a_2 goes negative.
The volume dependence of the order parameters can be studied by setting ∂U/∂η and ∂U/∂η_0 to zero. Both the ab initio and TB results (Figure 9) confirm the analytic expressions: η = −[b_1/(2c_1)] δ^2, η_0 = −[b_0/(2c_0)] δ^2. (19) These expressions show that the second-order strain terms of Eq. (18) are already proportional to δ^4 and therefore, within the chosen order of approximation, it is not necessary to include third-order terms in ε_ij. Moreover, from the static results it is clear that the description of the high-temperature stability of the c phase must go beyond the quasiharmonic approximation. The higher the temperature, the larger the volume and, according to Figure 9, the larger δ and η. Therefore, in a simple quasiharmonic picture, a higher temperature seems to favor the t phase with respect to c, in contradiction to the experimental observation.
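The minimization and back-substitution can be verified symbolically. The Python/sympy sketch below assumes the Landau form written out in Eq. (18) above (a reconstruction, not the paper's verbatim equation); it recovers the expressions of Eq. (19) and previews the renormalized quartic coefficient discussed below:

    import sympy as sp

    d, eta, eta0, a2, a4, c1, c0, b1, b0 = sp.symbols('delta eta eta0 a2 a4 c1 c0 b1 b0')
    U = a2*d**2 + a4*d**4 + b1*eta*d**2 + b0*eta0*d**2 + c1*eta**2 + c0*eta0**2

    eta_min = sp.solve(sp.diff(U, eta), eta)[0]     # -> -b1*delta**2/(2*c1)
    eta0_min = sp.solve(sp.diff(U, eta0), eta0)[0]  # -> -b0*delta**2/(2*c0)
    print(eta_min, eta0_min)

    # Substituting back renormalizes the quartic coefficient of delta:
    U_eff = sp.expand(U.subs({eta: eta_min, eta0: eta0_min}))
    print(sp.collect(U_eff, d))  # a2*d**2 + (a4 - b1**2/(4*c1) - b0**2/(4*c0))*d**4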
The parameters c_1 and c_0 are known from the elastic properties of the crystal and have been calculated independently (see next section). The coefficients a_2 and a_4 have been fitted to the double well of an undistorted, stress-free cubic crystal (in the sense η = 0 and η_0 = 0).
In a similar way, b_1 and b_0 have been fitted to the double well of a tetragonal crystal at V_0 (η_0 = 0, η ≠ 0) and of a cubic crystal near V_0 (η = 0, η_0 ≠ 0), respectively. Figure 10(a) shows the three curves used for the fitting procedure. The agreement is very good even far away from the reference volume of the energy expansion [Figure 10(b)]. The coupling between δ and the strains renormalizes the quartic coefficient of the expansion. To see this we substitute the relationships (19) back into Eq. (18): U = a_2 δ^2 + [a_4 − b_1^2/(4c_1) − b_0^2/(4c_0)] δ^4. The above equation shows that the coupling term can renormalize the fourth-order coefficient and could make it negative. In that case it would be necessary to truncate Eq. (18) at the sixth-order term in δ, therefore including the third-order terms in the strain.
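The fitting step itself is a linear least-squares problem in the basis {δ^2, δ^4}. The following Python sketch uses synthetic stand-in data with hypothetical coefficients in place of the computed energy curves:

    import numpy as np

    # Extract a2 and a4 from a sampled double well at eta = eta0 = 0.
    delta = np.linspace(-0.1, 0.1, 21)
    a2_true, a4_true = -50.0, 4000.0      # hypothetical coefficients
    U = a2_true * delta**2 + a4_true * delta**4

    A = np.column_stack([delta**2, delta**4])
    coeffs, *_ = np.linalg.lstsq(A, U, rcond=None)
    print(coeffs)  # recovers [-50., 4000.]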
These would then drive the phase transition, making it first order. 62,66 The numerical values of the coefficients (Table V) allow us to estimate the size of the coupling. We find that the coupling term is ≈ 20% of a_4, not large enough to change the sign of the fourth-order coefficient; therefore the 0 K calculations suggest that the phase transition is displacive and of second order.
The temperature dependence of the elastic constants might change this description, and the final answer will be given by high-temperature MD calculations, which are in progress. We compare our predictions with theoretical and experimental data in Table VI. The results of two other theoretical approaches, the Hartree-Fock and the PIB ones, are very different. As already mentioned in the Introduction, neither of these calculations predicted the correct relative energetics of the crystal structures. Elasticity is a property of the second derivative of the energy: a good description of the energy curves is a prerequisite for reliable elastic-constant calculations.
The fairly good agreement of our calculations with the experiments further indicates that the SC-TB model captures the main physics of the bonding. The bulk modulus, however, is seriously overestimated: this may not be an intrinsic limitation of the TB model, because it was fit precisely to the NFP-LMTO calculation, which similarly overestimates this quantity.
B. Phonon Spectra
In order to test the model further, as well as to give further insight into the spontaneous symmetry breaking of the c phase, we studied its vibrational properties. First-principles calculations 72,73 predict an imaginary frequency at the boundary of the BZ; this reinforces the idea that the phase transition is displacive and driven by the softening of an optic mode.
Our calculations were carried out with the TB model on a 96-atom unit cell. The eigenvalues and eigenvectors of the possible vibrational modes in that unit cell were found by diagonalising the dynamical matrix, which we calculated using the direct method. The procedure was as follows.
Within the harmonic approximation, the potential energy Φ is expanded to second order in powers of the atomic displacements u: Φ = Φ_0 + (1/2) Σ Φ_αβ(lκ; l′κ′) u_α(lκ) u_β(l′κ′). We use the notation of Maradudin et al. 74: κ and l label, respectively, the atom in the primitive cell and the position of the primitive cell with respect to some origin. The direct method consists in computing the force constants Φ_αβ via total-energy and force calculations.
In general, the atom κ in cell l is displaced by a small amount in direction α and the Hellmann-Feynman forces on the other atoms are recorded. These give directly the quadratic terms in the total-energy expansion. The force constants Φ_αβ can be related to the corresponding term of the dynamical matrix D via the usual relation: D_αβ(κκ′; k) = (M_κ M_κ′)^(−1/2) Σ_l Φ_αβ(0κ; lκ′) exp[i k · r(l)], where M_κ is the mass of the atom κ and k is a point in the BZ. The crystal symmetry can considerably reduce the number of necessary independent calculations. 75,76 The phonon spectra plotted along the high-symmetry direction ⟨100⟩ are shown in Figure 11. The main feature of the spectra is the imaginary frequency of the X_2^- mode of vibration, which corresponds to the tetragonal instability shown in Figure 6. As already mentioned, the tetragonal instability involves cell doubling; therefore the corresponding eigenvector appears at the BZ border of the c phase. The soft mode at the X point, ν_s = 5.1i THz, is the natural consequence of the negative curvature of the energy surface at δ = 0 (Figure 6).
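For illustration, here is a minimal Python version of the direct method on a toy one-dimensional chain, not on zirconia itself; a hypothetical destabilising on-site term is included so that one eigenvalue of the mass-weighted force-constant matrix is negative, giving an imaginary frequency analogous to the soft X_2^- mode:

    import numpy as np

    # Toy chain: nearest-neighbour springs k plus a destabilising on-site
    # force g*u (hypothetical), so that one mode comes out soft.
    def forces(u, k=1.0, g=0.2):
        f = np.zeros_like(u)
        for i in range(len(u) - 1):
            s = u[i + 1] - u[i]
            f[i] += k * s
            f[i + 1] -= k * s
        return f + g * u

    n, h = 3, 1e-6
    phi = np.zeros((n, n))  # force constants Phi_ij = -dF_i/du_j
    for j in range(n):
        up = np.zeros(n); up[j] = h
        um = np.zeros(n); um[j] = -h
        phi[:, j] = -(forces(up) - forces(um)) / (2 * h)

    masses = np.ones(n)
    D = phi / np.sqrt(np.outer(masses, masses))  # mass-weighted matrix
    w2 = np.linalg.eigvalsh(D)
    print(["%gi" % (-x) ** 0.5 if x < 0 else "%g" % x ** 0.5 for x in w2])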
Setting the dipolar polarizability of the anions to zero (Δ_spp = 0), the X_2^- mode is still soft, ν_s = 0.8i THz, but the force constant corresponding to the instability is much smaller. This is consistent with Figure 8, where the same effect is studied from the energetic point of view: the energy curve is concave at δ = 0 even when the oxygens are not polarizable.
The effect of the oxygen polarizability is evident in the T_1u IR-active mode, which involves the rigid displacement of the two atomic sublattices. The calculated vibration frequency is 7.9 THz when the anions are not polarizable and 6.3 THz when the dipolar degree of freedom is allowed. The closer agreement of the non-polarizable result with the DFT frequencies of 8.1−8.5 THz, together with the overestimation of the bulk modulus, suggests that the present model could slightly overestimate both the short-range repulsion between closed shells of electrons, responsible for the high bulk modulus, and the long-range polarization effects which make the T_1u frequency lower than the ab initio values. The results might be improved with a more accurate re-parameterization, but the physical interpretation of the ab initio results, which is the main objective of this analysis, is unlikely to change. Certain non-analytical terms in the dynamical matrix have been neglected, namely those relating to macroscopic polarization or the Berry phase. For this reason our calculations cannot reproduce the LO-TO splitting of 12 THz calculated by Detraux et al. 72 The non-analytical terms can be approximated by knowing the Born effective charge and the dielectric tensor, both of which could in principle be obtained from our model. This has previously been done in a TB framework, 77 although not for ZrO_2, and we plan to investigate the effect in the future. Since the valence electrons are treated explicitly within the SC-TB model, we also hope to be able to study the effects of point defects.
[Table caption fragments: bond integrals; parameters for the 12-atom, 6-atom, and 12-atom unit cells, respectively. δ denotes the internal degree of freedom of the t phase (see Fig. 1), β is the angle of the m cell in degrees, and x, y, z are the fractional coordinates of the non-equivalent sites in the m structure.] | 2018-12-15T05:51:50.014Z | 2000-02-13T00:00:00.000 | {
"year": 2000,
"sha1": "c75f636fb5f57c397d67a8631c0d34e999bc44ac",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0002185",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d0f01cd9459e15f40f06a1f3ec60f474d8731838",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Physics"
]
} |
71574969 | pes2o/s2orc | v3-fos-license | Study of Osteoporosis in chronic obstructive pulmonary disease
Abstract. Background: Chronic obstructive pulmonary disease (COPD) is a lung disease that is thought to result from chronic inflammation that may affect other organ systems. Evidence suggests that the prevalence of osteoporosis in patients with COPD is high and potentially important. Currently, the gold standard for assessing osteoporosis non-invasively is the dual energy X-ray absorptiometry (DEXA) scan. Purpose: We aimed to investigate the prevalence of osteoporosis in a population of patients with COPD and to determine the severity of osteoporosis in correlation with the degree of COPD. Methods: This study was conducted on 50 patients with COPD and 10 healthy subjects as a control group. Study subjects were divided into four groups: Group I included 10 healthy volunteers as a control group; Group II included 20 patients with moderate COPD; Group III included 22 patients with severe COPD; Group IV included 8 patients with very severe COPD. All subjects underwent detailed clinical history, thorough clinical examination, plain chest X-ray (postero-anterior view), ventilatory function tests (spirometry), and measurement of bone density using DEXA. Results: The results of this study revealed a significant reduction of body mass index (BMI) in the COPD group in comparison with the control group (p value < 0.05). As regards osteoporosis, its prevalence in the total COPD group was higher than in the control group and reached 26%, while osteopenia reached 54%. Comparison between the COPD degrees as regards bone mineral density (BMD) revealed that the prevalence of osteoporosis increases with the severity of COPD, from moderate to severe to very severe (20%, 27.3%, and 37.5%, respectively). Highly significant sta-
Introduction
Chronic obstructive pulmonary disease (COPD) is characterized by a progressive airflow limitation that is not fully reversible and is associated with an abnormal inflammatory response of the lung to noxious particles and gases [1]. A variety of systemic effects become apparent as the disease progresses.
Osteoporosis has been recognized as one of the systemic effects of COPD and debate continues on the precise mechanisms involved and on the options for treatment [2]. Osteoporosis is a systemic skeletal disease characterized by a decreased bone mineral density (BMD) and/or deterioration of the micro architecture, resulting in increased bone fragility and hence an increased susceptibility to fractures [3].
The gold standard for diagnosing osteoporosis is the dual energy X-ray absorptiometry (DEXA) scan. With this technique, the amount of mineral in the scanned area of bone is measured in grams and divided by the measured bone surface in square centimeters to obtain a definite bone mineral density (BMD). The BMD score of patients is expressed as a T score. The WHO definition of osteoporosis is based on this T score [3].
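As an illustration, a minimal Python helper implementing the T-score cutoffs used in this study (normal > −1, osteopenia −1 to −2.4, osteoporosis ≤ −2.5; see Patients and methods):

    # WHO-style classification of a DEXA T score.
    def classify_t_score(t):
        if t > -1.0:
            return "normal"
        if t > -2.5:
            return "osteopenia"
        return "osteoporosis"

    for t in (0.3, -1.8, -2.9):
        print(t, classify_t_score(t))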
It has been reported that BMD is lower in COPD patients than in healthy subjects [4]. One of the most obvious causes of osteoporosis in COPD patients is treatment with glucocorticoids, both as systemic therapy and as inhaled glucocorticoids [5,6]. Glucocorticoid use does not fully account for the low bone mineral density (BMD) and high prevalence of osteoporosis in COPD patients [7].
A number of factors have been suggested to account for these findings. COPD patients are often smokers, and they have impaired mobility due to decreased muscle mass and respiratory dysfunction. Furthermore, the group of patients with the most severe COPD also has lower body weight than the general population, and low BMI is itself a risk factor for increased mortality. All these factors are known to predispose to osteoporosis and can explain the increased prevalence [8].
Aim of the work
The aim of this study was to determine the frequency of osteoporosis in COPD patients, and to determine the severity of osteoporosis in correlation with degree of COPD.
Patients and methods
This study was conducted on 50 patients with COPD and 10 healthy subjects as a control group; they were selected from the Chest Department, Benha Faculty of Medicine, from December 2009 to April 2010. The study protocol was approved by the local ethics committee and informed consent was obtained from the patients. The age of the COPD group ranged from 40 to 68 years and that of the control group from 40 to 55 years; all subjects in both groups were males. Study subjects were divided into four groups. Group I included 10 healthy volunteers who had no symptoms or signs of any chest disease and normal ventilatory function tests, serving as a control group. Group II included 20 patients with moderate COPD. Group III included 22 patients with severe COPD. Group IV included 8 patients with very severe COPD. Groups II, III, and IV included patients who had symptoms of chronic airflow obstruction and who fulfilled the lung function criteria and classification set out by the National Heart and Lung Institute/World Health Organization Global Initiative for Chronic Obstructive Lung Disease guidelines [9]. Exclusion criteria included an FEV1/FVC of more than 70%, a clinical diagnosis of asthma, a history of childhood respiratory disorders, chest wall deformity, and known immunodeficiency. All subjects underwent the following: full clinical history, thorough clinical examination, plain chest X-ray (postero-anterior and lateral views), blood sampling for complete blood picture, ESR, liver and kidney functions, electrocardiogram, and ventilatory function tests; spirometry was performed using a 'Spirosift 5000' spirometer (Fukuda Denshi). Bone density measurement: for all patients and controls, bone mineral density of the lumbar spine was measured by dual energy X-ray absorptiometry (DEXA) using a Norland XR 46 apparatus. The mean BMD value of the second, third and fourth lumbar vertebrae (lumbar spine BMD) was used; relative to a young adult sex-matched reference population, a T score > −1 was considered normal, a T score between −1 and −2.4 was considered osteopenia, and a T score ≤ −2.5 was considered osteoporosis [10].
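For illustration only, a hedged Python sketch of the GOLD-style spirometric grading implied by the group definitions; the cutoffs below (moderate 50-79%, severe 30-49%, very severe < 30% of predicted FEV1, given FEV1/FVC < 0.70) follow the commonly cited GOLD scheme and should be checked against reference [9]:

    def gold_severity(fev1_pct_pred, fev1_fvc_ratio):
        if fev1_fvc_ratio >= 0.70:
            return "no airflow obstruction (excluded from the study)"
        if fev1_pct_pred >= 80:
            return "mild"
        if fev1_pct_pred >= 50:
            return "moderate"
        if fev1_pct_pred >= 30:
            return "severe"
        return "very severe"

    print(gold_severity(62, 0.61))  # moderate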
Statistical analysis
Statistical presentation and analysis of the present study were conducted using means and standard deviations (Student's t test) and the chi-square test, with SPSS v.16.
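A minimal Python/scipy sketch of the two tests named above, run on synthetic stand-in data rather than the study's actual measurements (only the 2x3 COPD counts below are taken from the reported prevalences):

    import numpy as np
    from scipy import stats

    # Unpaired t test, e.g. BMI of COPD patients versus controls (hypothetical).
    copd_bmi = np.array([21.5, 19.8, 22.1, 20.4, 18.9])
    control_bmi = np.array([24.2, 25.1, 23.8, 26.0, 24.7])
    t, p = stats.ttest_ind(copd_bmi, control_bmi, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.4f}")

    # Chi-square test on a table of normal/osteopenia/osteoporosis counts.
    table = np.array([[8, 2, 0],      # controls (hypothetical counts)
                      [10, 27, 13]])  # COPD patients (20%/54%/26% of 50)
    chi2, p, dof, _ = stats.chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")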
Results
From Table 5, there was: a statistically highly significant correlation between T score and FEV1%, FEF(25-75)%, and PEFR%; a statistically significant correlation between T score and FVC%; and a statistically non-significant correlation between T score and SVC% and the FEV1/FVC ratio.
Discussion
Osteoporosis continues to be a major problem in men with chronic illness. In men with chronic lung disease (CLD), osteoporosis may be particularly disabling because vertebral fractures reduce vital capacity, which further compromises ventilation [11]. Evidence suggests that the prevalence of osteoporosis in patients with COPD is high and potentially important [12,13].
In the present study, the age of the COPD patients ranged from 40 to 68 years (56.04 ± 7.14) and the age of the control group ranged from 40 to 55 years (49 ± 4.42), with no significant difference between the two groups (Table 1). However, body mass index was significantly reduced in the COPD group in comparison with the control group (Table 1). These results are in agreement with Iqbal et al. (2004), who found significantly lower values of BMI in patients with COPD than in healthy subjects [14].
It was also demonstrated that 80% of patients with COPD ranging from moderate to very severe had abnormal BMD: either osteopenia (T score −1 to −2.4), found in 54%, or osteoporosis (T score ≤ −2.5), found in 26% (Table 2 and Fig. 1).
These results are in agreement with the results of the cross-sectional study carried out by Jorgensen et al. (2007) on 62 COPD patients, who found that 78% of patients had low BMD, being either osteopenic or osteoporotic [15]. There was a direct relation between FEV1% and T score. [Figure 1 caption: comparison between COPD subgroups and the control group as regards the number and percentage of normal, osteopenic, and osteoporotic subjects.]
These results are also in agreement with those of the study carried out by Dubois et al. (2004) on 86 patients with COPD, which revealed that 28% of patients were normal as regards BMD, 50% were osteopenic, and 22% were osteoporotic [16].
The results of our study demonstrated that the prevalence of osteopenia and osteoporosis increased as the degree of COPD rose from moderate to severe and then to very severe (Table 2 and Fig. 1). These results can be explained by the fact that an increase in COPD severity is associated with an increase in the risk factors that lead to osteoporosis, such as a higher inflammatory load, greater use of corticosteroid treatment, a decline in pulmonary function, and a decrease in BMI.
These results are in agreement with the results of the study reported in [17], which revealed that the risk of osteopenia is 30% higher in moderate COPD and 70% higher in severe COPD than in healthy subjects, and that the risk of osteoporosis increases 2.1-fold in moderate COPD and 2.8-fold in severe COPD relative to healthy subjects.
In our work there was a statistically highly significant difference between the mean T score of the COPD group and that of the control group, meaning that BMD in the COPD group was lower than BMD in the control group (Table 3).
These results are in agreement with the results of the study carried out by the Lung Health Study Research Group (2000) on 412 COPD patients over a duration of 3 years, which revealed that BMD was much lower in COPD patients than in healthy subjects of the same sex and age [6]. These results are also in agreement with the results of the study carried out by McEvoy et al. (2003) on 312 COPD patients, which revealed lower BMD in the COPD group than in a control group of the same sex and age [18]. Table 4 shows that there is a direct relation between FEV1% and BMD in COPD patients who were osteoporotic.
These results are in agreement with Iqbal et al. (2004), who found that, in the COPD group with osteoporosis, BMD decreased in a linear pattern with the decrease in FEV1% [14]. They are also in agreement with the results of the cross-sectional study carried out by Jorgensen et al. (2007) on 62 COPD patients, who found that BMD had a direct relation with FEV1% in COPD patients who had osteoporosis [15]. In contrast, Vesto et al. (2002) found that BMD in COPD patients had no relation with the degree of COPD; in other words, there was no relation between T score and FEV1% [19].
In the present study, using Spearman's rho rank correlation between T score and different spirometric parameters in the COPD group, there was a statistically highly significant correlation between T score and FEV1%, FEF(25-75)%, and PEFR%; a statistically significant correlation between T score and FVC%; and a statistically non-significant correlation between T score and SVC% and the FEV1/FVC ratio (Table 5). Incalzi et al. (2004) found significant correlations between T score and all spirometric parameters except FEV1%, which showed a highly significant correlation, and SVC%, which showed no significant correlation [20].
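For illustration, a small Python/scipy sketch of the Spearman rank correlation on synthetic stand-in values:

    from scipy import stats

    # T score versus FEV1% (hypothetical values standing in for patient data).
    t_score = [-2.8, -2.1, -1.6, -1.2, -0.6, -0.2]
    fev1_pct = [28, 41, 55, 63, 78, 85]
    rho, p = stats.spearmanr(t_score, fev1_pct)
    print(f"rho = {rho:.2f}, p = {p:.4f}")  # strongly positive, as reported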
Conclusion
In conclusion, the present study showed that osteoporosis is highly prevalent in patients with moderate to very severe COPD, and that the prevalence and severity of osteoporosis increased with increasing COPD severity.
Recommendations
On the basis of the findings of this study, it is recommended that all patients with COPD be screened for osteoporosis in order to initiate treatment before fractures develop. Further studies are needed to determine the frequency and severity of osteoporosis in patients with mild COPD. | 2019-03-08T14:19:18.403Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "9303f24bc1731ed785e3d3d24a0ad5afc783d6cf",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.ejcdt.2013.01.009",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d78798591f7dcca9d665c3c53b1919ec2d706521",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7143473 | pes2o/s2orc | v3-fos-license | Metabolic engineering of tomato fruit organic acid content guided by biochemical analysis of an introgression line.
Organic acid content is regarded as one of the most important quality traits of fresh tomato (Solanum lycopersicum). However, the complexity of carboxylic acid metabolism and storage means that it is difficult to predict the best way to engineer altered carboxylic acid levels. Here, we used a biochemical analysis of a tomato introgression line with increased levels of fruit citrate and malate at breaker stage to identify a metabolic engineering target that was subsequently tested in transgenic plants. Increased carboxylic acid levels in introgression line 2-5 were not accompanied by changes in the pattern of carbohydrate oxidation by pericarp discs or the catalytic capacity of tricarboxylic acid cycle enzymes measured in isolated mitochondria. However, there was a significant decrease in the maximum catalytic activity of aconitase in total tissue extracts, suggesting that a cytosolic isoform of aconitase was affected. To test the role of cytosolic aconitase in controlling fruit citrate levels, we analyzed fruit of transgenic lines expressing an antisense construct against SlAco3b, one of the two tomato genes encoding aconitase. A green fluorescent protein fusion of SlAco3b was dual targeted to cytosol and mitochondria, while the other aconitase, SlAco3a, was exclusively mitochondrial when transiently expressed in tobacco (Nicotiana tabacum) leaves. Both aconitase transcripts were decreased in fruit from transgenic lines, and aconitase activity was reduced by about 30% in the transgenic lines. Other measured enzymes of carboxylic acid metabolism were not significantly altered. Both citrate and malate levels were increased in ripe fruit of the transgenic plants, and as a consequence, total carboxylic acid content was increased by 50% at maturity.
Tomato (Solanum lycopersicum) is an important food crop of high economic value and represents a model species for fleshy fruit physiology and ripening (Giovannoni, 2004;Mueller, 2009). The breeding history of tomato has been dominated by a focus on traits that benefit the grower, such as yield, storage characteristics, and field performance (Schuch, 1994;Giovannoni, 2006;Cong et al., 2008). As a result, there has been an unintentional loss of consumer quality traits such as flavor and nutritional value, and this has focused recent interest on the molecular genetics of such traits (Giovannoni, 2001;Causse et al., 2002Causse et al., , 2004Fraser et al., 2009;Mounet et al., 2009;Enfissi et al., 2010;Centeno et al., 2011). The accumulation of a range of soluble metabolites is critically important for both flavor and nutrition. Tomato fruit undergo substantial changes in their metabolite content and composition during ripening (Carrari et al., 2006). Fruit flavor is influenced both by volatile and nonvolatile metabolites (Buttery et al., 1987;Goff and Klee, 2006;Carli et al., 2009). Of the nonvolatile metabolites, the balance between sugars and acidic compounds is of major importance for flavor (Tieman et al., 2012). The perceived flavor of tomato fruit is a complex issue, and simple associations of metabolites with flavor traits do not always hold true. Nevertheless, a network analysis of several tomato genotypes demonstrated a strong correlation between tomato fruit flavor and the main acidic metabolites: carboxylic acids, Glu, and Asp (Carli et al., 2009).
The breeding or engineering of improved flavor by increasing the amount of specific metabolites in ripe fruit requires an understanding of the biochemical and molecular factors that regulate their accumulation during the ripening process. In this study, we focused on the accumulation of citrate and malate, the most abundant of the acidic metabolites in tomato fruit. Carboxylic acids are a major component of the osmotic potential that drives cell expansion through water uptake in the expansion phase of fruit growth (Liu et al., 2007). The concentrations of citrate and other carboxylic acids fall during this expansion phase as the cell contents are diluted (Baxter et al., 2005;Carrari et al., 2006). However, during the final stages of ripening, the level of citrate (and to a lesser extent other carboxylic acids) increases again such that it is present at high abundance in the ripe fruit. It is not known how these changes in organic acid levels are brought about. The maximal catalytic activities of enzymes of the tricarboxylic acid (TCA) cycle generally decline during fruit development, and there are no pronounced changes in activities during the later stages of ripening that correlate with the rise in organic acid levels .
The regulation of metabolite levels is a complex issue. At the most basic level, the amount of a metabolite will change because of a difference between influx into that metabolite pool and efflux from it (Kruger and Ratcliffe, 2009). In the case of citrate, for example, one could envisage that a change in the balance of flux through the citrate synthase and aconitase reactions could be responsible for a change in citrate levels. However, the situation is complicated because citrate is synthesized in the mitochondrion but accumulates in the vacuole . Thus, the transport of citrate between these subcellular compartments will also have a bearing on its rate of accumulation (Shiratake and Martinoia, 2007). Export from the mitochondrion is by counterexchange with other carboxylic acids that are imported. Therefore, the capacity of other parts of the TCA cycle will affect the rate of export of citrate. Active uptake into the vacuole is ultimately dependent on the tonoplast membrane potential generated by the proton-pumping V-type ATPase, although citrate transport appears to occur by facilitated diffusion rather than active uptake in tomato fruit (Oleski et al., 1987). Given that TCA cycle flux is critical for the synthesis of ATP, one can see the extent to which carboxylic acid metabolism as a whole and citrate accumulation are intertwined. Accumulation in the vacuole is also a function of both the influx and efflux of citrate from the vacuole (Shimada et al., 2006) and subsequent metabolism by cytosolic isoforms of aconitase and isocitrate dehydrogenase. Given this complexity and the variety of flux modes within carboxylic acid metabolism (Sweetlove et al., 2010), it is not obvious what the best strategy for engineering increased accumulation of carboxylic acids in fleshy fruits would be. This is reflected in the range of enzymes that have been proposed to control fruit citrate accumulation, including phosphoenolpyruvate carboxylase (Guillet et al., 2002), phosphoenolpyruvate carboxykinase (Famiani et al., 2005), citrate synthase (Sadka et al., 2000a), and aconitase (Sadka et al., 2000b;Degu et al., 2011).
To identify a suitable metabolic engineering strategy for increasing carboxylic acid content of tomato, we undertook a detailed biochemical study of an introgression line of tomato (Eshed and Zamir, 1995). A specific line with an introgressed segment from Solanum pennellii on chromosome 2 (IL2-5) was identified that showed reproducible increases in citrate and malate during the later stages of fruit development but had minimal changes in gross developmental characteristics such as fruit size and number. Although the introgressed segment contained many genes, and metabolites other than carboxylic acids were altered, we reasoned that a focused metabolic analysis would identify which of the proposed mechanisms for controlling fruit citrate accumulation was responsible for the increased citrate and malate contents in the introgression line. The effectiveness of this mechanism as a viable target for the metabolic engineering of carboxylic acid content could then be tested in transgenic plants.
Fruit of IL2-5 Have Increased Malate and Citrate Contents during Development
To identify a suitable line from the tomato 3 S. pennellii introgression population (Eshed and Zamir, 1995), we grew a subset of lines that were free from major phenotypic changes (Gur et al., 2004) and measured fruit carboxylic acid content in pericarp tissue using 1 H-NMR. We wished to identify lines in which carboxylic acids were increased at several stages of fruit development prior to the final ripe stage. This was because the biochemical changes that influence the accumulation of carboxylic acids during ripening are likely to be set prior to the final stage of maturation.
Based on our analysis, we focused on IL2-5, in which fruit citrate and malate contents were increased (Fig. 1). Fruits were analyzed at three developmental ages (30, 40, and 55 d after anthesis [DAA]). Tomatoes at 40 DAA were at breaker stage, and by 55 DAA, they were fully ripe. In comparison with the tomato 'M82' parent line, citrate was significantly increased at each of the developmental ages, with the difference between IL2-5 and cv M82 increasing during development such that ripe fruit of IL2-5 contained 60% more citrate than those of cv M82. Malate levels in IL2-5 were also significantly increased to a similar degree at 30 and 40 DAA, but in ripe fruit there was no difference compared with cv M82. Fumarate levels showed reciprocal changes to malate, with decreases in IL2-5 at 30 and 40 DAA, although the absolute amount of fumarate was a factor of 1,000 less than malate. Other carboxylic acids were not quantifiable by 1 H-NMR of fruit extracts. These metabolite changes were not due to an altered rate of development of IL2-5 fruit, because the main changes in carotenoid content, a key indicator of fruit developmental stage, were coincident in cv M82 and IL2-5 fruit (Supplemental Fig. S1).
To assess whether other parts of the central metabolic network were also altered in IL2-5 fruits, we determined the content of amino acids. Because amino acid synthesis draws on precursors from a number of different sectors of central metabolism, they are a useful indicator of the state of the central metabolic network. As one might expect given the number of potential genetic changes due to chromosomal introgression, changes in a range of amino acids were observed (Fig. 2). Of the 19 measured amino acids, 10 were significantly increased in IL2-5 fruit. Several of these (Glu, Thr, and Met) are either directly connected to carboxylic acid metabolism or draw on carboxylic acid metabolism as a source of carbon skeletons for their biosynthesis. However, in addition, there were also increases in amino acids that are synthesized from precursors from glycolysis (Ala, Ser, Gly, Val, Leu, and Ile) and the oxidative pentose phosphate pathway (Phe).
Carbohydrate Oxidation Fluxes in IL2-5 Fruit
To assess whether the increases in citrate and malate were accompanied by major changes in flux through the central network of carbon metabolism, we incubated discs cut from the pericarp of IL2-5 and cv M82 fruit at 40 DAA (at which stage the amounts of both citrate and malate were significantly greater in IL2-5) with positionally labeled [14C]Glc. Evolved 14CO2 was quantified at intervals over a 24-h period. The ratios of 14CO2 released from differently labeled Glc molecules reveal information about the relative rates of different routes of carbohydrate oxidation (ap Rees, 1980). The total amount of [14C]Glc metabolized by cv M82 and IL2-5 discs was not significantly different, suggesting that the capacity for uptake of Glc and the overall metabolic rate were similar between the two lines. The rate of oxidation of [14C]Glc to 14CO2 was linear between 2 and 12 h for each positionally labeled Glc (except for [2-14C]Glc, which was only linear for the first 5 h) for both cv M82 and IL2-5 pericarp discs, suggesting that a metabolic steady state was achieved (Supplemental Fig. S2). There were no significant differences in the relative rates of oxidation of any of the differently labeled positions of Glc between IL2-5 and cv M82 (Fig. 3), suggesting that there were no substantial changes in flux through the pathways of central metabolism. Of particular relevance is the release of label from the C3,4 and C2 positions of Glc, which occurs predominantly via the TCA cycle. The release of CO2 from C3,4 Glc relative to other carbon positions was the same in both tomato lines (Fig. 3, A-C). Similarly, the ratios of CO2 release from C2:C1 and C1:C6 were not significantly different between lines (Fig. 3, D and E). These results indicate that the flux through the oxidative steps of the TCA cycle relative to other major carbon oxidation pathways such as the oxidative pentose phosphate pathway is unchanged in pericarp discs of the IL2-5 line.
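To make the ratio construction concrete, here is a short Python sketch with hypothetical cumulative 14CO2 counts; only the arithmetic of forming the ratios, not the actual data, is illustrated:

    import numpy as np

    times = np.array([2, 5, 8, 12])           # h
    co2_c1 = np.array([120, 300, 480, 700])   # hypothetical dpm, [1-14C]Glc
    co2_c34 = np.array([90, 230, 370, 540])   # hypothetical dpm, [3,4-14C]Glc
    co2_c6 = np.array([60, 150, 240, 355])    # hypothetical dpm, [6-14C]Glc

    # The C3,4:C1 ratio reports TCA-cycle flux relative to other routes.
    print("C3,4:C1 =", co2_c34 / co2_c1)
    print("C1:C6   =", co2_c1 / co2_c6)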
Maximum Catalytic Activities of Enzymes of Carboxylic Acid Metabolism
The labeling experiments give a broad overview of the fluxes in central carbohydrate metabolism, but the complexity of the carbohydrate oxidation network means that it is impossible to ascribe the oxidation of a specifically labeled carbon atom exclusively to a single metabolic pathway. Moreover, the approach cannot account for complementary changes in flux through parallel pathways in different subcellular compartments. To address this latter issue, we measured the maximum catalytic activity of enzymes of carboxylic acid metabolism, both in isolated mitochondria and in crude tissue extracts, to assess the partitioning of metabolic capacity between mitochondria and other subcellular compartments. Mitochondria were isolated from 40-DAA fruit from IL2-5 and cv M82 plants, and the mitochondrial activities of seven of the eight enzymes of the TCA cycle were measured. There were no significant differences in the activities of these enzymes in IL2-5 mitochondria compared with cv M82 (Fig. 4A). We also assessed the total cellular activities of detectable enzymes of carboxylic acid metabolism in cases where there are known to be extramitochondrial isoforms (Fig. 4B). Total aconitase activity was significantly lower in IL2-5 fruit (about one-third of the activity in cv M82 fruit). The other four enzymes measured were not significantly changed. Given that aconitase activity in isolated mitochondria was the same in the two lines, this result indicates that the cytosolic isoform of aconitase is present at substantially lower levels in IL2-5 relative to cv M82.
Metabolic Engineering of Carboxylic Acid Content of Tomato Fruit
Aconitase catalyzes the conversion of citrate to isocitrate, and it follows that a reduction in the activity of aconitase could lead to an accumulation of citrate. To examine whether the reduction of total aconitase activity is brought about at the transcriptional level, and therefore would be a good target for genetic engineering, we assessed aconitase transcript levels using semiquantitative reverse transcription (RT)-PCR. The tomato genome contains two genes encoding aconitase (Kamenetzky et al., 2010), Solyc07g052340 and Solyc12g005860. The two genes show a high degree of similarity (the predicted complementary DNA [cDNA] sequences are 88% identical), and both are most similar to ACONITASE3 (ACO3) in Arabidopsis (Arabidopsis thaliana; Bernard et al., 2009). Accordingly, we refer to them as SlAco3a and SlAco3b, respectively. The RT-PCR results suggest that the SlAco3a transcript was slightly reduced in IL2-5 fruit (at 40 DAA) in comparison with cv M82 (Fig. 5). SlAco3b showed a less consistent pattern, with an apparent decrease in abundance in some IL2-5 samples but not others (Fig. 5). [Figure 3 caption, displaced here by extraction: The 14CO2 released by metabolism was monitored at intervals throughout a 24-h incubation period and quantified by liquid scintillation counting; ratios of cumulative 14CO2 release from different combinations of positionally labeled Glc are shown for IL2-5 (squares, solid lines) and cv M82 (triangles, dotted lines); data are ratios of the mean ± SE (n = 4); there were no significant differences (repeated-measures ANOVA) between cv M82 and IL2-5 for any combination of positionally labeled substrates (F < 3.78; degrees of freedom = 1.6; P > 0.10).]
To aid in the choice of which Aco gene to target for metabolic engineering, it would be helpful to know the subcellular localization of the respective gene products. In Arabidopsis, there are three aconitase genes: ACO1 encodes a cytosolic isoform, while the products of ACO2 and ACO3 are located in mitochondria (Bernard et al., 2009). The aco-1 mutant allele in S. pennellii (which corresponds to SlAco3b) is deficient in both cytosolic and mitochondrial aconitase protein, suggesting that in tomato this gene product is dual targeted (Carrari et al., 2003). To investigate the subcellular targeting of both tomato Aco proteins, the SlAco genes were transiently expressed as C-terminal GFP fusions in tobacco (Nicotiana tabacum) leaves (Fig. 6). The two gene products showed clear differences in subcellular localization. SlAco3a colocalized closely with mitochondria-targeted mCherry, indicating an exclusively mitochondrial localization. In contrast, SlAco3b has a more complex distribution, appearing throughout the cytosol, but it was also present in punctate bodies colocalizing with mitochondria-targeted mCherry. This suggests that SlAco3b is dual targeted to both cytosol and mitochondria and is consistent with the decrease in both cytosolic and mitochondrial aconitase in the aco-1 mutant.
The changes in Aco transcripts suggest that Aco genes would be good targets for the engineering of altered carboxylic acid levels in tomato fruit and that to replicate the decrease in cytosolic aconitase activity in IL2-5, SlAco3b should be repressed. Therefore, we characterized fruit enzyme and metabolite levels in transgenic plants expressing an 800-bp antisense fragment of SlAco3b (van der Merwe et al., 2010). These transgenic plants were originally generated as part of a study of the role of the TCA cycle in root metabolism (van der Merwe et al., 2010), and their fruit have not been previously characterized.
Two of the transgenic lines (ACO-19 and ACO-38) were grown and allowed to set fruit. Fruit growth, size, and number in each transgenic line were indistinguishable from the wild type. Tomatoes were harvested at breaker (40 DAA) and ripe (55 DAA) stages and extracted for transcript, enzyme, and metabolite determinations. Unsurprisingly, given the high degree of sequence similarity between the two SlAco genes, both SlAco3a and SlAco3b transcripts were significantly decreased in the two transgenic lines in fruit at both 40 and 55 DAA (Fig. 7). The exception was line ACO19, in which SlAco3b was significantly altered in 40-DAA fruit but not in fruit from the later developmental time point. However, aconitase activity was decreased by about 30% compared with the wild type in both lines at both stages of fruit development (Table I). Although the activity of aconitase increased developmentally in wild-type and transgenic lines between the 40-and 55-DAA stages, the relative decrease in activity in the transgenic lines was maintained at around 30%. None of the other measured enzymes of carboxylic acid metabolism were significantly different in the transgenic fruit compared with the wild type at either stage of development (Table I).
To establish whether the transgenic manipulation had the predicted effect on citrate levels, fruit carboxylic acids were quantified by gas chromatography-mass spectrometry (GC-MS; Table II). In ripe fruit, citrate was significantly increased in both transgenic lines by about 40%, confirming successful metabolic engineering. In contrast to the introgression line, however, this increase in citrate was not apparent at the earlier developmental stage (40 DAA) in the transgenic fruit. The increases in citrate in ripe fruit were roughly proportional to the decrease in aconitase levels in both the introgression and transgenic lines. In IL2-5, total aconitase activity decreased by 66% and citrate increased by 60%. In the transgenic lines, aconitase decreased by approximately 30% and citrate increased by 40%.
In the transgenic fruit, other carboxylic acids were also significantly altered compared with the wild type in ripe fruit but, as with citrate, not at the early 40-DAA stage. Malate increased and fumarate and succinate decreased (Table II). These changes were similar to those seen in the introgression line at 40 DAA. However, the changes in malate and fumarate in the introgression lines were not apparent in ripe fruit, indicating that the developmental timing of the perturbation of this sector of carboxylic acid metabolism is different in the transgenic lines. The fact that the transgenic lines were not an exact biochemical phenocopy of the introgression lines is to be expected, given the presence of many other background genetic changes in the introgression lines and differences in the timing and extent of change of aconitase activity in the transgenic lines. Nevertheless, in quantitative terms, the changes in fumarate and malate were proportional to the decrease in aconitase activity in both the introgression and transgenic lines. Fumarate decreased proportionately to aconitase: fumarate decreased by 54% in the introgression line and an average of 37% in the transgenic lines (approximately matching the 66% and 30% decreases of aconitase, respectively). Malate increased by 120% in the introgression fruit and an average of 76% in the transgenic fruit, meaning an increase of roughly twice the decrease in aconitase activity. Total carboxylic acid (malate + citrate + succinate + fumarate) increased from an average of 99 mg g⁻¹ fresh weight in wild-type fruit to 140 and 157 mg g⁻¹ fresh weight in ACO-19 and ACO-22 fruit, respectively. [Figure 6 caption, displaced here by extraction: Subcellular localization of SlAco3a and SlAco3b; gene constructs consisting of SlAco genes fused C-terminally to GFP and driven by the Arabidopsis UBIQUITIN10 promoter were transiently expressed in tobacco leaves by agroinfiltration; GFP fluorescence was colocalized with mitochondrially targeted mCherry (mito-mCherry); GFP fluorescence of SlAco3a-GFP (A) and SlAco3b-GFP (B) with the mito-mCherry signal is indicated; chlorophyll autofluorescence is also shown; bars = 20 μm.]
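The quantitative comparisons above can be checked with a few lines of arithmetic (all values taken from the text):

    # Proportionality of citrate increase to aconitase decrease.
    aco_drop_il, citrate_up_il = 0.66, 0.60   # introgression line IL2-5
    aco_drop_tg, citrate_up_tg = 0.30, 0.40   # transgenic lines (average)
    print(citrate_up_il / aco_drop_il, citrate_up_tg / aco_drop_tg)  # both ~1

    # Total carboxylic acid content at maturity, mg per g fresh weight.
    wt, aco19, aco22 = 99.0, 140.0, 157.0
    print((aco19 + aco22) / 2 / wt)  # ~1.5, i.e. the ~50% increase reported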
Decreased Activity of Cytosolic Aconitase Correlates with Altered Carboxylic Acid Content in Fruit of an Introgression Line
Fruit of the selected introgression line, IL2-5, contained elevated levels of citrate and malate and a decrease in fumarate at breaker stage relative to the cv M82 parent line. The increase in citrate persisted until ripeness under our growth conditions, although this trait may not be stable under all conditions because no increase in citrate was recorded in the ripe fruit of this introgression line when grown in the field (Schauer et al., 2006). We found no evidence of major changes in fluxes through relevant metabolic pathways in the IL2-5 fruit, reflecting the nonintuitive relationship between metabolite levels and flux (Kruger and Ratcliffe, 2009).
There was, however, a significant decrease in the maximum catalytic activity of one of the enzymes of carboxylic acid metabolism: aconitase. This change in activity was not apparent in isolated mitochondria, suggesting that it was a cytosolic isoform of aconitase that was altered. Analysis of transiently expressed GFP fusions of the two tomato aconitases in tobacco leaves showed that while SlAco3a is localized exclusively in mitochondria, SlAco3b appears to be dual targeted to both mitochondria and cytosol. There is a precedent for this in yeast, in which aconitase is thought to be dual targeted by inefficient import into mitochondria or by release of the mitochondrially imported enzyme back into the cytosol (Regev-Rudzki et al., 2005).
The relationship between aconitase activity and genotype in the introgression line is not direct, because the two aconitase genes lie on chromosomes 7 and 12 and therefore are not within the introgressed segment on chromosome 2. RT-PCR suggests a slight decrease in expression of the two SlAco genes, so perhaps the introgressed region in IL2-5 contains a transcriptional regulator of the aconitase gene. This possibility is supported by the identification of several enzyme quantitative trait loci in the same introgression line population in which the introgressed region does not contain the structural gene of the enzyme concerned (Steinhauser et al., 2011).
Aconitase Activity Is a Determinant of Fruit Citrate Levels
A decrease in the activity of aconitase provides an obvious link to the accumulation of citrate, its substrate. Perturbation of the TCA cycle could also provide an explanation for changes in other carboxylic acids. Further evidence that the change in citrate is directly linked to aconitase activity and not some other background effect in the introgression line was provided by direct manipulation of aconitase activity in transgenic plants. In fact, the increase in citrate in ripe fruit was quantitatively proportional to the decrease in aconitase in both the introgression and transgenic lines, suggesting that aconitase activity is a major determinant of ripe fruit citrate levels. In support of this, pharmacological inhibition of aconitase in lemon (Citrus limon) fruit also led to an increase in citrate levels (Degu et al., 2011). Aconitase activity is also linked to citrate levels in other tissues in tomato: leaf citrate levels were also increased in the aco-1 mutant (Carrari et al., 2003). However, it is worth noting that, in the transgenic lines, the decreased aconitase activity at the earlier fruit developmental stage (40 DAA) did not result in a significant increase in citrate level. This suggests that other factors, in addition to aconitase activity, influence fruit citrate level. The difference in the relationship between aconitase and citrate at this developmental stage in transgenic versus introgression line fruit is most likely due to the difference in background genotype (cv Moneymaker versus cv M82, respectively). Citrate synthase might also be expected to influence citrate levels, and this was demonstrated in the leaves of transgenic tomato plants with reduced citrate synthase . However, there was no direct relationship between citrate synthase activity and citrate levels in arsenite-treated citrus (Sadka et al., 2000a). Transgenic manipulation to decrease the activity of NAD-isocitrate dehydrogenase, the enzyme immediately downstream of aconitase, had no effect on tomato leaf citrate levels (Sienkiewicz-Porzucek et al., 2010). It appears that there is not a simple relationship between carboxylic acid levels and the activity of the enzymes that catalyze their interconversion . This disconnect between the metabolite levels and maximum catalytic activities of enzymes extends throughout the network of central carbon metabolism ; therefore, the effect of altered aconitase activity on citrate levels may be considered somewhat exceptional.
Aconitase Activity Also Affects the Accumulation of Other Carboxylic Acids
In addition to increasing citrate content in ripe fruit, transgenic suppression of aconitase also led to a substantial increase in malate as well as decreases in succinate and fumarate. Perturbation of several carboxylic acids by manipulation of a single TCA cycle enzyme is not uncommon (Araújo et al., 2012) and is a reflection of the interconnected nature of carboxylic acid metabolism. Similar changes in carboxylic acid levels were seen in the introgression line, where the change in aconitase activity was restricted to the cytosol.
Interpretation of these changes is complicated by the compartmentation of TCA metabolism and may not be directly related to mitochondrial carboxylic acid metabolism. For instance, the levels of predominantly vacuolar carboxylic acids such as citrate and malate are influenced by the rate of export of these metabolites from mitochondria. In heterotrophic Arabidopsis cells, these export fluxes are 1 order of magnitude lower than the mitochondrial TCA cycle fluxes . Thus, changes in the accumulation rate of citrate and malate can be caused by proportionally small changes in mitochondrial TCA cycle flux that may be within the error range of the flux estimate (and therefore undetectable). This most likely explains the lack of detectable change of overall carboxylic acid oxidation in the introgression line fruit and further suggests that flux through the cytosolic pathway of citrate metabolism is low in relation to the mitochondrial pathway. Alternatively, there could be a compensatory increase in flux through the mitochondrial pathway (Morgan et al., 2008).
The opposing changes in fumarate and malate may also be explained by the compartmentation of these metabolites. In mitochondria, the interconversion of fumarate and malate is close to equilibrium, so one would expect the levels of these two metabolites to follow one another. The observed decreased fumarate and increased malate levels in both the transgenic and introgression line fruit reflects the fact that the measured malate is mainly extramitochondrial (vacuolar). Most likely, an increased efflux of malate from the mitochondrion leads to increased accumulation of malate in the vacuole. Malate concentration in the mitochondrion is probably decreased, in line with the decreased fumarate content.
The function of cytosolic aconitase in ripening fruit is unclear. One suggestion is that it is involved in the metabolism of citrate released from the vacuole to provide an entry point into amino acid metabolism or the g-aminobutyrate shunt in lemon (Degu et al., 2011). However, in tomato fruit, citrate is accumulating at the phase of ripening under consideration, so extensive efflux of citrate from the vacuole seems unlikely. An alternative possibility is that the metabolism of citrate in the cytosol contributes to cytosolic NADPH provision by providing a substrate for the NADP-dependent isocitrate dehydrogenase or is important for the generation of carbon skeletons for the synthesis of Glu and Asp, both of which accumulate substantially in the later phases of tomato ripening (Baxter et al., 2005).
CONCLUSION
This study demonstrates that individual lines of genetic mapping populations can provide useful information to guide metabolic engineering strategies. Although such lines contain relatively large regions of introgressed DNA from a genetically distinct parental line, detailed biochemical analysis can pinpoint the main point of metabolic disturbance and highlight potential candidate proteins that can be tested in a targeted manner in transgenic plants. Here, the introgression line allowed us to focus specifically on aconitase among myriad possible targets for manipulation of the accumulation of carboxylic acids in tomato fruit. One could envisage further refinement of the transgenic manipulation by using fruit-specific promoters that are more finely tuned to the appropriate developmental stage and take into account the variations in metabolism within different fruit tissues (Moco et al., 2007;Matas et al., 2011).
Seeds of tomato (Solanum lycopersicum 'M82') and introgression line 2-5 (incorporating a segment from Solanum pennellii) were kindly supplied by the Tomato Genetics Resource Centre. Transgenic tomato 'Moneymaker' plants expressing an antisense aconitase construct were described previously (van der Merwe et al., 2010). Plants were grown in a 16-h photoperiod at 22°C to 23°C day temperature/20°C to 22°C night temperature and with supplementary lighting to maintain an irradiance of 250 to 400 μmol m⁻² s⁻¹. Plants were grown in a standard potting compost supplemented with slow-release fertilizer. Tomato feed was applied during flowering and fruit set, and a 0.5% (w/v) calcium chloride solution was sprayed directly onto all developing fruit weekly to help control blossom end rot.
Analysis of Carboxylic Acids by 1 H-NMR
Freeze-dried pericarp tissue was extracted in 70% tetradeuteromethanol/30% deuterated water exactly as described (Le Gall et al., 2003). 1 H-NMR spectra were recorded at 20°C on a Varian Unity Inova 600 spectrometer using a 5-mm HCN triple resonance z-gradient probe and the standard Varian pulse sequence with a relaxation delay of 2 s, including a presaturation pulse to suppress the residual water signal, a 90° pulse angle, a spectral width of 10 ppm, and a 4-s acquisition time. Tetradeuteromethanol was used for the internal lock signal, and 320 transients were collected for each spectrum. Spectra were processed and analyzed using NUTS (Acorn NMR). A 1-Hz line broadening was applied before Fourier transformation, and peaks were integrated manually within NUTS.
Analysis of Carboxylic Acids by GC-MS
Frozen pericarp tissue powder was extracted in chloroform-methanol, and carboxylic acids were quantified by GC-MS as described previously (Roessner et al., 2001), following a procedure optimized for tomato tissue (Roessner-Tunali et al., 2003).
Analysis of Amino Acid Content
Frozen pericarp tissue powder was extracted in 0.1 M HCl, and proteinogenic amino acids were quantified by HPLC (Bruckner et al., 1995).
Isolation of Mitochondria
Fresh tomato pericarp tissue was roughly chopped, and 50 to 100 g was placed in a square-section polycarbonate container with 200 to 300 mL of extraction medium (0.3 M Suc, 25 mM tetrasodium pyrophosphate, 25 mM TES-KOH [pH 7.5], 2 mM EDTA, 10 mM KH₂PO₄, 1% [w/v] polyvinylpyrrolidone-40, 1% [w/v] bovine serum albumin, and 20 mM ascorbic acid). The sample was homogenized using multiple short bursts (less than 1 s) of a Status Polytron blender (Kinematica) on a low setting (4). The sample was filtered through one layer of Miracloth and two layers of muslin. The filtrate was centrifuged at 1,085g for 5 min. The pellet was discarded, and the supernatant was centrifuged for 15 min at 23,500g. The pellet was resuspended in wash buffer (0.3 M Suc, 10 mM TES-KOH [pH 7.2], and 0.1% [w/v] bovine serum albumin) to an approximate final volume of 2 mL before being layered onto a 35-mL gradient of 0% to 4.4% (w/v) polyvinylpyrrolidone-40 in 18% (v/v) Percoll. The gradient was centrifuged at 40,000g for 40 min, and the mitochondria were removed from the band near the bottom of the gradient using a 5-mL pipette. The mitochondria were diluted with wash buffer and centrifuged twice at 27,000g for 15 min. The final pellet was resuspended in a minimal volume of wash buffer.
Enzyme Assays
Maximum catalytic activities of the enzymes of carboxylic acid metabolism were measured spectrophotometrically in desalted extracts of pericarp tissue or in isolated mitochondria according to Morgan et al. (2008).
Statistical Analyses
Student's t test (two tailed, unequal variance) was used to determine the significance of differences in enzyme activities and metabolite levels between tomato lines. The mean ± SE of ratios of ¹⁴CO₂ release from positionally labeled Glc were subjected to log transformation prior to repeated-measures ANOVA based on type III sums of squares (SPSS/PASW Statistics 18; IBM). The homogeneity of variance of the dependent variable in repeated-measures ANOVA was confirmed using Levene's test prior to assessing the significance of differences between plant lines. Only statistical differences for which P < 0.05 were considered significant.
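For readers wishing to reproduce the first of these tests, the following is a minimal sketch (not the authors' code; the replicate values are placeholders rather than data from this study) of a two-tailed, unequal-variance Student's t test in Python:

```python
# Two-tailed Student's t test with unequal variances (Welch's form), as used
# to compare enzyme activities and metabolite levels between tomato lines.
# The replicate values below are hypothetical placeholders.
from scipy import stats

m82_values = [4.1, 3.8, 4.4, 4.0, 3.9]    # hypothetical replicates, cv M82
il25_values = [5.6, 5.9, 5.2, 6.1, 5.7]   # hypothetical replicates, IL2-5

# equal_var=False selects the unequal-variance (Welch) form of the test
t_stat, p_value = stats.ttest_ind(m82_values, il25_values, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("difference considered significant (P < 0.05)")
```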
Supplemental Data
The following materials are available in the online version of this article.
Supplemental Figure S1. Carotenoid content of cv M82 and IL2-5 fruit during development.
Supplemental Figure S2. Rate of oxidation of [¹⁴C]Glc to ¹⁴CO₂ by tomato pericarp discs from cv M82 and IL2-5 fruits is linear over the first 12 h. | 2017-06-21T12:09:14.291Z | 2012-11-19T00:00:00.000 | {
"year": 2013,
"sha1": "2930ba468635928f14286bc55d20df9490293d76",
"oa_license": "CCBY",
"oa_url": "http://www.plantphysiol.org/content/plantphysiol/161/1/397.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "759937b2a216933c730202e5e5f139ab6be89872",
"s2fieldsofstudy": [
"Biology",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
251493150 | pes2o/s2orc | v3-fos-license | On Achievable Rates of Evenly-Spaced Discrete Uniform Distributions in the IM/DD Broadcast Channel
In optical wireless communications, a broadcast channel (BC) employing intensity modulation and direct detection (IM/DD) is often modelled as a peak-constrained BC. A closed-form expression for the capacity region of the peak-constrained BC is not known. This paper presents an analytical capacity inner bound for the peak-constrained Gaussian BC achieved by a class of discrete input distributions, specifically, the evenly-spaced discrete uniform distribution (ESDU). In contrast to the continuous input distributions that provide the benchmark, ESDU is more promising for application in peak-constrained Gaussian channels. The newly obtained capacity inner bound is easily computable and is numerically shown to be tighter than the benchmark. In addition, we present a newly developed analytical upper bound for the ESDU rate, which is tight in all tested settings.
I. INTRODUCTION
Optical wireless communication (OWC) has witnessed increased research attention over the past decade, owing to the advent of LiFi [1] in addition to emerging applications of free-space optics, visible light communications, and ultraviolet wireless communications [2]- [4]. This renewed interest in optical wireless communications has led to increasing efforts in studying the capacity achieved in an OWC system using intensity modulation and direct detection (IM/DD), often modeled as a peak-constrained Gaussian channel with a single user [5]- [7]. Driven by the growing interest in applying OWC in practice [8]- [10], recent developments in this area have led to new results for peak-constrained multi-user channels, including the multiple-access channel [11], the interference channel [12], and the broadcast channel (BC) [13], which can model the uplink and the downlink scenarios in OWC. However, the study of the capacity of peak-constrained multi-user channels is hindered by a major challenge.
For a peak-constrained Gaussian channel, it is known that the capacity is achieved by a discrete input distribution [14]. However, analyzing the entropy of the mixture of a Gaussian and a discrete distribution is rather challenging. As a consequence, the current literature resorts either to evaluating the capacity numerically [6], or to bounding it using continuous distributions such as uniform, truncated-exponential, or truncated Gaussian (TG) distributions, leading to analytical results that exhibit a large gap to capacity [5], [13] (see [15] for a survey on the topic). Moreover, although it is known that a discrete uniform distribution over an evenly-spaced alphabet (ESDU) provides a good approximation of the capacity of the peak-constrained Gaussian channel [6], there is no closed-form expression for the rate achieved by a general ESDU (or a good rate bound thereon). Consequently, the effectiveness of ESDU in the peak-constrained Gaussian channel is only known through numerical results for specific settings [6].
Due to the challenge of analyzing discrete channel inputs, the study of the capacity of the peak-constrained Gaussian BC has long been hindered, as observed in the current literature. The earliest study we find in the literature on the capacity of the peak-constrained Gaussian BC is [13], which developed capacity inner and outer bounds. After [13], little new knowledge about the capacity of the peak-constrained Gaussian BC appears in the literature, despite the importance of the peak-constrained Gaussian BC for OWC. The recent literature on the peak-constrained BC tends to accept the rate disadvantage of continuous input distributions in exchange for the advantage of having simple achievable rate expressions. For example, the recent works [8], [10], [16] are all based on analytical single-user capacity lower bounds that are derived for continuous input distributions and thus have inherited their large gap to capacity. This highlights the importance of deriving simple analytical results for the achievable rates of discrete input distributions towards developing tighter capacity bounds for the peak-constrained Gaussian BC.
In this paper, we aim to address this challenge. We propose to adopt ESDU in the peak-constrained Gaussian BC and study its achievable rate. More specifically, analytical results are derived for the peak-constrained Gaussian point-to-point (P2P) channel with an ESDU input, leading to an upper and a lower bound on its achievable rate, which are then used to obtain an analytical inner bound for the peak-constrained Gaussian BC. This ESDU-based BC inner bound is then examined numerically in comparison with the benchmark inner bound from [13], which is based on a TG distribution, and is shown to achieve a larger rate region. In addition, the numerical results show that the obtained ESDU rate upper bound is remarkably tight in all tested settings.
The rest of the paper is organized as follows. Sec. II describes the channel model and the objective. The achievable rate analysis of the ESDU and the main results of the paper are given in Sec. III. Then Sec. IV numerically examines the obtained results. Finally, Sec. V concludes this paper and introduces possible future extensions.
Notations: Throughout the paper, we use R₊ to denote the set of nonnegative real numbers, I(·; ·) to denote the mutual information between two random variables, and H(·) and h(·) to denote the entropy and differential entropy of a random variable, respectively. We also use H(p) to denote the binary entropy function, i.e., for p ∈ [0, 1], H(p) = −p log(p) − (1 − p) log(1 − p). We use log(·) to denote the base-2 logarithm, P[·] to denote the probability of a random event, P_X to denote the probability distribution of X, and E[·] and V[·] to denote the expectation and the variance of a random variable, respectively. We write X ∼ Unif([a, b]) to indicate that a continuous random variable, X, is uniformly distributed on the interval [a, b], and X ∼ Unif(X) to indicate that a discrete random variable, X, is uniformly distributed on the alphabet X. Specifically, we write X ∼ ESDU(A, K) to indicate that a discrete random variable, X, is uniformly distributed over the set {iA/(K−1)}_{i=0}^{K−1}, an alphabet with K elements spanning [0, A] with a spacing of A/(K−1). N(µ, σ²) defines a Gaussian random variable with mean µ and variance σ², and Q(x) = ∫_x^∞ (1/√(2π)) e^{−t²/2} dt is the standard Gaussian tail function.
II. CHANNEL MODEL AND OBJECTIVE
Consider a two-user OWC BC employing an IM/DD scheme. This can be modeled as a peak-constrained Gaussian BC, defined through the input-output relations Y_i = X + Z_i, i = 1, 2, where the transmitter broadcasts X to receivers 1 and 2 through independent noisy transmission links, and the additive noise at receiver i is Gaussian, i.e., Z_i ∼ N(0, σ_i²). Without loss of generality (WLOG), we suppose σ₁ < σ₂. The transmit signal satisfies nonnegativity and peak constraints, so that X ∈ [0, A].
Using this channel, the transmitter wants to send messages M₁ and M₂ with rates R₁ and R₂ to receivers 1 and 2, respectively. Achievable rate pairs (R₁, R₂) and the capacity region of this BC are defined in the standard Shannon sense, see [17, Ch. 5]. Note that this BC belongs to the family of degraded BCs, since, given Z̃₂ ∼ N(0, σ₂² − σ₁²) independent of Z₁, the output Y₁ + Z̃₂ has the same distribution as Y₂. Thus, the capacity region of this two-user BC is the set of rate pairs (R₁, R₂) that satisfy R₁ ≤ I(X; Y₁ | U) and R₂ ≤ I(U; Y₂) [17, Chap. 5] for some P_{U,X}, where U is an auxiliary random variable that conveys the message to receiver 2. This region can be achieved through superposition coding with successive interference cancellation (SC-SIC).
The main challenge is to determine U and X such that the peak constraint is satisfied. The objective of this work is to provide an analytical lower bound on this capacity region when X follows a discrete distribution. To this end, we propose a transmission scheme that combines SC-SIC and an ESDU, and analyze its achievable rate to obtain a closed-form expression. Details are given next.
III. PROPOSED SCHEME AND ACHIEVABLE RATE
We propose to adopt SC-SIC in the two-user BC (1) while designing X so that X ∼ ESDU(A, K) for some K ≥ 2. The construction of X is as follows. Given some integers K₁ ≥ 1 and K₂ ≥ 1 with K = K₁K₂, let X₁ and X₂ be independent, with X₁ uniformly distributed over the K₁ points {iA/(K−1)}_{i=0}^{K₁−1} and X₂ uniformly distributed over the K₂ points {jK₁A/(K−1)}_{j=0}^{K₂−1}. The random variables X₁ and X₂ will be used to encode M₁ and M₂, respectively, using an independent and identically distributed (i.i.d.) random code, i.e., the codeword for M_i is an i.i.d. sequence of realizations of X_i. Finally, the transmit signal is constructed by adding the codewords, and hence X = X₁ + X₂ ∼ ESDU(A, K).
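As a concrete check of this layered construction, the following sketch (my own illustration; the fine and coarse spacings reflect my reading of the construction above, and the parameter values are arbitrary) verifies that the sum of the two layers is uniform on the evenly spaced K-point alphabet:

```python
# Layered ESDU construction: with K = K1*K2 and step d = A/(K-1), the fine
# layer X1 ranges over {0, d, ..., (K1-1)d} and the coarse layer X2 over
# {0, K1*d, ..., (K2-1)*K1*d}; their sum covers the ESDU(A, K) alphabet
# exactly once, so X = X1 + X2 ~ ESDU(A, K) for independent uniform layers.
import itertools

A, K1, K2 = 10.0, 3, 4
K = K1 * K2
d = A / (K - 1)

X1 = [i * d for i in range(K1)]           # fine layer, carries M1
X2 = [j * K1 * d for j in range(K2)]      # coarse layer, carries M2

sums = sorted(x1 + x2 for x1, x2 in itertools.product(X1, X2))
target = [i * d for i in range(K)]        # the ESDU(A, K) alphabet
assert all(abs(s - t) < 1e-9 for s, t in zip(sums, target))
print(f"K = {K} points spanning [0, {A}] with spacing {d:.4f}")
```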
For decoding, receiver 2 decodes M₂ from its received signal, while receiver 1 first decodes M₂ to obtain X₂, subtracts its contribution from the received signal, and then decodes M₁. The achievable rate region is the convex hull of the union over (K₁, K₂) of rate pairs satisfying R₁ ≤ I(X₁; X₁ + Z₁) (3a) and R₂ ≤ I(X; Y₂) − I(X₁; X₁ + Z₂) (3c), where (3c) follows since I(X₂; Y₂) = I(X; Y₂) − I(X₁; Y₂ | X₂) = I(X; Y₂) − I(X₁; X₁ + Z₂) for the independent inputs X₁ and X₂. In order to simplify the evaluation of this achievable rate region, we aim to express the mutual information terms in (3a) and (3c) in closed form. To this end, we need some preliminaries, which are presented in the following subsection.
A. Useful P2P Rate Bounds
Here we present bounds on the rate that can be achieved in a peak-constrained Gaussian P2P channel using an ESDU input distribution. We start by recalling capacity bounds for the peak-constrained Gaussian P2P channel with a continuous uniform input distribution which will be useful afterwards.
Lemma 1 (Continuous uniform distribution rate). Given X ∼ Unif([0, A]) and Z ∼ N(0, σ²), we have C(A, σ) ≤ I(X; X + Z) ≤ min{C̄(A, σ), (1/2) log(1 + A²/(12σ²))}, where C(A, σ) and C̄(A, σ) are the lower and upper bounds on I(X; X + Z) given in [5], [7]. Proof: The proof is given in Appendix A. Note that C(A, σ) and C̄(A, σ) are also lower and upper bounds on the capacity of the peak-constrained Gaussian P2P channel, as shown in [5], [7].
Next, we provide lower and upper bounds on the achievable rate of an ESDU input in the following lemmas.
Lemma 2 (ESDU rate lower bound). Given X ∼ ESDU(A, K) and Z ∼ N(0, σ²), we have I(X; X + Z) ≥ F(A, K, σ), where F(A, K, σ) is the largest of the three components F₁(A, K, σ), F₂(A, K, σ), and F₃(A, K, σ) given in (7), with E as defined in (4). Proof: The proof is given in Appendix B.
Remark 1.
Regarding the three components of F(A, K, σ): F₁(A, K, σ) is tighter than the others when K is small so that A/(K−1) is large, while F₂(A, K, σ) and F₃(A, K, σ) are tighter when K is large so that A/(K−1) is small. As an additional remark, for an ESDU input, the new analytical lower bound F(A, K, σ) is tighter than the Ozarow-Wyner-B bound [18], [19, eq. (9)]. The proof is given below. Denote ∆ = A/(K−1); the claim then follows by comparing the two bounds term by term, where E is as defined in (4).
Lemma 3 (ESDU rate upper bound). Given X ∼ ESDU(A, K) and Z ∼ N(0, σ²), we have I(X; X + Z) ≤ G(A, K, σ), with G(A, K, σ) as defined in (12). Proof: The proof is given in Appendix C.
Remark 2. The upper bound G(A, K, σ) combines H(X) = log K, which is good when K is small; a component which is good when K is large so that A/(K−1) is small; and C̄(A, σ), which is good overall but is a capacity upper bound (not specific to an ESDU-distributed input). This conclusion is examined in Fig. 1.
Now we are ready to present the main results of the paper on the achievable rate region of a peak-constrained Gaussian BC.
B. BC Achievable Rate Region
The new achievable rate region obtained using an ESDU in a peak-constrained Gaussian BC is given next.
Theorem 1 (A computable BC capacity inner bound). Given a peak-constrained Gaussian BC as defined in Sec. II, and given any K_i ≥ 1, i = 1, 2, with K = K₁K₂ and A₁ = (K₁ − 1)A/(K − 1) denoting the span of the alphabet of X₁, all rate pairs (R₁, R₂) ∈ R²₊ that satisfy R₁ ≤ F(A₁, K₁, σ₁) and R₂ ≤ F(A, K, σ₂) − G(A₁, K₁, σ₂) are achievable, where F and G are defined in (7) and (12), respectively.
Proof: The proof is obtained based on (3) while using Lemmas 2 and 3 to lower-bound I(X 1 ; X 1 +Z 1 ) and I(X; Y 2 ) and to upper-bound I(X 1 ; X 1 + Z 2 ), respectively, where both X 1 and X follow the ESDU distribution.
To assess the above inner bound, we use the following capacity outer bound [13].
Theorem 2 (BC capacity outer bound). Any achievable rate pair (R₁, R₂) ∈ R²₊ in a peak-constrained Gaussian BC as defined in Sec. II satisfies (R₁, R₂) ∈ G = G₁ ∩ G₂, where G₁ is the convex hull of the union over ρ ∈ [0, 1] of the rate pairs (R₁, R₂) ∈ R²₊ satisfying the bounds derived in [13], and G₂ restricts the sum rate to the capacity of the less noisy link. Proof: The proof of the outer bound G₁ is given in [13]. The proof of the outer bound G₂ follows from the degradedness property [17, Sec. 5.4], implying that the sum rate cannot be larger than the capacity of the less noisy link (from the transmitter to receiver 1).
IV. NUMERICAL RESULTS
In this section, we numerically examine the obtained results. The lower and upper bounds for the ESDU rate in Lemmas 2 and 3 are shown first, followed by the achievable rate region of the BC. Without loss of generality, we set σ = σ₁ = 1 throughout the simulations.
A. The Lower and the Upper Bounds of ESDU Rate
Let X ∼ ESDU(A, K) be the input of a peak-constrained Gaussian P2P channel with peak constraint A, and with output X + Z, where Z ∼ N(0, σ²). The achievable rate I(X; X + Z) is lower- and upper-bounded as in Lemmas 2 and 3, respectively. To plot these bounds, we let K = max{2, ⌈A/∆₀⌉ + 1} and consider ∆₀ = 0.5iσ, i = 1, . . . , 20, in our simulation. Fig. 1 shows the comparison between the lower bound, the upper bound, and I(X; X + Z) under some representative settings. We also plot C(A, σ) and C̄(A, σ) as a benchmark, and plot H(X) to examine Remark 2. It can be seen in Fig. 1 that I(X; X + Z) always lies between the obtained ESDU bounds, and that the upper bound remains very close to I(X; X + Z) in all tested settings.

B. The BC Achievable Rate Region

To evaluate the simplified inner bound (14), we vary ∆₀ within {0.5iσ₁ | i = 1, . . . , 20}. Then, for each ∆₀, we let K = max{2, ⌈A/∆₀⌉ + 1} and vary K₁ in {0, . . . , K}. For each K₁, we choose K₂ to be the smallest integer such that ∆ = A/(K₁K₂ − 1) ≤ ∆₀. The same procedure is used for evaluating (3), except that ∆₀ here is taken within {iσ₁ | i = 1, . . . , 10} for the sake of less computation time. It can be seen from Fig. 2 that the ESDU-based inner bound (3) always outperforms the TG-based one, and that the gap between the inner bound (3) and its simplified form (14) is within 0.2 bits in all tested cases. The gap is mainly attributed to the relatively loose ESDU lower bound, as shown in Fig. 1. It is also worth noting that the settings of K and K₁ given in this simulation help to achieve rate pairs close to the boundary of the inner bound (14) under each ∆₀, which is most significant around the maximum sum-rate point. This can be seen from the rate pairs associated with each ∆₀, where the case ∆₀ = 3σ₁ is provided as an example in Fig. 2.
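For completeness, the quantity I(X; X + Z) plotted in Fig. 1 can be evaluated numerically. The sketch below is an assumed implementation (not the authors' code): it computes the ESDU rate as h(Y) − h(Z), with the differential entropy of the Gaussian-mixture output density obtained by integration on a fine grid, and reuses the rule K = max{2, ⌈A/∆₀⌉ + 1}:

```python
# Numerical evaluation of I(X; X+Z) in bits for X ~ ESDU(A, K), Z ~ N(0, s^2),
# via I = h(Y) - h(Z); h(Y) is integrated on a grid (tails beyond 6 sigma are
# negligible). An assumed implementation, not the paper's code.
import numpy as np

def esdu_rate(A, K, sigma, n_grid=20001):
    x = np.linspace(0.0, A, K)                        # ESDU(A, K) alphabet
    y = np.linspace(-6 * sigma, A + 6 * sigma, n_grid)
    dy = y[1] - y[0]
    # output density f_Y(y) = (1/K) * sum_i N(y; x_i, sigma^2); it is strictly
    # positive on this grid, so the log below is safe
    f_y = np.exp(-((y[:, None] - x[None, :]) ** 2) / (2 * sigma ** 2)).sum(axis=1)
    f_y /= K * sigma * np.sqrt(2 * np.pi)
    h_y = -np.sum(f_y * np.log2(f_y)) * dy            # differential entropy, bits
    h_z = 0.5 * np.log2(2 * np.pi * np.e * sigma ** 2)
    return h_y - h_z

sigma, A = 1.0, 10.0
for i in (2, 6, 10):                                  # Delta0 = 0.5 * i * sigma
    delta0 = 0.5 * i * sigma
    K = max(2, int(np.ceil(A / delta0)) + 1)
    print(f"Delta0 = {delta0:.1f}, K = {K}: rate ~ {esdu_rate(A, K, sigma):.3f} bits")
```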
V. CONCLUSIONS AND OPEN QUESTIONS
We studied the achievable rate region of an evenly-spaced discrete uniform distribution (ESDU) in a peak-constrained Gaussian broadcast channel (BC). To this end, we derived new lower and upper bounds for the ESDU rate achieved in a Gaussian channel, i.e., I(X; X + Z) with X following an ESDU distribution and Z being Gaussian noise. We provided numerical results to examine the analytical results. The ESDU-based BC inner bound is shown to outperform the benchmark inner bound in the literature, which is based on a truncated Gaussian (TG) distribution. In addition, the obtained ESDU rate upper bound for the P2P channel is remarkably tight in all tested settings.
Future work can target tightening the lower bound on the ESDU rate, which can help close the gap between the approximation and the actual ESDU inner bound. Moreover, the work can be extended to consider non-uniform distributions over an evenly-spaced alphabet (such as the geometric distribution [6], [20]), which are useful for peak- and average-constrained channels that model LiFi applications.
APPENDIX A PROOF OF LEMMA 1
Firstly, the lower bound I(X; X + Z) ≥ C(A, σ) follows from [5, Thm. 5]. C̄(A, σ) is the combination of the capacity upper bounds of the [0, A]-peak-constrained Gaussian channel in [5, Thm. 5] and [7, eq. (12)], so it is direct to obtain I(X; X + Z) ≤ C̄(A, σ). It remains to prove the upper bound I(X; X + Z) ≤ (1/2) log(1 + A²/(12σ²)). We have I(X; X + Z) = h(X + Z) − h(Z) ≤ (1/2) log(2πe(V[X] + σ²)) − (1/2) log(2πeσ²) = (1/2) log(1 + A²/(12σ²)), where the inequality follows since the Gaussian distribution maximizes the differential entropy under a variance constraint, and the last equality follows since V[X] = A²/12 for X ∼ Unif([0, A]).

APPENDIX B PROOF OF LEMMA 2

We first prove that I(X; X + Z) ≥ F₁(A, K, σ). Let X̂ be the nearest-neighbor estimator of X from X + Z. Then, using Fano's inequality, we have I(X; X + Z) = H(X) − H(X | X + Z) ≥ log K − H(P[X̂ ≠ X]) − P[X̂ ≠ X] log(K − 1). To calculate P[X̂ ≠ X], let ∆ = A/(K − 1) and p₀ = Q(∆/(2σ)). Note that P[X̂ ≠ X | X = A] = p₀, and for x ∈ X \ {0, A} we have P[X̂ ≠ X | X = x] = 2p₀. Thus, P[X̂ ≠ X] = 2p₀(K − 1)/K. By substituting (20b) into (18b), we obtain I(X; X + Z) ≥ F₁(A, K, σ). Then, we prove that I(X; X + Z) ≥ F₂(A, K, σ). Define an independent random variable U ∼ Unif([0, ∆)). We have I(X; X + Z) ≥ I(X + U; X + U + Z) − I(U; U + Z), where the first inequality step follows since I(U; U + Z) − I(U; X + U + Z) = I(X; U | X + U + Z) ≥ 0, and the second step follows since X + U and U are continuous uniformly distributed with X + U ∼ Unif([0, A + ∆]), so that Lemma 1 applies. By substituting ∆ = A/(K − 1), we obtain I(X; X + Z) ≥ F₂(A, K, σ). | 2022-08-12T01:15:41.838Z | 2022-08-11T00:00:00.000 | {
"year": 2022,
"sha1": "645382bb86a84266c85fe82c7de5db8b097a7a56",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d7c2fc310b1e32143e588ab5a0a72dffba6ca8a6",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
243841170 | pes2o/s2orc | v3-fos-license | RESEARCH OF APPROACHES TO FORMATION OF LEGISLATION IN THE SPHERE OF ONLINE RETAIL SELLING (DISTANCE SELLING) OF MEDICINES
The aim of the study is to examine approaches to the formation of legislation in the field of the online retail selling (distance selling) of medicines in the EU, with the further development of areas for improvement of the pharmaceutical legislation of Ukraine. Materials and methods. During the research, scientific methods were used, in particular, system-analytical, content analysis, comparative legal, graphic, etc. Results. The study analyzed the main provisions of EU Directive 2000/31/EC "On certain legal aspects of information society services, in particular electronic commerce, in the internal market" (Directive on electronic commerce) and EU Directive 2001/83/EC "On the Community code relating to medicinal products for human use", the Council of Europe Convention "On the counterfeiting of medical products and similar crimes involving threats to public health", Implementing Regulation of the EU Commission No. 699/2014 of June 24, 2014, as well as the Laws of Ukraine "On Electronic Commerce" and "On Medicines", the Resolutions of the Cabinet of Ministers of Ukraine (CMU) of March 23, 2020 No. 220 and of November 30, 2016 No. 929, and others. Conclusions. The main directions for improving the current legislation of Ukraine in the field of the online retail selling of medicines are proposed. The necessity of supplementing the Regulation "On the State Service of Ukraine for Medicines and Drug Control" (Resolution of the Cabinet of Ministers of Ukraine of November 30, 2016 No. 929) with certain norms has been substantiated; in particular, to impose on this state body the obligation to register business entities that plan to sell drugs using information and communication means, as well as to maintain their Register
Introduction
After signing and ratifying the Association Agreement between Ukraine and the European Union (EU), our state has undertaken a number of legal obligations, in particular, to ensure the gradual adaptation of national legislation to EU law in accordance with the areas defined in the Agreement, one of which is electronic commerce [1,2]. An important step aimed at fulfilling this task was the adoption on September 17, 2020 of the Law of Ukraine "On Amendments to Article 19 of the Law of Ukraine "On Medicines"" (Law No. 904-IX), which enshrines the possibility of pharmacy retail trade in medicines (drugs) through information and communication networks [3].
In the EU, the procedure for selling medicines online (distance selling) to the public is regulated by special regulations, the legal norms of which have ensured the acquisition of positive practical experience in carrying out such activities in European countries. According to research in Ukraine, for the effective implementation of the mechanism of the online retail selling of medicines, it is necessary to further develop and adopt a number of regulations [4-6].
The urgent need to create a proper system of legal regulation of pharmaceutical care to the population of Ukraine in terms of the introduction of online retail selling of medicines is due to a number of reasons, including: the spread of self-medication among the population; detection of errors related to drugs, in particular during retail selling in pharmacies [7]; formation of new market conditions of competition among pharmacies and their networks, as well as the introduction of new marketing approaches to the formation of the product range of pharmacies and attracting consumers [8].
The aim of the study is to examine approaches to the formation of legislation in the field of the online retail selling (distance selling) of medicines in the EU, with the further development of areas for improvement of the pharmaceutical legislation of Ukraine.
Planning (methodology) of research
The study consisted of several stages. At the first stage, regulations and scientific literature on the topic were studied, and the directions of research were chosen. In particular, on the basis of the elaboration of sources of law in the field of retail selling medicines online (distance selling), the main requirements for business entities carrying out this type of activity in accordance with the legislation of the EU and Ukraine were determined. The inconsistency of the Ukrainian legislation with the requirements of the EU directives on the existence of a state supervisory body as a participant in the implementation of online retail selling of medicines was revealed. In addition, the opinions of domestic and foreign experts on the possibility of introducing electronic retail trade of drugs in Ukraine were studied [4-16]. It is emphasized that the improvement of the legal acts in force in Ukraine in this area is driven by the practical necessity of this type of activity in connection with restricted movement and the reduced physical accessibility of medicines for the population of Ukraine due to the coronavirus (COVID-19) pandemic [4-16].
At the second stage, the necessity of carrying out the research was substantiated, its purpose and tasks were defined, and the methods were chosen. A number of previously unresolved issues were identified, namely: gaps in the licensing conditions for retail selling medicines online in relation to the requirements for business entities, and in the qualification requirements for the personnel that will collect, process, and form the order and provide pharmaceutical care to the consumer during the online purchase of medicines; the lack of legal requirements for pharmacies' own delivery services, telecom operators, and transport companies that will deliver medicines to the end user; and the lack of a legal mechanism for state control (supervision) of retail selling medicines online at all stages in order to prevent the circulation of low-quality, counterfeit, and unregistered medicines.
The registration of the results of the work and the conclusions, and the formulation of directions for the improvement of the legislation of Ukraine in the field of the online retail selling (distance selling) of medicines, were carried out at the third stage of the research.
Materials and methods
The materials of the study were the legislation of Ukraine and the EU on e-commerce and retail selling of medicines through information and communication tools.
The following methods were used during the study: system-analytical (clarification of the level of research of the problem in the domestic and foreign literature); content analysis (elaboration of the provisions of the regulations governing the retail sale of medicines through information and communication tools, as well as understanding and interpreting the content of certain legal norms); the comparative legal method (comparison of the features of the mechanism of legal regulation of pharmaceutical assistance to the population in the EU and in Ukraine, for further proposals to improve the current Ukrainian legislation in this area); and graphic methods (visualization of the results of the research and schematic representation of a number of provisions).
Research results
The legal basis for the introduction of online retail trade of medicines in the EU is formed by the provisions of the "Directive on electronic commerce" (EU Directive 2000/31/EC) and the "Community code relating to medicinal products for human use", as amended (EU Directive 2001/83/EC), which establish the basic principles and rules of the retail sale of medicines through information and communication tools [17,18].
The EU Directive on e-commerce promotes the creation of a legal framework to ensure the free movement of goods and services between EU countries, identifies the main subjects of the relationship in the process of e-commerce of goods and services, and approves the basic concepts of transactions arising as a result of e-commerce (the procedure for concluding contracts using electronic means, placing orders by the consumer, etc.) [17].
It should be noted that the regulatory framework for the circulation of drugs in the EU, including circulation with the use of information and communication tools, is EU Directive 2001/83/EC, in which the online retail selling (distance selling) of medicines is regulated by Title VIIA "Sale at a distance to the public". According to this Title, each EU member state may introduce a new type of pharmaceutical activity on its territory: online sales of medicines to the public [18]. The participants in this process are the entities presented in Fig. 1.
An analysis of the legal norms contained in Title VIIA "Sale at a distance to the public" of EU Directive 2001/83/EC allows us to state that in the field of retail selling medicines online the relevant government agency has the following main functions: control over compliance with the legislation in the field of retail selling medicines online; providing information on the national pharmaceutical legislation on the drugs that are allowed for e-retail, and on the differences in legal norms between EU countries regarding the classification of medicines and the conditions of their sale; maintaining the Register of business entities that are allowed to carry out retail selling medicines online, with their names, the start date of the specified activity, and the address of the website used to sell drugs to the final consumer; and informing about the risks associated with the purchase of drugs through information and communication tools and about measures to prevent counterfeit drugs from entering legal circulation, by approving and implementing a common logo, its function, and the purpose of its use. Another e-retail participant is the pharmacy, which must be properly registered in an EU Member State as a legal entity or individual entrepreneur, have a retail license, and comply with the licensing conditions approved by the EU legislature. Each such pharmacy must be recorded in the Register of business entities engaged in online retail selling of medicines through information and communication tools and have a website with the following information: the name of the pharmacy and its address; the name and address of the state controlling body that exercises control over the conduct of online retail selling of medicinal products; data on the availability of a license that allows activities related to the online retail selling (distance selling) of medicines; and a common logo, which is a link to the Register of pharmacies that are allowed to carry out online retail selling (distance selling) of medicinal products.
An equally important subject of e-retail of medicines is the end consumer, who buys medicines through information and communication tools. The main task of the EU regulatory authorities in the health care system is to ensure the proper quality of medicines for the consumer and to prevent the entry of counterfeit medicines, which can lead to negative consequences for the consumer's life and health.
In order to protect consumer rights and prevent the negative consequences of the use of counterfeit medicines, the Council of Europe has developed a number of measures to prevent the circulation of counterfeit medicines, which introduced legal liability of e-retailers for these actions [9-16, 19].
At the next stage of the study, an analysis of the Ukrainian pharmaceutical legislation on the regulation of online retail selling (distance selling) of medicines was carried out [11-13, 15, 16]. It should be noted that in Ukraine the formation of the legal framework for this type of activity is at an early stage.
It is established that in Ukraine the legal basis for the introduction of retail selling medicines online consists of several regulations, different in legal force ( Table 1).
The analysis of the current legislation of Ukraine on conducting business activities related to e-retail trade of medicines allows the identification of several entities, namely: the state regulatory authority, the pharmacy, and the end user, as well as the entity whose activities are aimed at the delivery of medicines from the pharmacy to the final consumer (a postal operator on the basis of contractual relations, the pharmacy's own medicine delivery service, etc.).
According to the Law of September 17, 2020 No. 904-IX "On Amendments to Article 19 of the Law of Ukraine "On Medicines" on the implementation of electronic retail sale of medicines", responsibility for the quality of drugs purchased through online retail, at the stage of their delivery to the end consumer, rests with the pharmacy [3]. The State Service of Ukraine for Medicines and Drug Control (State Medical Service) has been designated as the state body to control the procedure for conducting online retail selling of medicines [3]. However, assigning such a function to this body, and defining the procedure for conducting this control, require appropriate amendments to the Regulations on this body [20].
Fig. 1. Legal entities in the field of e-retail in EU countries: the state body that carries out control activities in the field of retail selling medicines online; the pharmacy that carries out retail selling medicines online; the consumer; and the delivery services (the pharmacy's own medicine delivery service and postal operators).

In addition, the State Medical Service is obliged to register business entities that plan to sell medicines at retail with the help of information and communication tools and to maintain a Register of business entities that plan to carry out this type of drug activity [3]. However, the structure of the said Register and the procedure for registration of such pharmacies are not defined in any regulations.
The Law of September 17, 2020 No. 904-IX also contains a provision that online retail selling of medicines may be carried out by a pharmacy that has a license to retail medicines [3]. In this regard, there is an urgent need to develop and adopt legal norms on the structure of the Register of pharmacies that may sell medicines at retail with the help of information and communication tools, as well as on the procedure for its maintenance.
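For illustration only, one Register entry could carry the fields that Title VIIA of Directive 2001/83/EC requires to be made public (the pharmacy's name, the start date of the distance-selling activity, and the website address), together with the license data mentioned in Law No. 904-IX. The sketch below is hypothetical; every field name is an assumption, since no official structure has yet been approved:

```python
# Hypothetical sketch of a single Register record; field names are assumed,
# based on the EU Directive's published fields and the Ukrainian license data.
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisterEntry:
    pharmacy_name: str        # name of the licensed pharmacy (legal entity)
    license_number: str       # retail license permitting online selling
    online_sales_start: date  # date the distance-selling activity began
    website_url: str          # website used to sell medicines to consumers

entry = RegisterEntry(
    pharmacy_name="Example Pharmacy LLC",
    license_number="AB-000000",
    online_sales_start=date(2021, 1, 15),
    website_url="https://example-pharmacy.ua",
)
print(entry)
```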
An analysis of the content of the provisions of Art. 19 of the Law of Ukraine "On Medicines" allows us to conclude that the basic requirements for pharmacies that plan to conduct online retail selling (distance selling) of medicines are the following: to ensure the delivery of medicines to the final consumer through their own delivery service or through a service with which there is a contractual relationship for the supply of medicines to the final consumer; and to have a website on which the online retail selling of medicines will be conducted and which meets the requirements of Law of Ukraine No. 904-IX [3].
In addition, the legislator outlined the basic requirements for filling the website with the following mandatory information: the availability of information on the contact details of the body for licensing and quality control of medicines (the State Medical Service); the placement on all tabs of the site of a logo with a hyperlink to the Register of business entities that have the right to carry out retail selling medicines online; the availability of pharmaceutical care at all stages of retail selling medicines online; and the indication of information on the cost of medicine delivery.
That is why one of the urgent tasks of lawmaking is to approve the design of the logo with a hyperlink to the specified Register.
In addition, there are gaps in the legislation of Ukraine regarding the establishment of legal liability for the sale of counterfeit and low-quality medicines through information and communication tools, which increases the risk of such medicines reaching consumers and, consequently, negative impact on their lives and health.
Discussion of research results
According to the results of the study, topical issues related to the development of legal norms, as well as the organizational measures necessary for the introduction of retail trade of medicines through information and communication tools, were identified; these are proposed to be divided into several areas (Fig. 2). Legislative and organizational work in the first direction should be aimed at: defining clearer requirements for pharmacies that are allowed to conduct online retail selling of medicines, taking into account the experience of EU member states, and making additions to the Licensing Requirements [21]; developing standard working methods (standard operating procedures) for online retail selling of medicines; and determining the procedure for registering orders and dispatches of medicine orders placed by means of information and communication tools, and the form of the corresponding logbook in paper or electronic form, which must meet the requirements of the Law of Ukraine "On Electronic Documents and Electronic Document Management" [22].
In the second direction, it is advisable to develop legal norms that establish: amendments to the Regulations on the State Medical Service introducing a supervisory function over activities related to online retail selling of medicines [17]; a list of the normative legal acts that regulate the procedure and frequency of inspections by the State Medical Service of pharmacies carrying out online retail selling of medicines with respect to the quality of medicines; the structure of the Register of pharmacies that are allowed to carry out online retail selling of medicines and the procedure for its maintenance; and the positions and qualification requirements for the pharmaceutical workers who are to provide pharmaceutical care to consumers when making online purchases of medicines.
To achieve the goal of the third direction, it is necessary: to introduce a new type of economic activity subject to licensing (the delivery of medicines to the final consumer in e-retail) and the requirements for it; to adopt relevant legal norms under which the transport service (the person who delivers the medicines) is also responsible for the quality of medicines at the stage of delivery to the final consumer; and to develop legal norms that set out the mandatory requirements necessary to ensure proper conditions for the transportation of medicines that require special storage conditions.
In the fourth direction, it is necessary: to develop and approve a logo for Ukraine, which would act as a link between the website of a pharmacy that carries out online retail selling of medicines and the Register of pharmacies that are allowed to carry out this type of activity; and to establish legal liability for violation of the rules of online retail selling of medicines or for the sale of counterfeit or substandard medicines in the course of such activities.

Study limitations. The study examined approaches to the formation of the general principles of EU and Ukrainian legislation in the field of the online retail selling (distance selling) of medicines. A limitation of the study was the examination of the national legislation of EU member states, taking into account the peculiarities of the traditions of the national legal systems of different countries.
The prospect for further research. We consider the analysis of the legislation of Ukraine and of foreign countries on legal liability for violation of the rules of online retail selling of medicines to be a promising direction for further research in this field.
Conclusions
The study of approaches to the formation of EU and Ukrainian legislation on the online retail selling (distance selling) of medicines highlighted the legal status of the subjects (participants) of this process.
Based on the elaboration of the norms regulating online retail selling of medicines, the need for further improvement of the current pharmaceutical legislation of Ukraine in this area, in order to harmonize it with the legal norms of the relevant EU Directives, is substantiated, and specific directions for such activities are proposed.
Conflicts of interest
The authors declare that there is no conflict of interests.
Financing
The study was performed without financial support. | 2021-11-08T16:03:46.862Z | 2021-10-29T00:00:00.000 | {
"year": 2021,
"sha1": "d7aa141659e1dd6c7ef0be1cf92fa68ff1b2ddc5",
"oa_license": "CCBY",
"oa_url": "http://journals.uran.ua/sr_pharm/article/download/201074/241536",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "aee2829d95e58202f56c9732fe3d9669ab5379bb",
"s2fieldsofstudy": [
"Law",
"Medicine"
],
"extfieldsofstudy": []
} |
230524327 | pes2o/s2orc | v3-fos-license | Mixed Finite Element Method for Geometrically Nonlinear Buckling Analysis of Truss with Member Length Imperfection
This paper focuses on a numerical method for the geometrically nonlinear buckling analysis of trusses with initial member length imperfection. The solution of the nonlinear buckling problem of a truss with imperfection using a displacement-based finite element depends on how the imperfection is implemented. Generally, the operation of incorporating the initial imperfection into the master stiffness equation proceeds by the master-slave elimination method, the penalty augmentation method, or the Lagrange multiplier adjunction method. Obviously, the initial imperfection considerably increases the difficulty of the finite element formulation of the nonlinear buckling problem. This research proposes a novel approach to formulating the nonlinear buckling problem of a truss with imperfection using the mixed finite element method. The mixed balance equation of the truss is formulated using the principle of stationary potential energy. The paper presents a novel mixed finite truss element, including initial member length imperfection, considering large displacement. Using the arc length technique, the research develops a new incremental-iterative algorithm for solving the nonlinear buckling problem of a truss with initial imperfection in different cases of model formulation, including displacement-based and mixed finite element formulations. A numerical test is presented to investigate the equilibrium path for a plane truss with initial member length imperfection. The calculation results obtained for the problem formulated in both the displacement-based and mixed finite element models converge, demonstrating the efficiency and reliability of the proposed method.
Introduction
Truss structures have been widely used in various public buildings due to their outstanding advantages in material saving and maximum utilization of structure load capacity. Many truss members have initial geometric imperfection as a result of manufacturing, transporting, and handling processes. The member initial imperfection will probably affect the buckling behaviour of structures. In geometrically nonlinear finite element analysis, the nodal equivalent force cannot be used to replace the imperfection. Using a displacement-based finite element, the solution of the buckling analysis of a truss with imperfection is dependent on the implementation of the imperfection. For incorporating member imperfection into the master stiffness matrix, the master-slave elimination method, penalty augmentation method or Lagrange multiplier adjunction method are usually used [1]. Obviously, the operation of imposing initial member imperfection considerably increases the difficulty of the finite element formulation of nonlinear buckling. This paper proposes a novel approach to formulating the nonlinear buckling problem of a truss with imperfection that escapes the difficulties of the mathematical treatment of imperfection. The contribution deals with a mixed variational formulation with an initial imperfection condition. Based on the principle of stationary potential energy, the mixed-form balance equation of the truss is constructed. For solving the proposed problem, the arc length method is employed, which is a very efficient method to predict the proper response and follow the nonlinear equilibrium path through limit points. Using the arc length technique, the work establishes an incremental-iterative algorithm for solving the nonlinear buckling problem of a truss with initial imperfection. Based on the proposed algorithm, the calculation procedure and programs for determining the nonlinear equilibrium path are established. For investigating the nonlinear equilibrium path, a numerical test is presented for a plane truss with member initial geometrical imperfection. Notation: u₁, u₂, u₃, u₄ and P₁, P₂, P₃, P₄ - nodal displacements and forces in the global coordinate system; Pₑ - resultant external force at the i-th cross section after deformation; N - axial load of the truss element; A - cross-sectional area of the truss element; E - elastic modulus of the material.
The virtual external work can be defined in terms of the nodal forces and the virtual displacements. Based on the principle of virtual work, in equilibrium the virtual work of the forces applied to a system is zero. Adding the deformation from Eq. (2) to Eq. (7) leads to Eq. (8). After introducing the designations of Eqs. (9)-(11), Eq. (12) can be compactly written as Eq. (13). In Eq. (13), k(e)(u) is the mixed matrix of the truss element considering the initial length imperfection, expressed through the deformed element projections (L cos α + u₃ − u₁) and (L sin α + u₄ − u₂). The potential energy of the unconstrained finite element model is then formed, and the constrained problem is converted into an unconstrained one using the Lagrange multipliers method [4]: to impose the constraint, additional unknowns, the Lagrange multipliers, are adjoined. The resulting system (24) is nonlinear; it can be expressed in incremental form as system (25) and written in compact form.
Algorithm for solving the nonlinear buckling problem based on the arc length method
The arc length method [5-9], in both load control and displacement control, is a very efficient method for solving nonlinear systems of equations when the problem under consideration exhibits one or more critical points.
Based on the spherical arc length method, the block diagram of the algorithm for solving the nonlinear buckling problem is established (shown in figure 2).
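To make the scheme of figure 2 concrete, the following minimal sketch (my own, not the paper's program) applies a spherical arc-length predictor with a full Newton corrector on the augmented system to a single-degree-of-freedom two-bar (von Mises) truss whose members carry an initial length imperfection imp; all parameter values are illustrative assumptions:

```python
# Spherical arc-length tracing of the equilibrium path lambda*P = F(u) for a
# shallow two-bar truss with member length imperfection `imp` (a sketch).
import numpy as np

def apex_force(u, b, h, Lm, EA):
    """Downward apex force that equilibrates apex displacement u."""
    L = np.hypot(b, h - u)             # current member length
    eps = (L - Lm) / Lm                # engineering strain (Lm includes imp)
    return -2.0 * EA * eps * (h - u) / L

def arc_length_path(b=1.0, h=0.2, imp=0.005, EA=1.0, P=1.0, dl=0.02, steps=150):
    Lm = np.hypot(b, h) + imp          # unstressed member length with imperfection
    u = lam = 0.0
    du_o, dlam_o = 0.0, 1.0            # previous increment (sets predictor sign)
    path = [(u, lam)]                  # note: path[0] is only the initial guess
    eps_fd = 1e-7                      # finite-difference step for the stiffness
    for _ in range(steps):
        kt = (apex_force(u + eps_fd, b, h, Lm, EA) - apex_force(u, b, h, Lm, EA)) / eps_fd
        ut = P / kt                    # tangent displacement (assumes kt != 0)
        sgn = 1.0 if ut * du_o + dlam_o >= 0.0 else -1.0
        dlam = sgn * dl / np.sqrt(ut * ut + 1.0)      # spherical predictor
        du = ut * dlam
        for _ in range(30):            # Newton corrector on the augmented system
            g = apex_force(u + du, b, h, Lm, EA) - (lam + dlam) * P
            c = du * du + dlam * dlam - dl * dl       # arc-length constraint
            kt = (apex_force(u + du + eps_fd, b, h, Lm, EA)
                  - apex_force(u + du, b, h, Lm, EA)) / eps_fd
            step = np.linalg.solve([[kt, -P], [2 * du, 2 * dlam]], [-g, -c])
            du, dlam = du + step[0], dlam + step[1]
            if abs(g) < 1e-10 and abs(c) < 1e-12:
                break
        u, lam = u + du, lam + dlam
        du_o, dlam_o = du, dlam
        path.append((u, lam))
    return np.array(path)              # columns: displacement u, load factor lambda

path = arc_length_path()
print("limit load factor along the traced path:", path[:, 1].max())
```

The corrector solves the 2x2 augmented system exactly, so the scheme passes through limit points where the tangent stiffness vanishes, which a pure load-control Newton iteration cannot do.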
Example formulation
The system is composed of bars made of the same material and having the same geometrical properties (the system is shown in figure 3), with a length imperfection ∆₁ in one member. The geometric parameters, material parameters and loading parameters are given in figure 3.
Conclusions
In solving the geometrically nonlinear problem of a truss with member imperfection, the mixed formulation model is simpler and more effective than the displacement-based formulation model.

The mixed model of the finite element formulation has a remarkable advantage in the analysis of problems with nonlinear displacement constraints. Using the presented mixed formulation helps to escape the difficulties of the mathematical treatment of imperfection in the nonlinear finite element model.
"year": 2020,
"sha1": "ac8010d15e3ca101898c6cbbf366b7d220c9f10c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/960/2/022075",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f0e88dc4b2e63d90e77d6e9ce2d28fa3683e5f7d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
225084046 | pes2o/s2orc | v3-fos-license | Clinical Outcome of Percutaneous Trigeminal Nerve Block in Elderly Patients in Outpatient Clinics
Objective Trigeminal neuralgia (TN) is a severe neuropathic condition that affects several elderly patients. It is characterized by uncontrolled pain that significantly impacts the quality of life of patients. Therefore, the condition should be treated as an emergency. In the majority of patients, pain can be controlled with medication; however, other treatment modalities are being explored in those who become refractory to drug treatment. The use of the trigeminal nerve block with a local anesthetic serves as an excellent adjunct to drug treatment. This technique rapidly relieves the patient of pain while medications are being titrated to effective levels. We report the efficacy and safety of percutaneous trigeminal nerve block in elderly patients with TN at our outpatient clinic. Methods Twenty-one patients older than 65 years with TN received percutaneous nerve block at our outpatient clinic. We used bupivacaine (1 mL/injection site) to block the supraorbital, infraorbital, superior alveolar, mental, and inferior alveolar nerves according to pain sites of patients. Results All patients reported relief from pain, which decreased by approximately 78% after 2 weeks of nerve block. The effect lasted for more than 4 weeks in 12 patients and for 6 weeks in two patients. There were no complications. Conclusion Percutaneous nerve block procedure performed at our outpatient clinic provided immediate relief from pain to elderly patients with TN. The procedure is simple, has no serious side effects, and is easy to apply.
INTRODUCTION
Trigeminal neuralgia (TN) is a neuropathic condition characterized by recurrent brief episodes of sudden stabbing (lancinating) facial pain involving the trigeminal nerve. The condition is mostly unilateral and can involve one or more branches of the trigeminal nerve. The severe paroxysmal pain associated with the condition affects the physical functioning of patients and reduces their quality of life 15,25). TN predominantly occurs in individuals older than 60 years. Its onset is rare in individuals younger than 40 years, except in those with multiple sclerosis or cancer, where symptoms differ from the "classic" TN in terms of facial sensory loss and pain distribution along the branches of the trigeminal nerve 4,22). The common age of onset of TN is 40 to 60 years; however, years or even decades pass before the initially brief episodes of pain become intractably severe or frequent and unresponsive to medication. A majority of patients first seek neurosurgical intervention for their symptoms in their eighth decade of life. Moreover, several patients at this age or above continue to seek neurosurgical assistance when their TN fails to respond to initial surgery or recurs after transient success 19). Immediate surgical intervention is not possible in elderly patients presenting with severe facial pain. There is growing concern about complications associated with surgery with increasing age. Furthermore, elderly patients find it difficult to wait for surgical treatment 19). Therefore, other treatment approaches, such as percutaneous administration of medicines, are being used, which are relatively easy to apply and more effective. However, these require expert technical skills. Furthermore, several of these are neurodestructive in nature and may be associated with adverse side effects such as sensory loss and dysesthesia 7).
This study investigated the clinical outcome of percutaneous nerve block of the trigeminal nerve in terms of its efficacy and safety in elderly patients with TN in our outpatient clinic.
MATERIALS AND METHODS
All procedures performed in studies involving human participants were in accordance with the ethical standards of the Institutional Research Committee of Kyung Hee University Hospital (approval number KMC IRB 1511-14) and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Patient and study design
This study was conducted from January 2018 to December 2019 on patients older than 65 years who were treated for TN in our hospital. These patients did not show an improvement in symptoms with medication and thus received additional injection therapy in the outpatient facility.
Patients included in the study visited the outpatient clinic every 2 weeks after injection therapy for assessment of the degree of pain. Numeric rating scale (NRS) was used to measure the degree of pain. The patients were maintained on medication even after the injection therapy. No additional injection treatments were administered during the 6 weeks of outcome evaluation.
Intervention
All nerve block procedures were performed with bupivacaine hydrochloride (0.5%, 5 mg/mL) using a 23-gauge needle. In the majority of patients, 1 mL of the drug was administered at each injection site.
In the majority of patients with involvement of the ophthalmic (V1) division of the trigeminal nerve, a supraorbital nerve block was performed. In the case of the maxillary (V2) division, the infraorbital and superior alveolar nerves were blocked, whereas in the case of the mandibular (V3) division, the mental and inferior alveolar nerves were blocked. Nerve blocking was performed according to previously described procedures 6,10,12).
The supraorbital nerve is one of the terminal branches of the trigeminal-ophthalmic nerve. It exits the cranium via an opening above the orbit known as the supraorbital foramen. It is visually identified by asking the patient to look straight ahead and imagining a line that transects the pupils at the level of the orbital ridge. Palpation of this region identifies the injection site for performing this nerve block.
The infraorbital nerve is a branch of the maxillary nerve, which is the second division of the trigeminal nerve. The key injection site for performing an infraorbital nerve block is the infraorbital foramen. The injection site is identified by asking the patient to look straight ahead and imagine a line down from the pupils to the inferior border of the infraorbital ridge, bicuspid teeth, and mental foramen.
The mental nerve is one of the branches of the inferior alveolar nerve. It exits through the mental foramen bilaterally in the mandible. The mental foramen is located halfway between the upper (alveolar crest) and the lower edges of the mandible in direct line with the second bicuspid (premolar). Two techniques are used to perform this nerve block: intraoral and extraoral (percutaneous). We selected the intraoral procedure. The mental foramen was identified by asking the patient to retract the lower lip.
The superior alveolar nerve is divided into three branches, namely the anterior, middle, and posterior superior alveolar nerves. The anterior superior alveolar nerve descends from the infraorbital nerve and innervates the ipsilateral incisors and the canine. The nerve block is performed using the intraoral method. After the upper lip is retracted anteriorly and superiorly, the injection is administered at an angle of 45 degrees at the apex of the canine. The middle superior alveolar nerve is a branch of the infraorbital nerve or the maxillary nerve. This nerve innervates the ipsilateral premolars and the first molar. The injection site is between the apices of the premolar and the first molar, and the injection is administered at an angle of 45 degrees. The posterior superior alveolar nerve is a branch of the maxillary nerve and innervates the ipsilateral molars. The injection site is the root of the upper second molar.
The inferior alveolar nerve is a branch of the mandibular nerve. It is blocked using the intraoral method. After the pterygomandibular triangle is checked, the syringe is held parallel to the occlusal surfaces of the teeth and at an angle such that the barrel lies between the 1st/2nd premolars of the opposite side. The injection is administered between the pterygomandibular raphe and the coronoid notch.
Patients returned home after it was confirmed that a response appeared within 15 minutes of the injection treatment.
RESULTS
A total of 21 patients were included in the study. The mean age was 71.8±4.7 years (range, 65-82). There were nine men and 13 women. The median duration of symptoms was 3 months, ranging from 2 weeks to 9 months. All patients received no treatment other than medication (Table 1).
All patients underwent outpatient follow-up for more than 6 weeks. Pain distribution was V1 in one, V1+2 in two, V2 in 11, V2+3 in five, and V3 in two patients. All patients who received the injection felt a decrease in sensation in the pain area within 15 minutes after the injection.
The mean NRS of patients at the time of the visit was 7.2±0.8 points (range, 6-9). The mean score measured 2 weeks after the procedure was 1.6±1.1 (range, 0-4), showing that the pain reduced by approximately 78%. The mean score measured 1 month after the procedure was 3.14±2.1 (range, 1-7), and that evaluated 6 weeks after the procedure was 5.7±1.6 (range, 2-8). The change in the pain score after injection therapy was statistically significant (p<0.001) (Fig. 1). When patients were divided according to the nerve branch blocked, the majority of them reported effective pain reduction after the procedure (Fig. 2). The average NRS of four patients who received supraorbital nerve block was 8 points before the injection, which reduced to 1 point 2 weeks later. After 4 weeks, the mean NRS was 2 points and increased to 5 points after 6 weeks. The average NRS of seven patients who received infraorbital nerve block was 7 points before the injection; it reduced to 2 points 2 weeks later. After 4 weeks, the mean NRS was 4 points and increased to 5.5 points after 6 weeks. The initial NRS score of five patients who received superior alveolar nerve block was 7 points, which reduced to 1.9 points after 2 weeks. The mean NRS was 2.2 points after 4 weeks and 5 points after 6 weeks. The initial NRS of four patients who received inferior alveolar nerve block was 7 points. It decreased to 1 point after 2 weeks, and the NRS was 3 points after 4 weeks and increased to 8 points after 6 weeks. The initial NRS score of four patients who received the mental nerve block was 9 points. The score decreased dramatically to 0.5 points after 2 weeks; however, the NRS increased dramatically to 7 points after 4 weeks. There was no change after 6 weeks. In the majority of patients, the best effect was observed 2 weeks after the procedure, and the pain score increased thereafter. The duration of pain improvement lasted for more than 4 weeks in 12 patients and 6 weeks in two patients. No complications were reported.
DISCUSSION
Medication is a commonly used treatment for TN. However, in patients refractory to drug treatment, other treatment modalities are explored. Surgery is the commonly used treatment if the cause of the disease is a neurovascular conflict. Microvascular decompression is considered the best surgical treatment for TN owing to its effectiveness and durability 1,16) . Other treatment approaches include radiofrequency ablation of the trigeminal ganglion and gamma knife radiosurgery (GKS) 2,17) .
A systematic review reported a mean postoperative success rate of 83.5% for microvascular decompression 24) . Moreover, the incidence of complications related to the surgery was approximately 1% to 2% 21) . According to a study, acute pain relief was experienced by 97.6% of patients after radiofrequency ablation 14) . Another study reported that 57.7% of patients experienced complete pain relief 5 years after ablation 9) . Marshall et al. 18) reported that the treatment response rate was 86% within 3 months of GKS. Of these, 43% of patients experienced excellent outcomes 18) .
In elderly patients, microvascular decompression can be performed only in those fit for general anesthesia. Moreover, complications following the surgery can leave serious sequelae, so the decision to operate should be made carefully 3) . Radiofrequency ablation requires operator proficiency and is associated with side effects such as facial sensory disorders, e.g., hypoesthesia 11) . In addition, its effect is not permanent, leading to additional treatments or re-treatment. In the case of GKS, a latency period is required before the treatment exerts its effect, and the effect does not last long 17) .
Peripheral blockade of the trigeminal nerve branches has the advantage of an immediate effect that lasts for at least 2 weeks. Furthermore, owing to the ease of the procedure, it can be performed immediately in an outpatient setting. It has few side effects and can be readily applied to elderly patients. In addition, the bupivacaine monotherapy performed in this study has fewer side effects than conventional cocktail therapy, with the advantage that re-injection is possible within a short interval depending on the treatment response.
The major disadvantage of injection therapy is that its duration of action is short. In addition, it is difficult to objectively estimate the duration of the treatment effect on different patients. However, injection therapy immediately exerts the desired therapeutic effect on patients with severe pain in an outpatient setting, enabling continuous administration of medication.
The NRS scores of the patients were reduced by 78% 2 weeks after treatment in the present study. Bupivacaine is commonly reported to have a duration of action of 2-8 hours 13) . Thus, in the current study, the approximately two-week duration of effect cannot be attributed to the injection alone; we attribute it to the concomitant medication. Bupivacaine acts as follows: it binds to the intracellular portion of voltage-gated sodium channels and blocks sodium influx into nerve cells, which prevents depolarization. Without depolarization, no initiation or conduction of a pain signal can occur. Moreover, carbamazepine, the drug most commonly used for TN, is also a sodium channel blocker, so it can be cautiously speculated that the two drugs acted synergistically 5) . In addition, the duration of action may have been extended by additional drug administration and dose increases in patients who were not controlled by conventional drugs.
Drugs other than bupivacaine may be used for injection treatment. Wilkinson 23) performed trigeminal peripheral nerve block using phenol/glycerol and reported pain relief in about 87% of patients within 24 hours; 37% of patients still had relief after a year. However, side effects such as facial palsy, ecchymosis, and facial sensory disorders were also reported. Fernandez et al. 8) reported, from a comparative analysis of bupivacaine and lidocaine, that the duration of action of bupivacaine was about twice as long. Perloff and Chung 20) injected a mixture of 2 mL bupivacaine and 1 mL lidocaine in patients with TN to evaluate the response; all nine patients had pain relief of more than 50% immediately after the procedure, and the effect lasted more than three months in five of them.
This study has certain limitations. First, it was difficult to objectively verify the duration of the treatment effect because the study included only a small number of patients. Second, there was heterogeneity among the study subjects, as the analysis was not stratified according to the drugs administered to patients. Further studies with a larger sample size are required to objectively analyze the duration and effectiveness of injection therapy. | 2020-10-28T13:05:59.379Z | 2020-10-27T00:00:00.000 | {
"year": 2020,
"sha1": "53d7c8e5350288c729050f5118d0955833c81aee",
"oa_license": "CCBYNC",
"oa_url": "https://www.jkns.or.kr/upload/pdf/jkns-2020-0139.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1fef0336689a0f7d1fa21ef5d1785618902c94e9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263786718 | pes2o/s2orc | v3-fos-license | Gas morphology and energetics at the surface of PDRs: new insights with Herschel observations of NGC 7023
We investigate the physics and chemistry of the gas and dust in dense photon-dominated regions (PDRs), along with their dependence on the illuminating UV field. Using Herschel-HIFI observations, we study the gas energetics in NGC 7023 in relation to the morphology of this nebula. NGC 7023 is the prototype of a PDR illuminated by a B2V star and is one of the key targets of Herschel. Our approach consists in determining the energetics of the region by combining the information carried by the mid-IR spectrum (extinction by classical grains, emission from very small dust particles) with that of the main gas coolant lines. In this letter, we discuss more specifically the intensity and line profile of the 158 micron (1901 GHz) [CII] line measured by HIFI and provide information on the emitting gas. We show that both the [CII] emission and the mid-IR emission from polycyclic aromatic hydrocarbons (PAHs) arise from the regions located in the transition zone between atomic and molecular gas. Using the Meudon PDR code and a simple transfer model, we find good agreement between the calculated and observed [CII] intensities. HIFI observations of NGC 7023 provide the opportunity to constrain the energetics at the surface of PDRs. Future work will include analysis of the main coolant line [OI] and use of a new PDR model that includes PAH-related species.
Introduction
One main goal of the Guarantee Time Key Programme "Warm and dense interstellar medium" (WADI) of the HIFI heterodyne spectrometer (de Graauw et al. 2010) onboard Herschel (Pilbratt et al. 2010) is to investigate the physics and chemistry of the gas and dust in dense photon-dominated regions (PDRs), as well as their dependence on the illuminating UV field. As part of this programme, we observed a prototype PDR, NGC 7023. The region is illuminated by the B2Ve star HD 200775 [RA(2000) = 21h01m36.9s; Dec(2000) = +68°09′47.8″], and has been shaped by the star formation process, leading to the formation of a cavity. NGC 7023 has been widely studied at many wavelengths. It has been shown that this region hosts structures at different gas densities: n_H ∼ 100 cm^-3 in the cavity, ∼10^4 cm^-3 in the PDRs located north-west (NW), south (S) and east, and 10^5-10^6 cm^-3 in dense filaments and clumps observed in the mm (Fuente et al. 1996; Gerin et al. 1998 and references therein) and near-IR (Lemaire et al. 1996; Martini et al. 1997).
NGC 7023 has been mapped by the PACS and SPIRE instruments of Herschel to study the emission of large cold grains (Abergel et al. 2010). We present here some observations of the gas at the surface of this nebula, taking advantage of the very high spectral resolution of HIFI (Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA). By combining these observations with previous mid-IR observations, we study the geometry and energetics of the NW and S PDRs.
- On-the-fly (OTF) mapping of the [C ii] 1901 GHz emission line in band 7b. The line was covered by both the WBS and the high-resolution spectrometer (HRS) in USB. Two cuts were performed: a cut from the star to the NW PDR (star-NW cut) and a south-north cut (hereafter S-NW cut) covering the NW PDR, the cavity and the S PDR (∆α = -47″, -85″ < ∆δ < +60″, see Fig. 1). The pixel size after regridding is 6.5″.
All these observations include an OFF reference position in the western lobe of the cavity (∆α = -144″, ∆δ = -47″).
Data were reduced with HIPE 3.0 (Ott 2010) on level-2 data produced with the standard pipeline. For the pointed observations in bands 1a and 3b, manual steps consisted of stitching sub-bands, baseline removal, and correction for main beam efficiency (η_mb = 0.71). The [C ii] WBS spectra required defringing, which was performed with standard HIPE tasks; the best data quality was obtained by subtracting two sinusoidal fringes. To verify the biases introduced by the fringe removal, we compared the WBS and HRS profiles, which showed good agreement in both profile and absolute intensity except for the weakest lines (T_A* ≲ 4 K).
Gas kinematics
Figure 2 (left panel) shows the 13CO 5-4, 13CO 8-7, C18O 5-4, and HCO+ 6-5 lines observed by HIFI towards the H2 peak. All the lines have a central velocity of about 2.2 km s^-1, comparable to previous ground-based observations in several molecular lines (Fuente et al. 1993). Figure 2 (right panel) shows the [C ii] line profiles at the H2 peak and at different positions along the S-NW cut. The [C ii] line is much broader than the molecular lines, and its profile shows a complex multi-component structure. The observations towards the PDRs show that the emission peaks at intermediate velocities (v_lsr ∼ 1.8-2.8 km s^-1), which have already been observed towards the NW PDR in several molecular tracers (see, for example, Fuente et al. 1996). There is also a contribution from higher velocity components (v_lsr ∼ 4 km s^-1), which dominate the emission in the cavity.
Emission from very small dust particles and C+
The 158 µm (1901 GHz) [C ii] and 63 µm [O i] lines are the major coolants of the gas at the surface of PDRs (Hollenbach & Tielens 1999). In these regions, the photoelectric effect dominates the heating, while H2 formation provides a minor contribution. Since the smallest dust particles, polycyclic aromatic hydrocarbons (PAHs) and very small grains (VSGs), contribute a large fraction of this process (Bakes & Tielens 1994; Habart et al. 2001), and these particles emit in the mid-IR most of the energy they absorb in the UV, we expect the mid-IR and [C ii] emissions to arise in the same regions. We used mid-IR spectro-imagery data of the NW PDR of NGC 7023 that were obtained in the 5.5-14.5 µm range with the Infrared Spectrograph onboard Spitzer (Werner et al. 2004). For the S PDR, we used ISOCAM highly-processed data products (Boulanger et al. 1996) from the ISO data archive. To analyse the mid-IR spectra, we followed the method explained in Rapacioli et al. (2005) and Berné et al. (2007), in which the mid-IR emission is decomposed into three aromatic IR band (AIB) spectra whose carriers are neutral PAHs (PAH0), cationic PAHs (PAH+), and evaporating VSGs. The fitting procedure was recently improved by including the convolution of the composite spectrum with extinction (Pilleri et al. 2010). Figure 3 displays the AIB flux, I_AIB, obtained by summing the fluxes of the PAH0, PAH+, and VSG components derived from the fit. It shows that the AIB intensity correlates well with the [C ii] line intensity, strongly suggesting that both emissions arise from the same regions. The fit of the mid-IR emission provides two independent tracers of the total gas column density N(H) along the line of sight, as explained below. (i) Owing to the excitation mechanism, the AIB intensity can be written as I_AIB ∝ N_C^AIB × G_0, where N_C^AIB is the column density of carbon in the AIB carriers and G_0 the UV flux in Habing units (Habing 1968). Assuming that N_C^AIB/N(H) stays constant at the PDR surface, I_AIB can therefore be used as a tracer of N(H) if G_0 is known (Pilleri et al. 2010). (ii) If the column density N(H) is high enough, the effect of extinction by silicates can be seen in the AIB spectrum. In the mid-IR fit, the extinction is derived from a simple correction term, (1 - e^(-τ_λ))/τ_λ, assuming that the emitting and absorbing materials are fully mixed, where τ_λ = N(H) C_ext(λ) is the optical depth along the line of sight. The extinction cross-section per nucleon, C_ext(λ), is taken from Weingartner & Draine (2001) for R_V = 5.5, and N(H) is a free parameter of the fit. Method (ii) is precise for column densities higher than N(H) ∼ 10^22 cm^-2. Method (i) can probe lower column densities but has two limitations. The AIB emission needs to be corrected for the variation in the UV field G_0 to retrieve the value of N(H). This was done assuming that G_0 scales as the inverse squared distance to the illuminating star HD 200775, with a value of G_0 = 2600 at 42″ from this star (Pilleri et al. 2010). We used the projected distance as an estimate of the true distance. This introduces an error that can be especially strong at positions close to the star in the plane of the sky.
Figure 3 shows that the AIB emission stays almost constant at d < 16″; therefore we used this value as the minimum effective distance of the NW PDR to the star. Method (i) also needs to be calibrated, since the local emissivity of the AIB carriers is not known precisely. Our approach was therefore to derive a calibration factor using the values obtained by method (ii) around position 42″ on the star-NW cut. The same calibration factor was used for all positions along the two cuts. Figure 4 shows that the column densities derived along the two cuts correlate quite well with the [C ii] line intensity.
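To make the two estimators concrete, here is a minimal Python sketch of methods (i) and (ii) as described above. The function names are ours; k_cal is the calibration factor fixed in the paper by matching method (i) to method (ii) near the 42″ position, and the numerical cross-section c_ext is an illustrative placeholder rather than the Weingartner & Draine value used in the actual fit:

```python
import numpy as np

# Minimal sketch of the two N(H) estimators described above. k_cal is the
# calibration factor (fixed externally in the paper), and c_ext below is an
# illustrative placeholder for the Weingartner & Draine (2001, R_V = 5.5)
# extinction cross-section actually used in the fit.

G0_REF, D_REF = 2600.0, 42.0        # G0 in Habing units at 42" from HD 200775

def g0_at(d_arcsec, d_min=16.0):
    """UV field assuming G0 scales as the inverse squared (projected)
    distance; distances below ~16" are clipped, since the AIB emission
    stays almost constant there."""
    return G0_REF * (D_REF / max(d_arcsec, d_min)) ** 2

def nh_from_aib(i_aib, d_arcsec, k_cal):
    """Method (i): I_AIB ~ N_C(AIB) x G0 with N_C(AIB)/N(H) constant,
    hence N(H) = k_cal x I_AIB / G0."""
    return k_cal * i_aib / g0_at(d_arcsec)

def aib_attenuation(nh, c_ext=4e-22):
    """Method (ii): for fully mixed emitters and absorbers the AIB spectrum
    is attenuated by (1 - exp(-tau))/tau with tau = N(H) x C_ext(lambda).
    c_ext is in cm^2 per H nucleon (placeholder value)."""
    tau = nh * c_ext
    return (1.0 - np.exp(-tau)) / tau

# The correction only becomes strong around N(H) ~ 1e22 cm^-2, which is why
# method (ii) is precise only above that column density:
for nh in (1e21, 1e22, 1e23):
    print("N(H) = %.0e cm^-2 -> attenuation factor %.2f" % (nh, aib_attenuation(nh)))
```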
Modelling C+ emission
The critical density for the [C ii] 158 µm line is n_crit = 2500 cm^-3 for collisions with H, and therefore the line emissivity depends mainly on temperature for n > n_crit. We selected a few positions on the HIFI S-NW [C ii] cut: three points on the NW PDR and two on the S PDR (cf. Table 2). The values of G_0 were determined as explained in Sect. 2.3, and we assumed a constant average density with two different values: n_H = 2 × 10^4 cm^-3, which is characteristic of the molecular cloud (Gerin et al. 1998), and n_H = 7 × 10^3 cm^-3, which was derived by Rapacioli et al. (2006) in their study of PAH-related species.
Table 2. Summary of the PDR modelling of the [C ii] emission for 5 points along the HIFI S-NW cut.
We used the 1D Meudon PDR code (Le Petit et al. 2006) to compute the gas temperature T at the cloud surface for all the selected positions (cf. Table 2). The values of T are used to calculate the C+ level populations. Line intensities are then derived by integrating along the line of sight (perpendicular to the model results) and by assuming uniform excitation conditions. The thickness of the observed regions leads to an optical depth τ ∼ 1, which implies that transfer effects must be taken into account. If we assume constant excitation conditions and gas properties along the line of sight, then τ and the line intensity can be computed.
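For illustration, here is a minimal two-level sketch of the [C ii] excitation and the uniform-slab transfer correction described above (not the Meudon PDR code itself). The Einstein coefficient, level energy, statistical weights, and photon energy are standard literature values for the 158 µm transition, while n_crit = 2500 cm^-3 is the value quoted in the text:

```python
import numpy as np

# Two-level sketch of the [CII] 158 um emissivity and the uniform-slab
# transfer correction used above. Atomic constants are standard literature
# values; n_crit = 2500 cm^-3 for collisions with H is the text value.

A_UL = 2.3e-6        # s^-1, spontaneous decay 2P3/2 -> 2P1/2
T_STAR = 91.25       # K, Delta E / k of the 158 um (1901 GHz) line
G_U, G_L = 4.0, 2.0  # statistical weights of the upper/lower levels
N_CRIT = 2500.0      # cm^-3 (collisions with H, value quoted in the text)
H_NU = 1.26e-14      # erg, photon energy at 1901 GHz

def upper_fraction(T, n_H):
    """Fraction of C+ ions in the upper level: the LTE Boltzmann ratio
    diluted by radiative decay through the factor 1/(1 + n_crit/n)."""
    ratio = (G_U / G_L) * np.exp(-T_STAR / T) / (1.0 + N_CRIT / n_H)
    return ratio / (1.0 + ratio)

def line_intensity(n_cplus_column, T, n_H, tau):
    """Optically thin emission N(C+) f_u A hnu / 4pi, scaled by the escape
    factor (1 - exp(-tau))/tau for uniform conditions along the line of
    sight (tau ~ 1 in the regions observed here)."""
    thin = n_cplus_column * upper_fraction(T, n_H) * A_UL * H_NU / (4.0 * np.pi)
    return thin * (1.0 - np.exp(-tau)) / tau   # erg s^-1 cm^-2 sr^-1

# Both densities considered above exceed n_crit, so the emissivity depends
# mainly on the surface temperature T, as stated in the text:
for n in (7e3, 2e4):
    print("n_H = %.0e cm^-3 -> upper-level fraction %.2f" % (n, upper_fraction(200.0, n)))
```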
The agreement between calculated and observed flux values is very good when using n_H = 7 × 10^3 cm^-3. In the NW PDR, the ratio is 1.0 for NW3 (16) and NW2 (12), and 1.4 for NW1 (-3). For the S PDR, a value of 2.3 is derived for both positions, suggesting that systematic effects are causing the deviation between observed and calculated values of the [C ii] flux. Several parameters in our model are not precise, but, looking at Table 2, it seems the local [C ii] emissivity is mainly affected by the local density and not by the value of G_0. For N(H), we assumed that the same regions emit in PAHs and [C ii], in agreement with the profiles shown in Fig. 3. There is also an error on N(H) due to our method (cf. Sect. 2.3), but this error is expected to be the same for both PDRs. Dividing N(H) by a factor of two leads to lower values of the ratio of the calculated over the observed [C ii] flux: 0.7-0.8 for the NW PDR and 1.6-1.7 for the S PDR.
One step further in the model would consist of studying the effect of the grain charge on the photoelectric efficiency (Bakes & Tielens 1994). The relative abundances of PAH+, PAH0, and evaporating VSGs vary significantly over the nebula (Fig. 5). Regions in the cavity appear mainly populated by PAH+ (cf. NW1 (-3) in Table 2). Since the ionization potential of PAH+ is much higher than that of PAH0 (∼10 eV compared to ∼6 eV; Malloci et al. 2007), PAH+ should contribute less to the photoelectric heating than PAH0, leading to a decrease in the heating rate, and hence in the gas cooling. In its current version, the PDR code uses classical grains with an MRN distribution (Mathis et al. 1977) and absorption and scattering cross-sections from Laor & Draine (1993). We have used grains of sizes from 15 Å to 3000 Å with a dust-to-gas mass ratio of 1%. As a result, the ionization parameter γ that quantifies the grain charge (cf. Table 2) does not reflect well the variations of the PAH charge observed in Fig. 5. An upgraded version, in which the PDR code is coupled to the code DUSTEM (Compiègne et al. 2010), is under development (Gonzalez et al., to be submitted) and will allow PAHs to be included. NGC 7023 is clearly a template region that could be used for these studies.
Conclusion
By using HIFI and complementary mid-IR data, we have shown that the [C ii] cooling line and the AIB emission arise from the same regions, in the transition zone between atomic and molecular gas. The prototype PDR NGC 7023 was found to be a good object for comparison with PDR models. Further progress on the energetics of this region awaits the coming [O i] data from the PACS instrument and a PDR model that treats the photophysics of PAHs consistently.
Fig. 1.
The NW and S PDRs of NGC 7023 observed by Spitzer-IRAC at 8 µm (red) and 3.6 µm (green). The white circle represents the HIFI beam at 535 GHz (41″) towards the H2 peak. The dotted lines show the cuts studied in the [C ii] emission line at 158 µm with a beam of 11″, whereas white crosses indicate the specific positions reported in Fig. 2 and Table 1. The star position is shown with a black cross.
Fig. 2.
(Left panel) The 13CO 5-4, 13CO 8-7, C18O 5-4, and HCO+ 6-5 lines observed by HIFI towards the H2 peak. (Right panel) The [C ii] line profiles at the H2 peak and at different positions along the S-NW cut.
Fig. 3.
Comparison between the [C ii] 158 µm line flux (solid line) measured with HIFI at a beam size of 11″ and the aromatic IR band (AIB) flux (5.5-14 µm) along the star-NW (a) and S-NW (b) cuts. The error bars for [C ii] are computed at the one-sigma level. The AIB flux is determined from a fit of the mid-IR spectra using the three PAH-related populations shown in Fig. 5; filled diamonds are Spitzer data (1.8″ pix^-1), and open diamonds are ISOCAM data (6″ pix^-1).
Fig. 4.
Comparison between the [C ii] 158 µm line flux (solid line) measured with HIFI at a beam size of 11″ and the column density N(H) along the star-NW (a) and S-NW (b) cuts. N(H) was derived from both the AIB flux (diamonds) and the mid-IR dust extinction (open circles); filled diamonds and open circles are Spitzer data (1.8″ pix^-1), and open diamonds are ISOCAM data (6″ pix^-1).
Notes to Table 2. (a) … from the star. (b) From the PDR model using n_H = 2 × 10^4 / 7 × 10^3 cm^-3, respectively. (c) Given as the ratio of the mid-IR intensities shown in Fig. 5. (d) Derived from the analysis of the mid-IR emission spectra.
Table 1.
Summary of the HIFI data. For [C ii], we report the FWHM of the Gaussian profile of equivalent area and the peak intensity. | 2010-08-07T09:42:45.000Z | 2010-08-07T00:00:00.000 | {
"year": 2010,
"sha1": "1206d6efcf56b66ca2234de8f7f7f86f75a25261",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2010/13/aa15129-10.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "1206d6efcf56b66ca2234de8f7f7f86f75a25261",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
116484732 | pes2o/s2orc | v3-fos-license | Orbital order and magnetism of FeNCN
Based on density functional calculations, we report on the orbital order and microscopic magnetic model of FeNCN, a prototype compound for orbital-only models. Despite having a similar energy scale, the spin and orbital degrees of freedom in FeNCN are only weakly coupled. The ground-state configuration features the doubly occupied d_{3z^2-r^2} (a1g) orbital and four singly-occupied d orbitals resulting in the spin S=2 on the Fe+2 atoms, whereas alternative (Eg') configurations are about 75 meV/f.u. higher in energy. Calculated exchange couplings and band gap are in good agreement with the available experimental data. Experimental effects arising from possible orbital excitations are discussed.
The relationship between spin and orbital degrees of freedom in transition-metal compounds is well established on both phenomenological and microscopic levels. Phenomenologically, the orbital states are described by pseudospin operators and can be treated using diverse techniques developed for spin Hamiltonians [1]. Microscopically, the orbital pattern determines the superexchange couplings that are the main driving force of magnetism in insulators. The opposite effect, the influence of magnetism on the orbital state, is less universal [2] than initially expected [1]. Particularly, recent computational studies of model orbitally-ordered materials [3] indicated the important role of lattice distortions in stabilizing specific orbital states. Although spin and orbital degrees of freedom are inherently tangled, there has been considerable theoretical effort in exploring orbital-only models with no spin variables involved [4][5][6]. In this paper, we will present a compound that features intrinsically weak coupling between spins and orbitals, and may be a feasible experimental probe for such orbital-only models.
As a model compound, we consider the recently discovered iron carbodiimide FeNCN, containing Fe+2 cations that form layers of close-packed FeN6 octahedra in the ab plane [7]. The layers are connected via linear NCN units (Fig. 1). Experimental information on FeNCN is rather scarce. The compound is a colored (dark-red [7] or brown [8]) antiferromagnetic insulator with a Néel temperature of 345 K [7]. The electronic structure of FeNCN was studied by Xiang et al. [8], who arrived at the puzzling conclusion that conventional density functional theory (DFT)+U methods fail dramatically, being unable to reproduce the experimentally observed insulating ground state. We will show that this failure is caused by a subtle effect of competing orbital states. Such orbital states are readily elucidated in a careful DFT-based study, and reveal an unusually weak coupling to the magnetism.
Our DFT calculations are performed with a full-potential code with a local-orbital basis set (FPLO) [9]. We used the experimental crystal structure, the local density approximation (LDA) with the exchange-correlation potential by Perdew and Wang [10], and well-converged k meshes of 1500-2000 points in the symmetry-irreducible parts of the first Brillouin zone. The application of a generalized-gradient-approximation (GGA) exchange-correlation potential led to quantitatively similar results.
The LDA energy spectrum for FeNCN ( Fig. 2) strongly resembles that of iron oxides. Nitrogen 2p states form valence bands below −2 eV, whereas Fe 3d states are found in the vicinity of the Fermi level. The energy spectrum is metallic due to the severe underestimation of electronic correlations in LDA.
The local picture of the electronic structure stems from the crystal-field levels of Fe+2 with the electronic configuration d6. The octahedral local environment induces the conventional splitting of the five d states into t2g and eg levels. This primary effect is accompanied by a weak trigonal distortion that further splits the t2g states into the a1g singlet and the e'g doublet (Fig. 1, bottom). The balance between the a1g and e'g states is determined by fine features of the local environment. According to simple electrostatic arguments, the squeezing of the octahedron along the three-fold axis should slightly favor the a1g state (note the N-Fe-N angles α = 95.9° and β = 84.1° in Fig. 1), which is represented by the d(3z²-r²) orbital in the conventional coordinate system (z along the c axis).
To quantify the orbital energies, we fit the Fe 3d bands with a tight-binding model based on Wannier functions adapted to specific orbital symmetries (Fig. 3). The fit yields ε(a1g) = -0.29 eV, ε(e'g) = -0.30 eV, and ε(eg) = 0.58 eV. The t2g-eg splitting of about 0.9 eV is typical for 3d systems, whereas the energy separation of 10 meV between the a1g and e'g states is very small and opposite to the naive crystal-field picture. The difference may arise from covalency effects and/or long-range interactions inherent to solids (note a similar example in Ref. [11]). Similar to iron oxides, FeNCN is expected to feature the high-spin state of Fe+2 (the non-magnetic low-spin state apparently contradicts the experimental magnetic response reported in [7]). Therefore, in the local picture, five of the six d electrons singly occupy each of the five d orbitals, whereas the sixth electron takes any of the a1g or e'g orbitals, thereby creating orbital degrees of freedom. Since the Mott-insulating state implies integer orbital occupations, the ground state of FeNCN should feature one doubly-occupied and four singly-occupied orbitals. The nature of the doubly occupied orbital is determined by the energy difference between a1g and e'g and, more importantly, by correlation effects. To account for correlation effects in FeNCN, we use the DFT+U method, which treats strong electronic correlations in a mean-field approximation valid for insulators. In contrast to Ref. [8], which reported a half-metallic solution, we readily obtained insulating solutions by explicitly considering orbitally-ordered configurations. The failure of the previous computational work is likely related to spurious solutions arising from random starting configurations. Such solutions are not converged in charge due to the charge-shuffling effect in a (half-)metal. To overcome this problem, we first performed calculations with a fixed occupation matrix (i.e., we fixed the orbital configuration) and later released this matrix to allow for a fully self-consistent procedure. A similar approach has been used in previous computational studies [11,12], and was shown to be vital for the proper treatment of systems with orbital degrees of freedom.
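As a quick numerical check of the level scheme quoted above, the following toy Python sketch uses only the three fitted on-site energies from the Wannier fit:

```python
# Quick numeric check of the crystal-field scheme quoted above, using only
# the three on-site energies from the Wannier/tight-binding fit (in eV).
levels = {"a1g": (-0.29, 1), "e'g": (-0.30, 2), "eg": (0.58, 2)}  # (energy, degeneracy)

# Degeneracy-weighted mean of the t2g manifold (a1g + e'g):
e_t2g = (levels["a1g"][0] * 1 + levels["e'g"][0] * 2) / 3

print("t2g-eg splitting:  %.2f eV" % (levels["eg"][0] - e_t2g))        # ~0.9 eV
print("a1g-e'g splitting: %.0f meV"
      % (abs(levels["a1g"][0] - levels["e'g"][0]) * 1e3))              # 10 meV

# High-spin d6 filling: five electrons singly occupy all five d orbitals
# (S = 2); the sixth enters the nearly degenerate a1g/e'g manifold, which
# is the orbital degree of freedom discussed above.
```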
The input parameters of the DFT+U method, the on-site Coulomb repulsion (U_d) and exchange (J_d), are evaluated in a constrained LDA procedure [13] implemented in the TB-LMTO-ASA code [14]. We find U_d(LMTO) = 6.9 eV and J_d = 0.9 eV, whereas a comparative calculation for FeO yields similar values of U_d(LMTO) = 7.1 eV and J_d = 0.9 eV. The magnitude of electronic correlations in FeNCN is, therefore, the same as in Fe+2 oxides. By contrast, a somewhat reduced U_d parameter was found in CuNCN (6.6 eV vs. 9-10 eV in Cu+2 oxides) and ascribed to sizable hybridization between the Cu 3d and N 2p states [15]. In FeNCN, such hybridization is rather weak (Fig. 2).
Since the two Fe sites in the unit cell of FeNCN are crystallographically equivalent, we restrict ourselves to ferro-type orbital configurations featuring the same doubly occupied orbital on both Fe sites. Starting from different occupation matrices, we were able to stabilize several solutions [16]. The ground-state configuration features the doubly occupied a1g orbital and is further referred to as A1g. The E'g configurations, with two electrons on either of the e'g orbitals, lie higher in energy by about 75 meV/f.u. This energy difference is nearly independent of the specific U_d value. It is also possible to put two electrons on one of the eg orbitals, but such configurations are highly unfavorable (0.81 eV/f.u. above the ground state), in agreement with the large t2g-eg splitting of 0.9 eV in LDA. The artificial low-spin configuration with all six electrons on the t2g orbitals has an even higher energy, 3.6 eV/f.u. above the A1g ground state. The lowest-energy spin configurations are antiferromagnetic, irrespective of the A1g or E'g orbital state. The energy spectra are quite similar [17], although the band gap (E_gap) for the A1g orbital configuration is systematically higher than for any of the E'g configurations: for example, at U_d = 7 eV, E_gap = 2.95 eV and 2.30-2.50 eV for A1g and E'g, respectively. A change in U_d causes a systematic shift of the band gaps [18]. While the lack of experimental optical data prevents us from tuning the U_d parameter against the experimental band gap, we note that the calculated E_gap values agree well with the dark-red color of FeNCN.
We now investigate the interplay between spin and orbital degrees of freedom in FeNCN. To evaluate the magnetic couplings, we doubled the unit cell in the ab plane and calculated total energies for several spin configurations. These total energies were further mapped onto the classical Heisenberg model, yielding the individual exchange integrals J_i (a minimal sketch of this mapping is given below). We evaluated the nearest-neighbor coupling J_ab in the ab plane, as well as the nearest-neighbor and next-nearest-neighbor interplane couplings J_c and J'_c, respectively (Fig. 1). Further couplings are expected to be weak due to negligible long-range hoppings in our tight-binding model. Surprisingly, the calculated exchange couplings listed in Table I depend only weakly on the orbital order. Both A1g and E'g orbital configurations induce the leading antiferromagnetic (AFM) exchange J_c via the NCN groups. The intraplane coupling J_ab is ferromagnetic (FM), whereas J'_c is AFM. The resulting spin lattice is non-frustrated and features AFM long-range order with parallel spins in the ab plane and antiparallel spins in neighboring planes. This prediction awaits verification by a neutron scattering experiment.
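To illustrate the mapping procedure, the following minimal sketch solves for (E0, J_ab, J_c, J'_c) by least squares from a set of collinear spin configurations. The bond-count matrix and the configuration energies are illustrative placeholders (the real input would be the DFT+U supercell energies and the actual FeNCN bond counts), shown only to make the structure of the mapping explicit:

```python
import numpy as np

# Minimal sketch of the total-energy mapping onto the classical Heisenberg
# model E = E0 + sum_<ij> J_ij S_i.S_j with S = 2 on Fe+2. Each collinear
# configuration contributes +-S^2 per bond (+ parallel, - antiparallel).
# CAUTION: both the bond-count matrix and the configuration energies below
# are illustrative placeholders, not the actual FeNCN supercell data.

S = 2.0
# Columns: constant E0, then signed bond counts per f.u. for J_ab, J_c, J'_c
counts = np.array([
    [1.0,  3.0,  1.0,  6.0],   # all spins parallel (FM reference)
    [1.0,  3.0, -1.0, -6.0],   # FM planes stacked antiferromagnetically
    [1.0, -1.0,  1.0, -2.0],   # in-plane stripe AFM pattern
    [1.0, -1.0, -1.0,  2.0],   # mixed pattern
])
A = counts * np.array([1.0, S**2, S**2, S**2])
E = np.array([0.0, -41.0, 13.0, 9.0])      # meV/f.u., placeholder energies

coeffs, *_ = np.linalg.lstsq(A, E, rcond=None)
E0, J_ab, J_c, J_cp = coeffs
for name, J in (("J_ab", J_ab), ("J_c", J_c), ("J'_c", J_cp)):
    print(f"{name} = {J:+.2f} meV = {J * 11.605:+.1f} K")  # J > 0 is AFM
```

With these placeholder energies the solve returns a ferromagnetic J_ab and antiferromagnetic J_c and J'_c, matching the sign convention of the paper (J > 0 antiferromagnetic).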
The three-dimensional magnetism of FeNCN is somewhat unexpected considering the seemingly layered nature of the crystal structure (Fig. 1). The leading exchange is antiferromagnetic and runs between the layers, although the nearest-neighbor interlayer distance of 4.70Å is much longer than the intralayer distance of 3.27Å. The unusually strong interlayer exchange originates from the peculiar nature of the NCN units that feature a strong π-bonding and mediate hoppings between the neighboring layers. This effect has been illustrated by sizable contributions of both nearest-neighbor and second-neighbor nitrogen atoms to Wannier functions in CuNCN [15]. Similar contributions are found for the e g orbitals in FeNCN.
The intralayer interaction is a conventional combination of the direct exchange and Fe-N-Fe superexchange that result in a weakly ferromagnetic coupling (the Fe-N-Fe angles are 95.9 • , i.e., close to 90 • ). The spin lattice of FeNCN reminds of another transition-metal carbodiimide, CuNCN, where the long-range antiferromagnetic superexchange mediated by the NCN groups was also reported [15]. The difference between the two compounds is the strong Jahn-Teller distortion in CuNCN that splits the close-packed layers of transition-metal octahedra into structural chains running along a, with the ferromag-netic coupling resembling J ab in FeNCN. By contrast, FeNCN is not subjected to a Jahn-Teller distortion, and the ground-state orbital configuration is largely stabilized by electronic correlations. Indeed, there is no clear preference for a certain orbital state on the LDA level.
According to our results, the energy scales for the spin and orbital degrees of freedom in FeNCN are comparable, about 52 meV/f.u. and 75 meV/f.u., respectively. However, magnetic couplings weakly depend on the orbital configuration keeping spins and orbitals nearly decoupled. This effect is explained by different d states responsible for the orbital and magnetic effects. The orbital degrees of freedom are operative in the t 2g subspace, whereas magnetic couplings are largely determined by the e g orbitals featuring stronger intersite hoppings. The decoupling of spin and orbital variables along with the low energy scale of the competing orbital states suggest that the orbital-only physics can be probed in FeNCN.
The characteristic energy scale of 75 meV/f.u. corresponds to temperatures around 850 K, and implies that the E'g orbital states may emerge at elevated temperatures. Although FeNCN, like all transition-metal carbodiimides, is thermodynamically unstable [23], it can be maintained up to at least 680 K, which is the preparation temperature reported in Ref. [7]. Other options for activating the E'g orbital states of FeNCN are the application of pressure and laser irradiation. The latter has been successfully used for melting the orbital order in several prototype orbital systems [24] and could be applied to FeNCN as well. If switching of the orbital state is possible, the anticipated effect is a sizable reduction in the band gap (by at least 0.4 eV), while the structure will probably adjust to the new orbital state. However, it is more likely that several competing E'g states will form an orbital liquid, thereby maintaining the high symmetry of the crystal structure. To probe such effects, further experimental work on FeNCN is highly desirable. We also mention the isostructural compound CoNCN [25], where orbital degrees of freedom arising from Co+2 (d7) are expected.
In summary, we have shown that FeNCN presents an unusual example of weakly coupled spin and orbital degrees of freedom acting on a similar energy scale. The ground-state orbital configuration features two electrons on the a 1g orbital, in agreement with the simple electrostatic arguments, but in contrast to the LDA-based expectations. The calculated properties, such as the band gap, exchange couplings, and Néel temperature, are in very good agreement with the experiment. We have also remedied the failure of the recent computational work [8] and confirmed the remarkable performance of DFT+U techniques applied to Mott insulators with orbital degrees of freedom.
We are grateful to Deepa Kasinathan and Oleg Janson for fruitful discussions. We also acknowledge Richard Dronskowski and Andrey Tchougréeff for drawing our attention to FeNCN. A.T. was funded by the Alexander von Humboldt Foundation. | 2011-06-18T15:56:01.000Z | 2011-06-18T00:00:00.000 | {
"year": 2011,
"sha1": "78e918786d627e7eea49a594f87db3a1e8da6b6a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "78e918786d627e7eea49a594f87db3a1e8da6b6a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
198381968 | pes2o/s2orc | v3-fos-license | Hematemesis on Hepatic Cirrhosis Patients in Area with Limited Facilities
Esophageal variceal haemorrhage is one of the more dangerous complications of hepatic cirrhosis. Initial treatment can determine patient mortality and morbidity, but not all hospitals have adequate facilities and medicines to manage it. A 53-year-old woman presented with a diagnosis of ascites and hematemesis. Initial examination revealed hypotension. Laboratory results showed anemia, thrombocytopenia, and leukocytosis. The next morning the patient's condition worsened and she was transferred to the ICU. In the ICU, the patient received 10 lpm oxygen support, cefobactam, pantoprazole, PRC transfusion, and dopamine. The patient regained consciousness on the 5th day of treatment. She then started receiving diuretics on day 6 and propranolol on day 9. On day 13 the patient's condition had improved, and she was discharged for outpatient treatment. The limited availability of endoscopy means that not every hospital in Indonesia can perform emergency therapeutic endoscopy. However, with rapid and appropriate pharmacological therapy, patients can be kept from progressing to death, and recurrent bleeding can be prevented.
Introduction
Esophageal variceal haemorrhage is one of the more dangerous complications of hepatic cirrhosis. In hepatic cirrhosis, the hepatic architecture is altered as necrotic liver cells are replaced by regenerative nodules (1). This change increases blood flow and resistance in the portal vein. Portal hypertension leads to dilatation of blood vessels, especially those arising from the azygos vein, which then causes varices in the gastrointestinal tract (2).
Esophageal varices are closely related to the severity of hepatic cirrhosis. Approximately 30% of patients newly diagnosed with hepatic cirrhosis already have varices, and this increases to 90% after 10 years (3). Patients with severe liver disease (Child-Pugh C) have a greater risk of esophageal variceal haemorrhage than patients with milder liver disease (Child-Pugh A and B) (4).
Approximately 30%, or one third, of patients with esophageal varices will experience bleeding within 1 year after diagnosis (5). Mortality within six weeks of bleeding is about 15-20%, ranging from 0% in patients with Child-Pugh grade A to about 30% in patients with grade C (2). If untreated, the mortality of esophageal variceal bleeding is 20-60%, but with treatment the mortality decreases to 20% (4).
The initial treatment of bleeding in the gastrointestinal tract can determine the patient's mortality and morbidity, so it should be handled rapidly and appropriately. Not all hospitals have adequate facilities and medicines to deal with it; we therefore wish to share a case report of the management of severe hematemesis in a setting with limited facilities.
Case Illustration
A fifty-three-year-old woman was referred from Puskesmas (Pusat Kesehatan Masyarakat/Community Health Center) Buladu to M.M. Dunda Hospital (RS M.M. Dunda), Gorontalo District, with a diagnosis of ascites and hematemesis. Swelling of the entire body had been felt for a long time, first in the legs and abdomen. The abdomen felt larger and tense but did not cause breathlessness. The patient had experienced nausea, decreased appetite, fever, and dizziness for the past 5 days. Vomiting of blood and black stools had occurred for 2 days, with vomiting more than 5 times daily, each episode approximately one glass (around 250 ml). The patient felt lethargic, was unable to stand, and her consciousness was becoming apathetic. Histories of diabetes mellitus and hypertension were denied, as was a history of taking painkillers.
On initial physical examination, the GCS was E4M6V5 (compos mentis), with hypotension (90/60 mmHg) and a normal pulse of 88 x/min. The conjunctivae were anemic, the abdomen was enlarged with positive undulation and shifting dullness tests, coarse rhonchi were heard at both lung bases, and there was anasarca edema. Laboratory tests showed very low hemoglobin, accompanied by leukocytosis and thrombocytopenia (Table 1). The patient was diagnosed with hematemesis and melena et causa esophageal varices (differential diagnosis: erosive gastritis) and acute kidney failure. Initial therapy consisted of a maintenance infusion of 0.9% NaCl, chrome, tranexamic acid, omeprazole, and sucralfate syrup.
Table 1. Results of laboratory examination
The patient's condition then worsened: the next morning the GCS had decreased to E1M3V1, with a blood pressure of 100/50. After an NGT was inserted, 750 cc of black blood was obtained. The patient was placed on a non-rebreathing mask at 10 lpm, the antibiotic cefobactam 2 x 1 g was started, omeprazole was replaced with pantoprazole, the patient was kept fasting, a PRC transfusion was given, dopamine was administered by syringe pump, and the patient was transferred to the ICU.
Chest radiography showed cardiomegaly accompanied by signs of pulmonary congestion and minimal effusion in the pleural sinuses. Abdominal ultrasound showed hepatic cirrhosis with massive ascites and splenomegaly, consistent with portal hypertension (Fig. 1).
On day 5 of treatment, the patient was conscious (GCS E4M6V5) with a blood pressure of 110/60, and the dopamine began to be tapered. On day 6, the patient's blood pressure was stable at 120/80 (still on dopamine), and the patient was given furosemide 1-1-0 and spironolactone 1 x 25 mg. On day 8, the antibiotic and chrome were discontinued, and the furosemide dose was raised to 2-1-0. On day 9, the patient was given propranolol 3 x 10 mg. On day 13, the patient's condition had stabilized: compos mentis, blood pressure 110/60, pulse 75 x/min, respiration 24 x/min, and temperature 36°C. Laboratory examination showed improvement in hemoglobin, leukocyte, and platelet counts (Table 1). The patient was then discharged for outpatient care.
Discussion
The first step in treating upper GI bleeding is to identify the source of bleeding, whether variceal or non-variceal. Variceal hemorrhage is generally accompanied by signs of hepatic failure and portal vein hypertension such as ascites, gynecomastia, spider nevi, and palmar erythema (6). Next, the degree of bleeding is evaluated, two IV lines are inserted, an ICU bed is prepared, and intubation and central venous access (CVC) are arranged. Fluid resuscitation is performed with a blood pressure target of >80 mmHg, and transfusion with a target Hb of >8 g/dl (7).
Strict monitoring is required, because excessive fluid/transfusion in patients with variceal hemorrhage may aggravate bleeding through the increase in intravascular volume, and may result in complications of pulmonary edema and ascites after hemostasis is achieved (8). This patient was initially given only maintenance fluids because the blood pressure target had been reached at 90/60, but the hemoglobin on admission was severely low at 2.5, and a maximum of only 2 bags of emergency (cito) blood per day was available.
Splanchnic vasoconstrictor therapy, such as terlipressin or somatostatin, should be given. Terlipressin modifies the hemodynamic system by decreasing cardiac output and increasing arterial blood pressure and systemic vascular resistance. When variceal bleeding is suspected, the patient is given a dose of 2 mg/h for the first 48 hours, continued for up to 5 days; the dose is then decreased to 1 mg/h, or discontinued 12-24 h after bleeding has stopped (4). Somatostatin is given as a 250 μg loading dose and then continued in 5% dextrose at 250 μg/h (6 mg/day) (8). Antibiotic prophylaxis is given to prevent infections such as spontaneous bacterial peritonitis. The antibiotics of choice, ceftriaxone or a quinolone, may be administered orally or IV for 7 days (7). In this patient, owing to the limited drugs available, no vasopressin analogue was given, while the antibiotic given was cefobactam 2 x 1 g.
Endoscopic therapy is the primary choice in the treatment of variceal bleeding and should be performed within 12 hours of bleeding. Endoscopy can identify the location of the varices, whether esophageal or gastric, and the choice of therapy may be ligation or sclerotherapy (7) (figure 2). In this case, the hospital did have an endoscope, but only as a diagnostic tool, so endoscopy was planned only once the patient's vital signs were stable. Once the acute episode has resolved and the blood pressure is stable, the patient may be given a non-selective β-adrenergic blocker in combination with isosorbide mononitrate. These drugs are given orally as long-term therapy to prevent recurrence, acting by decreasing cardiac output and producing splanchnic vasoconstriction (9). The combination of propranolol and ISMN may decrease the incidence of recurrent bleeding by 19-49%, while a combination of sclerotherapy and propranolol may do so by 30-42% (9, 10) (table 2).
The nonselective β-adrenergic drug of choice may be propranolol or nadolol. Propranolol can be started at 20 mg orally twice a day, and nadolol at 40 mg orally once daily. The dose is increased to the maximum tolerable dose or until the pulse rate reaches 55 x/min. Endoscopic variceal ligation can be performed every 2-4 weeks; obliteration usually occurs after 2-4 sessions. The first surveillance is done 1-3 months after obliteration and then every 6-12 months. ISMN is given at an initial dose of 10 mg orally at night, increased to a maximum dose of 2 x 20 mg daily, provided the blood pressure remains >95 mmHg (10)
Conclusion
Esophageal varices are dilated submucosal veins in the esophagus; they occur in patients with portal hypertension and may cause serious upper GI bleeding. All patients with variceal bleeding require endoscopic examination, which is the gold standard for diagnosis, assessment of the varices, and management based on the underlying disease.
The limited availability of expensive endoscopic devices and the scarcity of trained operators mean that not every hospital in Indonesia can perform emergency endoscopy. However, rapid and appropriate pharmacological therapy can help keep patients from progressing to death and prevent recurrent bleeding. | 2019-03-18T14:03:34.603Z | 2018-05-31T00:00:00.000 | {
"year": 2018,
"sha1": "5dac1f8c68d7ca399de1c6e795e23893fa70021e",
"oa_license": "CCBYSA",
"oa_url": "https://e-journal.unair.ac.id/BHSJ/article/download/8243/4926",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3c9120f8d64e7f7bb7892c6475cf3d010b8f0071",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12161442 | pes2o/s2orc | v3-fos-license | A hundred years of Dunaliella research: 1905–2005
A hundred years have passed since the description of the genus Dunaliella, the unicellular green alga which is responsible for most of the primary production in hypersaline environments worldwide. The present paper provides an historical survey of research on Dunaliella, from the early work in the 19th century to the thorough taxonomic studies by Teodoresco, Hamburger, Lerche and others from the beginnig of the 20th century onwards. It attempts to trace the origin of some of the most important breakthroughs that have contributed to our present understanding of this alga that plays such a key role in many hypersaline environments.
Introduction
A hundred years have passed since the description of the genus Dunaliella, the unicellular green alga which is responsible for most of the primary production in hypersaline environments worldwide. First sighted in 1838 in saltern evaporation ponds in the south of France by Michel Felix Dunal [1], it was named after its discoverer by Teodoresco in 1905 [2].
In the century that has elapsed since its formal description, Dunaliella has become a convenient model organism for the study of salt adaptation in algae. The establishment of the concept of organic compatible solutes to provide osmotic balance was largely based on the study of Dunaliella species. Moreover, the massive accumulation of β-carotene by some strains under suitable growth conditions has led to interesting biotechnological applications.
The present paper provides an historical survey of research on Dunaliella, from the early work in the 19 th century to the thorough taxonomic studies by Teodoresco, Hamburger, Lerche and others from the beginning of the 20 th century onwards. It attempts to trace -often through quotations from the original articles -the origin of some of the most important breakthroughs that have contributed to our present understanding of this alga that plays such a key role in many hypersaline environments.
Extensive additional information on the alga can be found in a review by Ginzburg [3], in the multi-author review edited by Avron and Ben-Amotz [4], and in my monograph on halophilic microorganisms and their environments [5].
Reports on Dunaliella Prior to 1905
The first description of a unicellular biflagellate red-colored alga living in concentrated brines (Fig. 1) was given in 1838 by Dunal [1], who reported the occurrence of the organism we know today as Dunaliella salina in the salterns of Montpellier, on the Mediterranean coast of France. He named the organisms observed Haematococcus salinus and Protococcus. The discovery of these algae was made in the framework of an investigation, invited by the Académie des Sciences, Paris, of the cause of the red coloration of saltern brines. At the time it was widely assumed that chemical and physical parameters are responsible for the coloration of these brines. Dunal refuted an earlier claim that the color is due to the brine shrimp Artemia salina. The Académie then appointed a committee to reexamine the matter, and this committee confirmed Dunal's finding [6]; see also [7]. Another idea brought forward during that period was that Artemia contributes to the color due to the partially digested and decaying red flagellates ("Monas Dunalii") present in its intestine [8]. Nowadays it is clear that, although β-carotene-rich Dunaliella salina are indeed present in the saltern ponds, most of the coloration of the crystallizer brine is caused not by the algae but by red halophilic Archaea instead [9,10].
Figure 1
Dunaliella salina cells from the crystallizer brine of the salterns in Eilat, at the Red Sea coast of Israel.
The Description of the Genus Dunaliella
The year 1905 saw the publication of two papers presenting in-depth descriptions of Dunaliella as a new genus, one by E.C. Teodoresco from Bucharest [2] and the second written by Clara Hamburger from Heidelberg [7]. Teodoresco's publication preceded that by Hamburger, who only learned about the Teodoresco paper when finalizing the writing of her own article [7]: [… I received from Prof. Lauterborn a paper by Teodoresco entitled "Organization and development of Dunaliella, a new genus of the Volvocida - Polyblepharidae", which had just been sent as an offprint from the Botanisches Centralblatt. Dunaliella is the organism that I had been investigating, and that I had already recognized as representative of a new genus. Our results corresponded in many respects, while in other respects I am of the opinion that further investigations will have to decide. However, because my studies, especially with respect to the internal structure, are more thorough (Teodoresco had studied only live material) and I also can fill in certain still existing gaps in the knowledge, and also because my results were obtained independently of those of Teodoresco, I still would like to publish them.] Teodoresco studied material collected from a Romanian salt lake, while Hamburger worked with samples sent to her from the salterns of Cagliari, Sardinia. Both authors presented detailed drawings of the organisms (Fig. 2 and 3) and provided extensive information on their morphology, cell structure, reproduction, behavior and ecology. A formal description of the genus Dunaliella, named in honor of Dunal who had first seen these organisms in salterns in France almost seventy years earlier, and of the first two species within the genus, D. salina and D. viridis, was published in 1906 [20].
The papers by Teodoresco and Hamburger were soon followed by others. Noteworthy studies in the early years of Dunaliella research are articles by Cavara [21], who extended the study of the organism in the Cagliari, Sardinia salterns, a study of the algae in the Salton Sea, California [22], a series of ecological papers by Labbé based on observations made in the salterns of Le Croisic on the Atlantic coast of France [23][24][25], articles by Baas Becking and coworkers, who collected specimens from all over the world [26][27][28], and the taxonomic studies by Hamel [29] and Lerche [30].
The Taxonomy of the Genus Dunaliella
Dunaliella is a genus of unicellular algae belonging to the family Polyblepharidaceae. Its cells lack a rigid cell wall, and they reproduce by longitudinal division of the motile cell or by fusion of two motile cells to form a zygote.
Teodoresco [2,20] described two species: D. salina and D. viridis. D. salina has somewhat larger cells, and under suitable conditions it synthesizes massive amounts of carotenoid pigments, coloring the cells brightly red. D. viridis never produces such red cells. It is interesting to note that in the early years there were extensive discussions about whether two species are indeed present or whether the red and the green cells represent different forms of the same species. For example, Blanchard [13] and Hamburger [7] considered the green cells as juvenile stages of the red ones. Labbé [23,24] was of the opinion that differences in the salt concentration of the environment are responsible for the different colors of the cells. Upon transferring saltern brine samples to a lower salinity he grew a form of Dunaliella adapted to fresh water and lacking the brown-red pigment. His statements — e.g., "as concerns the factors of the transformation, the simplistic hypothesis of Teodoresco cannot be retained, and these are not two distinct species (D. …)" (translated from the French) — have not withstood the test of time. We now know that not all Dunaliella species produce massive amounts of carotene, and those that can do so only under suitable conditions (exposure to high light intensities, nutrient limitation, etc.; see also section 6 below). Lerche [30] thus saw that under suitable conditions all red clones became green, but after several weeks they turned olive to yellow-green, and after several months they were red again.
Figure 2
Drawings by Hamburger (1905) of red cells (Dunaliella salina) (1-4) and green cells (D. viridis) (5-8), diverse shapes observed in a drop that becomes more concentrated by evaporation, spherical forms obtained upon dilution (30-31), and initiation of cell division (32-34). From [7].
Additional species were later added to the genus, especially thanks to the in-depth studies by Lerche [30] and Butcher [31] ( Table 2). Lerche investigated material collected from salt lakes in Romania and in California, as well as the above-mentioned Cagliari salterns. She concluded that the former species D. viridis is heterogeneous and should be split into several new species. Thus the species D. media, D. euchlora, D. minuta, and D. parva were created. It must be stressed here that not all species mentioned tolerate the extremely high salt concentrations in which D. salina and D. viridis are found in nature. Some are typically marine organisms that were never reported to occur in hypersaline environments.
An in-depth taxonomic treatment of the genus was given in Massyuk's 1973 monograph [32]. She divided the genus into two subgenera, Dunaliella (23 species) and Pascheria (5 species), the latter consisting of freshwater species only. Some of the species recognized by Massyuk may eventually be found to be polymorphic forms of a single taxon [33].
A species of considerable interest is Dunaliella acidophila, isolated from acidic waters and soils in the Czech Republic and in Italy [34,35]. This is not a true halophile but an acidophilic alga that grows optimally at pH values between 0.5 and 2. In recent years it has become a popular research object for the study of adaptation of life to low pH environments [36]. Its taxonomic/phylogenetic affiliation with the halophilic Dunaliella species has to my knowledge never been verified.
Molecular phylogeny techniques have been applied to the taxonomic study of Dunaliella from 1999 onwards. These studies have encompassed the 18S rRNA genes and the internal transcribed spacer regions, and have been based on gene sequence comparisons as well as on restriction fragment length polymorphism studies. Little correlation was found between the molecular data and the morphological-physiological attributes used in older studies to delineate species within the genus [37,38]. On the basis of 18S rRNA gene sequences, Olmos et al. [39] could differentiate between D. salina, D. parva and D. bardawil as species containing one, two and three introns, respectively, within the 18S rRNA gene. The molecular studies have made it clear that many culture collection strains are probably misnamed, and that some unnecessary species names may have been proposed in the past.
Life Stages and Sexual Reproduction in Dunaliella
Dunaliella salina and some of the other species undergo complex life cycles that encompass, in addition to division of motile vegetative cells, the possibility of sexual reproduction. Fusion of two equally sized gametes to form a zygote was documented in many of the early studies [7,20,29]. We owe the most detailed study of sexual reproduction in Dunaliella to Lerche [30], who reported sexual zygote formation in five of the six species studied (D. salina, D. parva, D. peircei, D. euchlora, and D. minuta). She reported zygote formation in D. salina to be induced by a reduction in salt concentration from 10 to 3%. In the process first the flagella touch, and then the gametes form a cytoplasmic bridge and fuse. The zygote has a thick outer layer. It can withstand exposure to freshwater and also survive prolonged periods of dryness. These zygotes germinate with the release of up to 32 haploid daughter cells through a tear in the cell envelope [30]. It is quite possible that the cyst-like structures observed by Oren et al. [40] at the end of a bloom of green Dunaliella cells in the Dead Sea in 1992 were actually such zygotes. In this case, however, the formation of these rounded, thick-walled cells took place at a time of an increase in water salinity. Lerche [30] performed a series of elegant experiments in which carotenoid-rich red cells were crossed with green cells, enabling the investigator to follow the fusion of the two parent cells to form a zygote. A few of her drawings to illustrate the process are reproduced in Fig. 4. The possibility of formation of asexual resting cysts by D. salina was indicated by Hamburger [7], a finding that was disputed by Lerche. However, more recently, Loeblich [41] has reported formation of such cysts in media of reduced salinity (for a discussion see also [42]).
Figure 4
Aggregation of the red and the green form of Dunaliella salina (upper part) and zygote formation of D. salina (green and red form) (lower part). From [30].
Some Dunaliella species can also develop a vegetative palmelloid stage consisting of round non-motile cells.
Lerche [30] has documented this phenomenon in D. salina cultures at lowered salinities, and Brock [43] observed such palmelloid forms of Dunaliella in benthic algal mats of Great Salt Lake, Utah.
Carotenoid Pigments of Dunaliella
The pigment responsible for the brightly red coloration displayed by D. salina, often designated in the older literature as "hematochrome", was recognized very early as a carotenoid. As such it was identified by Blanchard [13], and Teodoresco [20], Lerche [30] and Ruinen [44] confirmed this identification, based on the solubility of the pigment in alcohol and in ether and on the blue color formed in the presence of concentrated sulfuric acid.
Before the modern electron microscope revealed the β-carotene as granules between the thylakoids of the cell's single chloroplast, considerable differences of opinion existed regarding the intracellular location of this red carotenoid pigment. Thus, both Teodoresco [2,20] and Labbé [23] stated that the red pigment was distributed all over the cells' cytoplasm. Relating to a different claim by Hamburger [7], Teodoresco [20] wrote: "je n'hésite pas à croire que ce pigment imprègne tout le corps des zoospores, excepté, bien entendu, l'extrémité antérieure, à l'endroit de l'insertion des flagellums." [I don't hesitate to believe that the pigment impregnates the whole body of the zoospores, except, of course, the extreme anterior part, at the place where the flagella are inserted.] Likewise, Hamel [29] claimed that at elevated salt concentrations, D. salina forms "hematochrome" that penetrates not only the "chromophore" (= chloroplast) but the entire cytoplasm as well. On the other hand, Hamburger believed the red pigment to be located as small droplets (which is true, see e.g. [45,46]), but she was mistaken about the location of the pigment: "Er tritt in Form kleiner Tröpfchen auf, und ist, wie mir sicher scheint, nur der äußeren Alveolarschicht des Plasmas eingelagert, während das Chromatophor Träger des grünen Farbstoffes ist. Die Bemerkung Teodoresco's "hématochrome imprégnant non seulement le chromatophore, mais encore tout le corps des individus âgés", stimmt mit meinen Beobachtungen nicht überein." [It occurs in the form of small droplets, and is, as seems sure to me, only deposited in the outer alveolar layer of the plasma, while the chromatophore is the bearer of the green pigment. The remark by Teodoresco that "the hematochrome impregnates not only the chromatophore, but also the whole body of adult individuals" does not correspond with my observations.] Baas Becking [27] correctly located the red-orange pigment in the chloroplast, and Lerche [30] realized that the carotene masks the chlorophyll, so that the chloroplast can assume all shades from orange-red to yellow-green, olive and green: "Der rote Farbstoff ist in Form öliger Tröpfchen zwischen den Waben des Chloroplasten eingelagert und nicht wie Hamburger (1905) annimmt, in der äußeren Alveolarschicht des Protoplasmas." [The red pigment is located in the form of oily droplets between the honeycomb structure of the chloroplast and not, as Hamburger (1905) assumes, in the outer cytoplasmic layer of the protoplast.]
β-Carotene, the major carotenoid accumulated by D. salina and D. bardawil, is a valuable chemical, in high demand as a natural food coloring agent, as pro-vitamin A (retinol), as an additive to cosmetics, and as a health food [47]. Some Dunaliella strains can accumulate very large amounts of this carotenoid. Thus, as much as 13.8% of the total dry organic matter in the D. salina community in Pink Lake, Victoria, Australia, was estimated to be β-carotene [48]. Also in culture some strains may contain up to 10% and more of β-carotene in their dry weight, including a large percentage of the 9-cis isomer [46]. Therefore the biotechnological potential of Dunaliella as a source of β-carotene was investigated relatively early. The first pilot plant for Dunaliella cultivation for β-carotene production was established in the USSR in 1966 [49,50]. The commercial cultivation of Dunaliella for the production of β-carotene throughout the world is now one of the success stories of halophile biotechnology [51][52][53]. Different technologies are used, from low-tech extensive cultivation in lagoons to intensive cultivation at high cell densities under carefully controlled conditions [54].
One of the methods used in such biotechnological operations to induce massive carotenoid accumulation is reduction of the growth rate by deprivation of nutrients. That a high carotenoid content of the cells may be caused by nutrient limitation as well as by high light intensities was already reported by Lerche [30]: [As the red coloration occurred especially in old cultures, it was reasonable to assume a correlation with the nutritional conditions, and in particular with the lack of one or more compounds. As phosphorus and nitrogen are often the substances present in limiting amounts in plant nutrition, we directed our attention first of all to these substances.]
Population Dynamics of Dunaliella in Salt Lakes and Salterns
Only a few studies have been devoted to the quantitative evaluation of Dunaliella populations in salt lakes and salterns, the dynamics of their appearance and decline, and their contribution to the primary production in their habitats. Stephens and Gillespie (1976) reported measurements of the primary production in the south arm of Great Salt Lake, Utah, performed in 1973 (salinity around 135 g/l). Post [56] reported that in the cold season, round cyst-like cells of D. salina increased in numbers in the Great Salt Lake, especially on the lake's bottom. In the Dead Sea, green Dunaliella cells have been reported since the 1940s [57]. The first quantitative estimates of the Dunaliella population in the lake were made in 1964, and showed very high numbers: up to 4 × 10^4 cells per ml of surface water (sampling season not specified) [58]. Systematic monitoring of the population density at different seasons and depths in the Dead Sea from 1980 onwards has yielded a clear picture of the factors that determine development of the alga in this unusual environment. High concentrations of magnesium and calcium ions have been known to be inhibitory to Dunaliella since Baas-Becking's early studies [27]. Dunaliella blooms therefore occur in the Dead Sea only when, during unusually wet winters, the upper water layers of the lake become sufficiently diluted to enable growth, and when phosphate, the limiting nutrient, is available. Such events have been observed in 1980 and again in 1992 [40,59].
Surprisingly, very little is known about the factors that determine the dynamics of Dunaliella in saltern pond systems. It is therefore interesting to note that some of the most in-depth studies on this topic were performed in the early 1920s in the salterns of Le Croisic on the Atlantic coast of France, where salt making is a seasonal operation. Labbé [23,25] showed changes in the algal community structure and related these to changes in salinity ("osmotic pressure; viscosity") of the brine, but he also recognized the role of the light intensity and the water temperature, as well as that of the pH. Based on the faulty assumption that the smaller green and the larger red Dunaliella cells are stages in the development of a single organism (see section 4 above), he described an annual cycle in which, at the beginning of winter, a few red motile cells ("érythrospores") and smaller green motile cells ("chlorospores") are present [24]. Dilution of the water by winter rains triggers the formation of red cysts ("érythrocystes"), but the "chlorospores" develop rapidly, conjugate, and form "chlorocystes". When the salt concentration increases in the summer season, red motile cells start to appear, always accompanied by green cells: "Peu à peu, les érythrospores provenant de chlorospores prolifèrent, et leur dominance est fonction de la concentration saline." [Gradually the "erythrospores" that are formed from "chlorospores" proliferate, and their dominance is a function of the salt concentration.]
Cultivation and Salt Tolerance of Dunaliella
The first controlled experiments to evaluate the effect of salinity on the growth rate of different Dunaliella isolates were reported in the 1930s. Baas-Becking [27] observed that D. viridis thrives equally well over the whole range of 1-4 M (6-23%) NaCl and over the pH range 6-9. He found calcium and magnesium ions in high concentrations to be inhibitory. More detailed and well-documented experiments, using a variety of species and isolates, were reported by Lerche [30]. She found most isolates to grow optimally between 2 and 8% salt, with very slow growth, if any, at salt concentrations above 15% (Fig. 5). Between 0.47 and 1.22 divisions per day were recorded under optimal conditions. The nutritional requirements of different Dunaliella strains were investigated in depth by Gibor [60], Johnson et al. [61], Van Auken and McNulty [62], and others, enabling the optimization of media to grow the alga. The optimal salt concentration for cultivation varies according to the strain, with reported values of around 6% for D. viridis and around 12% for D. salina [42], while different Great Salt Lake isolates had optima of 10-15% or even 19% salt [43,62]. A general trend, observed in all these studies, is that the actual salinity of the environments from which the strains had been isolated was always much higher than the salt concentration found to be optimal in laboratory experiments. This may well reflect the fact that the growth of an organism in a certain environment does not necessarily mean that that environment is optimal for its development, but rather that the organism performs better there than all its competitors.
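Lerche's division rates translate directly into specific growth rates and doubling times; the short sketch below (Python) makes the figures concrete, assuming simple exponential growth — only the 0.47 and 1.22 divisions per day are taken from the text:

import math

def growth_params(divisions_per_day: float):
    # One division per day corresponds to one doubling per day.
    mu = math.log(2) * divisions_per_day        # specific growth rate, 1/day
    doubling_time_h = 24.0 / divisions_per_day  # hours per doubling
    return mu, doubling_time_h

for rate in (0.47, 1.22):  # range reported by Lerche under optimal conditions
    mu, td = growth_params(rate)
    print(f"{rate} div/day -> mu = {mu:.2f} 1/day, doubling time = {td:.1f} h")
# 0.47 div/day -> mu = 0.33 1/day, doubling time = 51.1 h
# 1.22 div/day -> mu = 0.85 1/day, doubling time = 19.7 h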
Osmotic Behavior of Dunaliella Cells
Dunaliella cells lack a rigid cell wall, and the cell is enclosed solely by a thin elastic plasma membrane. As a result, the cells' morphology is strongly influenced by osmotic changes. This was documented already in the early days.
Figure 5
Division rate ("Teilungsrate") (as number of divisions per day) of different Dunaliella isolates belonging to several species, as a function of the NaCl concentration of the medium. From [30].
The descriptions by Teodoresco [2] are very exact here, and they deserve to be cited unabridged: "Ces zoospores sont dépourvues de membrane cellulosique; celle-ci est représentée par une enveloppe qui possède une certaine souplesse et une certaine extensibilité, qui permet au corps de prendre les formes assez variées, suivant la concentration de l'eau. A ce point de vue, le genre Dunaliella diffère totalement de toutes les espèces de Chlamydomonas ..." [These zoospores are devoid of a cellulose cell wall; instead there is a cell envelope that possesses a certain flexibility and a certain elasticity, which allows the body to take quite different forms, in accordance with the [salt] concentration of the water. In this respect the genus Dunaliella differs completely from all species of Chlamydomonas ...] and: "Ainsi, si nous plaçons une goutte d'eau salée, contenant des zoospores, sur le porte-objet, on constate, au microscope, qu'elles se présentent sous la forme mentionnée plus haut. Mais si nous laissons la goutte s'évaporer un peu, on observe que le corps commence à s'allonger et à se difformer ... ; si alors nous ajoutons à la préparation une goutte d'eau douce, les zoospores s'arrondissent brusquement .... Cette expérience, que j'ai répétée un très grand nombre de fois, m'a toujours donné les mêmes résultats." [Thus, when we place a drop of salt water that contains zoospores [= motile vegetative cells] on a microscope slide, one detects in the microscope that these present themselves in the above-described form. However, when we let the drop evaporate a little, one observes that the body starts to elongate and to lose its shape ... ; when we then add to the preparation a drop of fresh water, the zoospores suddenly round up .... This experiment, which I have repeated a great number of times, has always given me the same results.] The phenomena described above are illustrated in Fig. 2, drawings 9-29 and 30-31, respectively. Teodoresco further writes: "Si à une goutte d'eau salée on ajoute une goutte plus grande d'eau douce, ce qui amène un abaissement brusque de la concentration, les zoospores non seulement s'arrondissent, mais encore cessent leurs mouvements; le volume du corps augmente et devient parfois deux fois plus grand et à la fin la zoospore éclate. La cause de cet éclatement n'est pas difficile à comprendre: c'est l'action mécanique de la pression osmotique trop élevée par rapport à la densité diminuée du milieu ambiant." [If to a drop of salt water one adds a larger drop of fresh water, which leads to a sudden drop in concentration, the zoospores not only round up, but in addition cease their movements; the volume of the body increases and sometimes becomes twice as large, and finally the zoospore bursts. The cause of this burst is not difficult to understand: it is the mechanical action of the too high osmotic pressure in comparison to the decreased density of the ambient medium.] Lerche [30] likewise observed the osmotic changes that occur when the salt concentration is changed. She noted that when a drop of D. salina cells suspended in 20% salt is flooded with distilled water, a large fraction of the cells burst, but some cells survived the treatment.
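The swelling and bursting that Teodoresco describes follow from elementary osmometry. The sketch below (Python) treats the wall-less cell as an ideal Boyle-van 't Hoff osmometer; the osmotically inactive volume fraction b is an arbitrary illustrative assumption, not a value from the text:

def relative_volume(c_out: float, c_iso: float = 1.0, b: float = 0.3) -> float:
    # Equilibrium volume relative to the isotonic volume of an ideal osmometer:
    # the osmotically active part (1 - b) scales inversely with external osmolarity.
    return b + (1.0 - b) * c_iso / c_out

for c_out in (2.0, 1.0, 0.5, 0.1):  # external osmolarity relative to isotonic
    print(f"external {c_out:.1f} x isotonic -> V/V_iso = {relative_volume(c_out):.2f}")
# Concentrating the drop (2.0) shrinks the cell to ~0.65 of its volume, mild
# dilution (0.5) swells it to ~1.7x, and strong dilution (0.1) predicts
# several-fold swelling -- consistent with cells rounding up and finally bursting.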
Intracellular Salt and Solute Concentrations of Dunaliella
Marrè and Servetta ([63], as cited in [61]) described measurements of the freezing point of the cytoplasmic fluid of D. salina to obtain information on the intracellular salt concentration. The results indicated an apparent "salt" concentration that exceeded the 3.9 M salt in which the cells were grown. At the time it was postulated that NaCl is taken up through the allegedly very permeable cell membrane during salt upshock, followed by free water flux to equalize intracellular and extracellular osmotic pressures [63][64][65].
That the salt concentrations within Dunaliella cells cannot be that high was convincingly shown by the enzymological studies by Johnson et al. (1968), who demonstrated that some of the key enzymes of the algal metabolism, such as pentose phosphate isomerase, ribulose bisphosphate carboxylase, glucose-6-phosphate dehydrogenase and phosphohexose isomerase, are strongly inhibited by NaCl. We now know that the intracellular ionic concentrations of Dunaliella are very low indeed. Using lithium ions as a marker for the extracellular water space to estimate the intracellular volume, the intracellular Na+ concentration, both in cells grown in 0.5 M and in 4 M NaCl, was found not to exceed 100 mM [66]. Such low intracellular Na+ levels are achieved by the activity of a Na+/H+ antiporter in the cytoplasmic membrane [67], as well as by direct electron transport-coupled Na+ extrusion [68].
The enigma of the apparent incompatibility between the low intracellular ionic concentrations and the need for osmotic equilibrium of the cells' contents with the external medium was solved with the discovery that the cells accumulate photosynthetically produced glycerol as an osmotic, "compatible" solute. It is interesting to note that the first experiments in which the effects of glycerol on Dunaliella were tested had already been performed by Teodoresco [20], almost a hundred years ago. He examined the effect of glycerol and other non-ionic compounds that normally cause plasmolysis. He observed that D. salina cells temporarily lose their motility when suspended in 50% glycerol, but that motility is rapidly restored when the glycerol concentration is then slightly lowered in a humid environment. With 75% glycerol the results were largely similar, except that a large fraction of the cells died, and in 100% glycerol only few cells survived.
The first indications that glycerol is accumulated by Dunaliella to provide osmotic balance can be found in a short paper published in 1964 by Craigie and McLachlan [69]. They incubated D. tertiolecta with 14CO2, then extracted the cells with ethanol, separated the neutral fraction containing soluble carbohydrates and related compounds using ion exchange procedures, and characterized the compounds by two-dimensional paper chromatography and autoradiography. When the salinity of the medium was increased 100-fold from 0.025 to 2.5 M, 94-fold more radioactivity was found in the neutral fraction. Glycerol amounted to 56, 76, and 81% of the radioactivity of the neutral fraction extracted from cells incubated in 0.025, 0.5, and 2.5 M NaCl, respectively, most of the remainder probably consisting of soluble polysaccharides. In a subsequent study, Wegmann [70] confirmed that the proportion of the radiolabel from 14C-bicarbonate that ends up as glycerol increases with increasing salt concentration up to 2.8 M. He postulated that "The glycerol formation is considered to be a protective mechanism for the survival of Dunaliella in its natural habitat".
The role that glycerol plays in the salt adaptation of Dunaliella was firmly established by the studies of Ben-Amotz and Avron [71] and Borowitzka and Brown [72]. The concept of "compatible solutes", a term coined by Duncan Brown to indicate solutes that not only contribute to the osmotic status of the cell but also maintain enzyme activity under conditions of low water activity, was largely based on the study of the function of glycerol in Dunaliella.
Intracellular glycerol concentrations in Dunaliella can be very high indeed: cells grown in 4 M NaCl were reported to contain approximately 7.8 M glycerol inside, equivalent to a solution of 718 g l-1 glycerol in water [73]. Maintenance of such a high concentration requires special properties of the cell membrane, in view of the fact that most biological membranes are relatively permeable to glycerol. It has been established that Dunaliella possesses a membrane with an unusually low permeability for glycerol [74,75], and this enables the cells to keep the glycerol inside the cell. The causes of the low glycerol permeability of the Dunaliella membrane are still not fully understood.
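These numbers are internally consistent, as a quick check shows (Python; treating NaCl as ideally dissociated into two osmotically active particles is a simplification used only for the comparison):

GLYCEROL_MOLAR_MASS = 92.09  # g/mol

glycerol_molar = 7.8  # M, reported intracellular concentration [73]
print(glycerol_molar * GLYCEROL_MOLAR_MASS)  # ~718 g per liter, as quoted above

nacl_molar = 4.0                    # M, growth medium
medium_osmolarity = 2 * nacl_molar  # ~8 Osm/l for ideally dissociated NaCl
print(medium_osmolarity, "vs", glycerol_molar)
# 8.0 vs 7.8: the non-dissociating glycerol alone comes close to balancing
# the osmotic pressure of the 4 M NaCl medium.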
Attempts have been made to exploit the high concentrations of glycerol accumulated by Dunaliella as the basis for the commercial production of this compound. Although technically the production of glycerol from Dunaliella was shown to be possible [51,52,76], economic feasibility is low, and to my knowledge no biotechnological operation presently exists that exploits the alga for glycerol production.
Proteomics Approaches to the Understanding of Salt Tolerance in Dunaliella
A versatile organism such as Dunaliella that can adapt to a wide variety of salt concentrations can be used as a convenient model to study the formation of specific proteins as a function of changes in medium salinity. Such proteomic approaches have led to some interesting observations in recent years.
A number of such studies were directed to the detection of changes in the protein content of the cytoplasmic membrane, whose outer side is exposed to the medium salinity, when the cells are shifted from low to high salinity. Two membrane proteins were strongly induced by salt upshock, one with an apparent molecular mass of 60 kDa [77] and one of 150 kDa [78]. These proteins have been purified and characterized. The 60 kDa protein is a carbonic anhydrase that apparently helps the cell to take up carbon dioxide in concentrated brines in which the solubility of gases is decreased [79]. The 150 kDa protein is an unusual transferrin-like protein, involved in the transport of iron into the cell [80].
With a study published in 2004 by Liska et al. [81], Dunaliella research has entered the era of modern proteomics. Protein patterns of low- and high-salt-grown cells were compared on two-dimensional gels, leading to the identification of 76 salt-induced proteins. Among the proteins up-regulated following salinity stress were key enzymes in the Calvin cycle, enzymes involved in starch mobilization and in redox energy production, regulatory factors in protein biosynthesis and degradation, and a homolog of bacterial Na+-redox transporters.
The results indicate that Dunaliella responds to transfer to a high salinity by enhancement of photosynthetic CO2 assimilation and by diversion of carbon and energy resources to the synthesis of glycerol. This beautiful study is a worthy conclusion of the first century of Dunaliella research, and provides us with a preview of the kind of information that may be expected in the coming years, using approaches of genomics, proteomics and systems biology. | 2014-10-01T00:00:00.000Z | 2005-07-04T00:00:00.000 | {
"year": 2005,
"sha1": "327cab05d0e63c892a7de86cb3b403733f87e669",
"oa_license": "CCBY",
"oa_url": "https://aquaticbiosystems.biomedcentral.com/track/pdf/10.1186/1746-1448-1-2",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "327cab05d0e63c892a7de86cb3b403733f87e669",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
118913977 | pes2o/s2orc | v3-fos-license | Competition between static and dynamic magnetism in the Kitaev spin liquid material Cu2IrO3
Anyonic excitations emerging from a Kitaev spin liquid can form a basis for quantum computers. Searching for such excitations motivated intense research on the honeycomb iridate materials. However, access to a spin liquid ground state has been hindered by magnetic ordering. Cu2IrO3 is a new honeycomb iridate without thermodynamic signatures of a long-range order. Here, we use muon spin relaxation to uncover the magnetic ground state of Cu2IrO3. We find a two-component depolarization with slow and fast relaxation rates corresponding to distinct regions with dynamic and static magnetism, respectively. X-ray absorption spectroscopy and first principles calculations identify a mixed copper valence as the origin of this behavior. Our results suggest that a minority of Cu2+ ions nucleate regions of static magnetism whereas the majority of Cu+/Ir4+ on the honeycomb lattice give rise to a Kitaev spin liquid.
Long-range magnetic order is the natural ground state of an interacting electron system.
Magnetic frustration is capable of disrupting the order and establishing a highly entangled ground state with non-local excitations known as a quantum spin liquid [19]. Among various spin liquid proposals, the Kitaev model has unique appeal because it offers an exact solution to a simple Hamiltonian $H_{ij} = -\sum_{\gamma} K_{\gamma} S_i^{\gamma} S_j^{\gamma}$ of spin-1/2 particles with bond-dependent ferromagnetic coupling ($K_\gamma$) [1]. The index γ corresponds to the three inequivalent bonds at 120° on a honeycomb lattice. Two alkali iridates, Li2IrO3 and Na2IrO3, were the first proposed Kitaev materials, based on their honeycomb lattice structures that accommodate Ir4+ ions with pseudospin-1/2 (J_eff = 1/2) [3-5,20-22].
Despite satisfying the basic assumptions of a Kitaev model, both compounds exhibited antiferromagnetic ordering, with sharp peaks in both DC-magnetization and heat capacity at 15 K [4,9].
Further investigations on the honeycomb [13,14], hyperhoneycomb [15,16], and harmonic honeycomb [17] materials revealed the presence of a Heisenberg interaction (J) and a symmetric off-diagonal interaction (Γ) in the modified Hamiltonian of Kitaev materials [23,24], $H = \sum_{\langle ij \rangle \in \gamma} \left[ J\,\mathbf{S}_i \cdot \mathbf{S}_j + K S_i^{\gamma} S_j^{\gamma} + \Gamma \left( S_i^{\alpha} S_j^{\beta} + S_i^{\beta} S_j^{\alpha} \right) \right]$. The search for a Kitaev material with a negligible Heisenberg interaction and without a long-range order has recently led to a new honeycomb copper iridate, Cu2IrO3 [18]. Despite having a similar magnetic moment and Curie-Weiss temperature as the alkali iridates, Cu2IrO3 barely revealed a small peak in DC-magnetization at 2 K and a broad hump in the heat capacity [18]. These results indicated short-range correlations and suggested proximity to the Kitaev spin liquid phase. A spin liquid ground state is expected to exhibit dynamical local fields without long-range ordering. In this letter, we use muon spin relaxation (µSR) as a direct probe of local magnetic fields and provide compelling evidence for a Kitaev spin liquid phase in Cu2IrO3. Furthermore, our µSR results reveal a competition between dynamic and static magnetism in distinct volumes in the ground state. The source of such behavior is traced to a mixed valence of Cu+/Cu2+ by X-ray absorption spectroscopy and first-principles calculations.
In µSR, spin-polarized positive muons are implanted in the sample, and the time evolution of the muon spin polarization in the local magnetic field is traced upon accumulating several million muon decay events. In Fig. 1a, we show three muon polarization spectra in zero applied field (ZF) at 16, 4.5, and 0.05 K, and one spectrum at 16 K in a 50 Oe applied field parallel to the initial muon polarization (longitudinal field, or LF). The ZF spectra at all temperatures are described by $P(t) = G_{KT}(t)\left[f\,e^{-\lambda_{fast} t} + (1-f)\,e^{-\lambda_{slow} t}\right]$, where $G_{KT}(t)$ is the Gaussian Kubo-Toyabe function describing depolarization by quasi-static randomly oriented magnetic moments [25]. Fits at 16 K yield ∆ = 0.11 µs−1, a typical rate for depolarization by nuclear moments [26]. As expected, this relaxation channel is largely suppressed by a weak LF of 50 Oe (Fig. 1a). The slow and fast exponential decays (λ_slow and λ_fast) represent a two-component electronic spin contribution to the muon depolarization, and f is the fraction of the signal associated with the fast decay. We will show below that λ_slow and λ_fast correspond to muons depolarizing in regions of dynamic and static magnetism, respectively.
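For concreteness, a minimal numerical sketch (Python/NumPy) of the fit function described above is given below; Δ, λ_slow and λ_fast are the values quoted in the text, while the fast fraction f = 0.5 is a placeholder (its actual temperature dependence is shown in Fig. 1c), and any background term is omitted:

import numpy as np

def gaussian_kubo_toyabe(t, delta):
    # Zero-field Gaussian Kubo-Toyabe function for quasi-static,
    # randomly oriented (nuclear) moments.
    x = (delta * t) ** 2
    return 1.0 / 3.0 + (2.0 / 3.0) * (1.0 - x) * np.exp(-x / 2.0)

def polarization(t, delta=0.11, lam_slow=0.48, lam_fast=9.0, f=0.5):
    # Two-component model: nuclear KT envelope times fast/slow electronic decays.
    electronic = f * np.exp(-lam_fast * t) + (1.0 - f) * np.exp(-lam_slow * t)
    return gaussian_kubo_toyabe(t, delta) * electronic

t = np.linspace(0.0, 12.0, 5)  # time in microseconds
print(polarization(t))          # the fast component is gone after ~0.5 us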
In Fig. 1a, the fast relaxation is primarily observed as a missing polarization at t < 0.2 µs, which is outside the bandwidth of the pulsed muon facility. However, enough of the fast relaxation tail leaks into the spectra in Fig. 1a to fit its contribution with a temperature-independent relaxation rate λ_fast = 9(3) µs−1. A pulsed muon source is particularly suitable to characterize the slow mode, with relaxation rate λ_slow = 0.48(1) µs−1 at 50 mK (Fig. 1b), which is 18 times slower than λ_fast. Temperature dependences of λ_slow and f are shown in Fig. 1b,c. The slow and fast modes grow rapidly below 10 K. This onset of magnetism correlates with the temperature at which the field-cooled (FC) and zero-field-cooled (ZFC) susceptibility curves deviate (Fig. 1d). With further decreasing temperature, both λ_slow and f form plateaus below T = 2 K (Fig. 1b,c). The onset of a plateau in f coincides with a small peak in the ZFC susceptibility (Fig. 1d), suggesting the presence of frozen spins in a fraction of the sample volume.
Figure 1 | µSR data. a, Representative zero field (ZF) spectra obtained at 16 K (red diamonds), 4.5 K (blue triangles), and 0.05 K (gray circles), as well as a longitudinal field (LF) spectrum at 16 K and 50 Oe (green squares). Continuous lines are fits to the data. Supplementary µSR data are presented in Fig. S1. b, Temperature dependence of the slow depolarization rate λ_slow shows a plateau below 2 K at both ZF and LF of 1000 Oe, with data extending over two decades of temperature from 20 to 0.05 K. c, Temperature dependence of the fast depolarization fraction f shows a plateau below 2 K. d, DC magnetic susceptibility shows a small peak at 2 K and a splitting between field-cooled (FC) and zero-field-cooled (ZFC) at 10 K. e, µSR spectra at 75 mK in several longitudinal fields show a persistent slow depolarization and a vanishing fast depolarization component.
Field dependence of µSR can be used to probe the dynamics of the slow and fast modes. Figure 1e shows that the application of a 1000 Oe LF restores the missing polarization from the fast-relaxing muons, indicating the fast relaxation is caused by static local fields that are significantly less than 1000 Oe. In contrast, relaxation of the slow component appears to be due to dynamic rather than static local fields. Because λ_slow ≪ λ_fast, if the local fields were static for slow-relaxing muons, we would expect the slow channel to also be suppressed by the 1000 Oe LF.
Indeed, if the slow relaxation were caused by a static field, the magnitude of that field would be approximated by B_i = 2πλ_slow/γ_µ = 37 Oe ≪ 1000 Oe (γ_µ/2π = 135.5 MHz T−1 is the muon gyromagnetic ratio). The nearly unchanged relaxation rate and amplitude of the slow mode in 1000 Oe LF (Fig. 1b,e) demonstrate that it is caused by fluctuating local fields. Therefore, we ascribe λ_fast to muons depolarizing in static magnetic domains, and λ_slow to muons depolarizing in distinct regions with spin-liquid-like fluctuating local fields. The observation of a slight decrease in amplitude of the slow depolarization in Fig. 1e, in contrast to the nearly complete suppression of the fast mode, suggests that dynamic and static magnetism do not coexist, but rather compete with one another. The dynamic component is consistent with theoretical predictions of a Kitaev spin liquid in honeycomb iridates [11,21,23,27], but the source of static magnetism is unclear. Next, we use spectroscopic techniques to clarify this.
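The static-field estimate quoted above is easy to reproduce (Python; both input numbers are taken from the text):

gamma_mu_over_2pi = 135.5e6  # Hz/T, muon gyromagnetic ratio / 2pi
lam_slow = 0.48e6            # 1/s, slow relaxation rate at 50 mK

# B_i = 2*pi*lam_slow / gamma_mu = lam_slow / (gamma_mu / 2pi)
B_tesla = lam_slow / gamma_mu_over_2pi
print(B_tesla * 1e4, "G")  # ~35 G, i.e. ~37 Oe as quoted, far below 1000 Oe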
Charge neutrality in Cu2IrO3 dictates conjugate oxidation states of either Cu+ and Ir4+, or Cu2+ and Ir2+. The results in Fig. 2d show that Cu1 has a spectrum different from Cu2,3,4, as expected from the coordination environments. Specifically, the edge for Cu1 is shifted to higher energy than the others, indicating a probable Cu2+ state. Since all copper sites in Cu2IrO3 have the same Wyckoff multiplicity [18], it is conceivable to reproduce the experimental curve by adding the four partial contributions in Fig. 2d with equal weight (25%). The resulting curve in Fig. 2e shows a mild disagreement with the experimental data. Specifically, the contribution from Cu1 (nominally Cu2+) appears to be overestimated. The experimental data can be more precisely fit to a weighted sum of partial µ(E) contributions, as reported in Fig. 2f. According to this analysis, we estimate 8.5% Cu2+ content, which means the honeycomb layers contain 1/3 Cu2+ (8.5%/25%) and 2/3 Cu+. This is only a rough estimate because we do not know the detailed structure of µ(E) for Cu+ in octahedral coordination. Analysis of XANES data from Cu L2,3-edges in the Supplementary Fig. S3 yields an average Cu2+ content of 13%, which means the honeycomb layers contain 1/2 Cu2+ (13%/25%) and 1/2 Cu+. These results are substantiated by self-consistent DFT calculations in the Supplementary Fig. S4, where the spectroscopic data are best reproduced using 12% Cu2+ content. The spin-1/2 Cu2+ ions can nucleate regions of static magnetism within each honeycomb layer, giving rise to a fast depolarization of muons (λ_fast). Outside these regions, the Cu+/Ir4+ combination gives rise to a spin liquid phase with dynamical local fields, giving rise to a slow depolarization of muons (λ_slow).
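The layer-composition estimates quoted above follow from dividing the fitted total Cu2+ content by the 25% weight carried by the in-layer Cu site; a quick check (Python):

for cu2_total, probe in ((0.085, "Cu K-edge XANES"), (0.13, "Cu L2,3-edge XANES")):
    # All Cu2+ is assumed to sit on the in-layer site, which holds 25% of the Cu.
    print(probe, "->", round(cu2_total / 0.25, 2), "of in-layer Cu is Cu2+")
# 0.085/0.25 = 0.34 ~ 1/3 and 0.13/0.25 = 0.52 ~ 1/2, as stated in the text.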
The most fundamental ingredient of a Kitaev material, apart from having spin-1/2 ions, is the honeycomb geometry. A direct image of the Cu2IrO3 lattice is presented in Fig. 3a. TEM is also used for electron energy loss spectroscopy (EELS), with the data presented in Fig. 3b. A comparison between the L3-edge in stannates and Cu2IrO3 confirms that Cu2IrO3 contains both Cu+ and Cu2+, whereas the stannates contain only Cu+. In the stannate materials, Cu atoms are restricted between the honeycomb layers in a dumbbell coordination [30]. Thus, all Cu2+ in Cu2IrO3 must be contained within the layers. Self-consistent DFT calculations in Fig. 3c reproduce the EELS spectra and confirm a single L3 peak in the stannates but two distinct peaks in Cu2IrO3.
Figure 3
... and Cu1.5Na0.5SnO3, and the iridate Cu2IrO3. Only one L3 peak is observed in the stannates, corresponding to Cu+ (note the CuO reference). Cu2IrO3 shows two L3 peaks corresponding to Cu+ and Cu2+. c, Self-consistent DFT calculations reproduce the EELS spectra in agreement with the experiments. The calculations reveal one peak in the stannates, corresponding to Cu+ in dumbbell coordination, but two peaks in Cu2IrO3.
Methods
Material Synthesis. Cu2IrO3 was synthesized using a topotactic cation exchange reaction according to Na2IrO3 + 2CuCl → Cu2IrO3 + 2NaCl under mild conditions (350 °C and 16 h). Details of the synthesis are explained in reference 18.
Muon Spin Relaxation. µSR measurements were performed at the ISIS Pulsed Neutron and Muon Source at the Rutherford Appleton Laboratories (UK) using the EMU and MuSR spectrometers, with the sample inside a dilution refrigerator and a helium exchange cryostat, respectively. The powder sample was pressed into a disk of 8 mm diameter and 1.9 mm thickness, and was wrapped in a 12.5 µm thin silver foil. Measurements in EMU were performed on a silver mounting pedestal in a dilution refrigerator (50 mK < T < 4.5 K, along with data at 16.4 K). Due to the small sample area, measurements inside the dilution refrigerator were made in flypast mode (SI reference) in order to reduce the signal from muons not landing in the sample. In this case, the background results from muons landing in the cryostat. Measurements in the MuSR spectrometer were performed with the same sample mounted on a silver mounting plate in a helium exchange cryostat (1.7 K < T < 20 K). In this case the background results from muons landing in the silver holder.
The background signals for each spectrometer were fixed at the values determined from the long-time asymmetry at low temperatures (40% of the total signal for EMU, 76% for MuSR), where the sample was strongly magnetic. The total asymmetry was fixed at the value determined from the initial asymmetry at high temperatures, where the material had no fast-relaxing component. The sample contribution to the asymmetry is the difference between these two values. Data were fit using the WIMDA software (SI reference), and all fits had a χ2 per degree of freedom of approximately 1. ... spectrometer. TEM samples were prepared by grinding the materials in an agate mortar with ethanol and depositing the obtained suspension on a Ni-carbon holey grid.
Density Functional Theory. The geometric optimizations of Cu2IrO3, Cu1.5Na0.5SnO3, and Cu1.5Li0.5SnO3 were implemented in the pseudopotential VASP code (SI reference) using a projector augmented wave (PAW) method and the Perdew-Burke-Ernzerhof (PBE) exchange-correlation potential (SI reference). The Hubbard correction was implemented using Dudarev's scheme (SI reference) with U_eff = 3 eV for iridium 5d orbitals and 5 eV for copper 3d orbitals. The atomic positions were relaxed until forces were converged to 0.03 eV/Å. Simulations of the spectroscopic data were implemented in the full-potential Wien2k code (SI reference) using a linearized augmented plane wave (LAPW) approach and the PBE0 hybrid functional with on-site corrections to iridium 5d and copper 3d orbitals. The muffin-tin radius (RMT) was selected to be 1.46, 1.48, 1.50, 2.00, 2.00, and 1.94 bohr for O, Li, Na, Ir, Sn, and Cu atoms, and the basis size control parameter was RK_max = 6.
Both structural relaxation and spectroscopic calculations were spin polarized and included spin orbit coupling (SOC). | 2018-11-01T18:02:40.000Z | 2018-11-01T00:00:00.000 | {
"year": 2018,
"sha1": "8799cbf72fc601b27c844e06101560d1b9b2651e",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.100.094418",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "1e45d603099bfc10fedd148ee3f03e37d78741a9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
233966509 | pes2o/s2orc | v3-fos-license | Artificial Intelligence in the Banking Sector – A Critical Analysis
“Computers will overtake humans with AI in the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.” The pros and cons of AI are evident in this statement made by Stephen Hawking. The last decade has witnessed tremendous changes in how each industry functions. The rapid growth of technology, internet and infrastructure has fuelled this disruption at 10X speed. The talk of the town in digital disruption is Artificial Intelligence. The number of mentions of AI or Machine Learning in earnings calls by public company executives shows an exponentially rising trend since 2015, as per data by CBInsights. AI has brought in groundbreaking changes in the global banking industry. The future of AI in banking is enormous, as the power of advanced data analytics can combat fraudulent bank transactions and improve compliance. AI technologies reduce costs in the banking sector by increasing productivity. According to an Open Text survey of financial services professionals, 80% of banks are highly aware of the potential benefits that AI can bring to the business. What are the potential benefits of AI in financial institutions? Does adopting AI come with risks and costs? What are the regulatory constraints which could be impediments to implementing AI in the banking sector? This conceptual paper deals with the risks, rewards, use cases and ways to adopt AI in the banking sector. This article also tries to identify the paybacks and the key uses of some of the tools which are used by both financial institutions and central banks. It also indicates the main constraints of the technology and its likely consequences for the correct functioning of the financial system.
Introduction
In 2017, the Financial Stability Board (FSB), a global association that monitors and makes recommendations about the worldwide financial system, set out different applications of AI in the financial sector, such as in the verticals of portfolio management, client due diligence, credit scoring, and regulatory compliance. The FSB also outlined possible benefits for retail customers and small and medium-sized enterprises (SMEs), as well as efficiency gains in back-office procedures carried out by banks. In a 2018 report, the Basel Committee on Banking Supervision (BCBS) - a committee of banking supervisory authorities and the primary global standard setter for the prudential regulation of banks - encouraged banks to harness emerging technologies such as AI to increase their competence in responding to fintech-related perils. The RBI had set up an inter-regulatory working group to understand the problems pertaining to Financial Technology and digital banking in India. The report states that the digital transformation of the banking and financial sector would ride on three pillars: Artificial Intelligence, Blockchain and the Internet of Things. One of the findings of the report was that, as various equipment become interconnected and self-learning using Artificial Intelligence, the banking sector will grow past websites, applications and brick-and-mortar branches.
Rewards and benefits of Applying AI in Financial Business Processes
The authors have tried to understand the application of AI in the banking sector from three categories: (1) Customers, (2) Business, and (3) Employees.
Risks & Challenges
Cybersecurity was recognized as a very fruitful area for Artificial Intelligence-enabled weaknesses. The major objective of artificial agents (both informational and cyber-physical artificial agents) is the efficient operation of information. Most of the present autonomous learning systems have a data diet susceptibility (Osoba and Welser, 2017). The systems run on AI are classically only as good as the data on which they are trained. They manifest any biases or untruths found in their training data (Barocas and Selbst, 2016). Some other challenges which were identified are as follows.
Meeting Regulatory and Legal Compliance requirements
There are plenty of regulatory mechanisms in place for AI on national and international levels. It is unavoidable to meet the compliance requirements specified by each body. Violating any of the regulatory requirements leads to serious consequences like brand value deterioration, loss of money and so on.
Ethical Bias Management
AI is a self-learning mechanism. The automated decisions are made on the basis of the data that the system learns from. The major advantage and disadvantage of AI are the same: it takes logical decisions, and this logic need not be ethical. For instance: a company is hiring for the position of sales manager. Just because 90% of current sales managers are male, it cannot take a majority-based decision to look only for male candidates in hiring. Incorporating ethics management into AI is a herculean but necessary task.
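The hiring example can be made concrete with a simple selection-rate audit; the sketch below (Python) uses invented numbers purely for illustration:

# Hypothetical decisions from a model trained on historical data in which
# ~90% of past sales-manager hires were male (all numbers invented).
decisions = [("male", 1)] * 90 + [("male", 0)] * 10 + \
            [("female", 1)] * 20 + [("female", 0)] * 80

def selection_rate(group: str) -> float:
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

ratio = selection_rate("female") / selection_rate("male")
print(ratio)  # ~0.22 -- far below 1, flagging a demographic-parity violation:
              # the "logical" majority-based rule is not an ethical one.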
Mapping & Enforcing Accountability and Responsibility for the Outcomes
One essential difference between AI and human employees is accountability and responsibility. Can we have a performance appraisal system for AI-based machines? If something goes wrong, will the AI machines be responsible and accountable for the mistakes? It is a hard task to incorporate responsibility and accountability into machines. Greater autonomy should come with greater responsibility and accountability. Each decision an AI takes should be justifiable in all aspects. The decision-making algorithms must include moral values, societal norms and beliefs that make the AI accountable for every single decision.
Loss of Jobs
This is the most common threat of AI perceived by humanity. It is true that many of the monotonous repetitive jobs and logic-based decision-making jobs in the banking industry will be taken away by AI. Deploying AI instead of human capital increases the efficiency and effectiveness of the work and saves cost in the long run for the company. Imagine a bank with an AI robot as a cashier. The daily transactions will be perfectly maintained with respect to associated variables like time of transaction, denominations, face recognition of the customer, authenticity of the cheque and so on.
Even though many such jobs will become redundant in the future, on the other side, jobs will be generated to develop, monitor, modify and manage such AI systems.
Opacity in Processes
The backend of AI and its functioning is opaque to common people and ordinary employees. This opacity leads to dissatisfaction among users in one way or another. The EU Commission's High-Level Expert Group on AI (AI HLEG) published ethical guidelines in April 2019 which stated transparency as one of seven key mandates for a 'trustworthy AI'. It is also emphasized as one of the five key requirements in a recent study of ethical guidelines addressing AI on a global level (Jobin et al., 2019).
Use-Cases
The possibilities of applying AI to banking are numerous. A few of the use cases identified are in the maturity stage, whereas many are still in development. Going ahead, there is tremendous potential for more use-cases for AI in banking. In the present scenario, the most widely used applications include fraud detection, anti-money laundering analysis, credit scoring and AI chatbots. As per the Autonomous Next research by Business Insider Intelligence, the cost savings arising from the usage of AI in the banking industry are estimated to be about $446 bn by the year 2023.
Choosing the right use-cases is a very crucial decision for any bank. It depends on what problem or need the bank is trying to address. Budget availability also plays a key role in the selection of use-cases. The authors have classified the use-cases of AI in the banking sector based on their benefit for three stakeholders: 1) Business, 2) Customer and 3) Employees.
Regulatory Mechanism
• GDPR - General Data Protection Regulation aims to protect the privacy of data. This European Union regulation has affected most of the companies who use, handle and keep personal data of individuals residing in the European Union. Most of the AI-based systems use bulk volumes of data to train and learn the scenarios and thus to help in better decision making. These training and validation datasets include personal data of individuals as well. Also, the regulation directly mentions 'automated individual decision making', i.e. decision making without the intervention of people, which is the sole job of AI.
• PIPEDA - It stands for Personal Information Protection and Electronic Documents Act. This is a Canadian federal privacy legislation for private sector organizations in Canada. The law intends to govern the collection, use and disclosure of personal information in an organization.
• PCI-DSS - Payment Card Industry Data Security Standards are mandatory requirements that have to be followed by all establishments that store, process or transmit credit card data. This is an autonomous organization created by major players like VISA, MasterCard, American Express, Discover and JCB.
Artificial Intelligence in Banking Sector-Adoption and the Barriers
Adoption of AI is a crucial decision for any organisation. Companies should be mature enough to embrace technology by all means. Without data, there is no AI. The first and foremost question is whether companies have proper database management systems or not. Banks and financial institutions should ask the below questions before deciding to adopt AI.
1. Do we have proper data to be leveraged for decision making?
2. Is the company mature enough to adopt AI?
3. Do we have the right resources to manage AI?
4. Which need or opportunity of the organization are we trying to address?
5. What is the expected short-term and long-term outcome of adopting AI?
6. Is the company financially sound enough to embrace AI?
7. What is the intensity of potential resistance from stakeholders?
Internalizing the above questions will help bring clarity to financial institutions in adopting AI. A proper plan and strategy lead to successful implementation of the technology.
Similar to any change, adoption of AI also comes with resistance and barriers. These include tight regulatory mechanisms and the culture of innovation in the organization. It is difficult for traditional banks to quickly migrate to AI technology. A tremendous amount of unlearning and relearning for both employees and customers is mandatory for a smooth transition. Companies often lack a clear strategy for implementing AI-enabled banking. Improper strategies backfire on the organization quickly and result in financial losses. Today, data is a sensitive entity that has to be dealt with carefully, aligned to all compliances and policies.
Fig 4: Barriers to adoption of AI in banking industry
In addition to the above concerns regarding data security, some of the additional challenges faced by AI and ML are:
• Limited processing power - While AI and ML have great potential, they utilize a ton of processing power. Most computing simply isn't that advanced. As a result, it's difficult to fully utilize these technologies outside of very specific environments.
• Limited knowledge - Only a handful of people truly understand AI well enough to explain it to the marketplace. This has kept adoption rates from being where they should be and is slowing down growth.
• Lack of trust - There will always be a degree of mistrust between people and computers.
How to Overcome the Barriers?
A planned effort is necessary to overcome the barriers to embracing Artificial Intelligence in the banking sector. Firstly, the barriers have to be identified. These may differ from one bank to another. Soft aspects of the barriers, like culture, skill, style, etc., can be overcome using change management methods like Kotter's 8-step model or the ADKAR model of change. But the hard aspects, like structure, system, strategy, etc., should be driven exclusively by the top leadership of the organization. It would be significant for the organization to understand if it is set up in a way that allows it to adopt AI. This can be done using McKinsey's 7S framework. Keeping the core values central to all elements, this framework throws light on the readiness of the organization to embrace change. The authors have tried to apply the McKinsey framework to the context of implementation of AI in the banking sector, globally.
Conclusion
AI has brought about revolutionary changes in banking. Forget the physical branches; AI brings about a whole new world of modern banks. The expansion and growth are tremendous with the new banking services enabled by AI. Penetration has increased and cost effectiveness has improved. How we deal with our money is being decided by the dual computational intelligences, AI and ML. The banking sector has been given a new structure for meeting the demands of the customers in a convenient, safe and smart way. Financial institutions need to notice the need of the hour. They have realized that technology is not expensive or complicated to learn; everything is bundled together in a smartphone that an ordinary man can easily operate (Donepudi, 2017). | 2021-04-17T06:09:04.046Z | 2021-02-26T00:00:00.000 | {
"year": 2021,
"sha1": "8a928b28aa2f395c2101240567c663d77efea429",
"oa_license": null,
"oa_url": "https://doi.org/10.34293/management.v8is1-feb.3778",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8a928b28aa2f395c2101240567c663d77efea429",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
232136886 | pes2o/s2orc | v3-fos-license | Is Misdiagnosis of Type 1 Diabetes Mellitus in Malaysian Children a Common Phenomenon?
Background Children with Type 1 diabetes (T1DM) commonly present in diabetic ketoacidosis (DKA) at initial diagnosis. This is likely due to several factors, one of which is the propensity for T1DM to be misdiagnosed. The prevalence of misdiagnosis has been reported in non-Asian children with T1DM but not in Asian cohorts. Aim To report the rate of misdiagnosis and its associated risk factors in Malaysian children and adolescents with T1DM. Methods A retrospective analysis of children with T1DM below 18 years of age over a 10-year period was conducted. Results The cohort included 119 children (53.8% female) with a mean age of 8.1 SD ± 3.9 years. 38.7% of cases were misdiagnosed, with respiratory illnesses being the most common (37.0%) misdiagnosis. The rate of misdiagnosis remained the same over the 10-year period. Among the variables examined, younger age at presentation, DKA at presentation, healthcare professional (HCP) contact and admission to the intensive care unit were significantly different between the misdiagnosed and correctly diagnosed groups (p < 0.05). Conclusion Misdiagnosis of T1DM occurs more frequently in Malaysian children <5 years of age. Misdiagnosed cases are at a higher risk of presenting in DKA, with an increased risk of ICU admission, and are more likely to have had prior HCP contact. Awareness of T1DM amongst healthcare professionals is crucial for early identification, prevention of DKA and reducing rates of misdiagnosis.
INTRODUCTION
Type 1 diabetes (T1DM) is a common autoimmune condition of childhood with a peak age of onset at 10-14 years of age (1). The incidence of T1DM varies worldwide, being higher in Northern Europe. T1DM is the commonest form of childhood diabetes in Malaysia, accounting for 73%-77% of all childhood diabetes cases as reported by the DiCARe registry (1)(2)(3). The International Diabetes Federation reported 977 cases of T1DM in Malaysian children aged 0-19 years in 2019 (4).
Timely diagnosis of T1DM is essential to initiate prompt treatment and avoid the progression to diabetic ketoacidosis (DKA), which occurs in 30%-70% of children (3,5). The incidence of DKA at initial diagnosis is influenced by younger age (<5 years old) at presentation, lower socioeconomic status, lower parental education, lack of parental and health practitioner awareness of the symptoms of diabetes, limited access to healthcare and a lower prevalence of childhood T1DM (6). In Malaysia, 64.2% of children with T1DM present in DKA at initial diagnosis (3).
There is a positive correlation between misdiagnosis of T1DM and DKA (7,8). This may be due to the fact that the presenting features of T1DM overlap with symptoms of more common childhood diseases such as respiratory, gastrointestinal, and surgical illnesses (1,9). Furthermore, if the osmotic symptoms are not forthcoming in the medical history, T1DM may not be considered as a potential diagnosis. Some studies have reported on the risk factors which contribute to misdiagnosis of diabetes in North American and European children; however, there are no published studies on the rates of misdiagnosis and its associated factors in Asian countries (10,11). The aim of this study is to report on the rate of misdiagnosis of T1DM in Malaysian children and adolescents and the risk factors which predispose to misdiagnosis.
METHODS
A retrospective review was conducted on all newly diagnosed cases of paediatric T1DM who presented to University Malaya Medical Centre (UMMC) between January 1 st 2010 and December 31 st 2019. A diagnosis of T1DM was made according to the ISPAD guidelines for the year in which the diagnosis was made, as was diabetic ketoacidosis (DKA) (12)(13)(14). Misdiagnosis was defined as any subject who had been given a diagnosis other than diabetes or DKA, either by the referring physician or by the doctor at UMMC. Children with diabetes mellitus other than T1DM or those cases in which misdiagnosis could not be confirmed were excluded from the analysis. Data was extracted from the electronic medical record (EMR) system and the referral letters from the referring physicians. Details on age, sex, ethnicity, symptoms, and contact with a health care professional prior to the initial diagnosis were obtained. Additional details on anthropometry and biochemistry were also extracted. UMMC has an EMR system with a proforma for in-patient admission clerking, into which details on the presenting history are entered on admission. All new diagnoses of paediatric T1DM at UMMC are admitted as inpatients irrespective of whether they present in DKA or not and all are reviewed by the Paediatric Endocrinology team.
Statistical analysis was done using SAS® 9.4 software (SAS Institute, Cary, NC). The demographic data and clinical and laboratory features were analyzed using descriptive statistics. Data are expressed as the median with interquartile range (IQR) or mean for continuous variables and frequencies/percentages for categorical variables. Comparisons were made between the misdiagnosed and correctly diagnosed groups for demographic and clinical variables using the t-test (for two-group comparisons of continuous variables) and the chi-square/Fisher's exact test (for categorical comparisons). A p-value of less than 0.05 was accepted as being statistically significant.
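For readers who want to reproduce this style of two-group comparison outside SAS, a minimal Python sketch is given below using scipy.stats. The variable values are invented placeholders, not the study data, and the fallback from chi-square to Fisher's exact test when expected cell counts are small is a common convention rather than the authors' stated procedure.

```python
# Illustrative analogue of the paper's analysis: a t-test for a continuous
# variable and a chi-square or Fisher's exact test for a categorical one,
# comparing misdiagnosed vs. correctly diagnosed groups. Placeholder data.
import numpy as np
from scipy import stats

# Continuous variable, e.g. age at presentation in each group (years)
age_misdiagnosed = np.array([3.1, 4.5, 6.2, 7.8, 9.0])
age_correct = np.array([8.4, 9.9, 11.2, 12.5, 13.0])
t_stat, p_t = stats.ttest_ind(age_misdiagnosed, age_correct)

# Categorical variable as a 2x2 table: rows = group, cols = DKA yes/no
table = np.array([[40, 5],    # misdiagnosed
                  [42, 28]])  # correctly diagnosed
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Fall back to Fisher's exact test when an expected cell count is < 5
if (expected < 5).any():
    odds, p_cat = stats.fisher_exact(table)
else:
    p_cat = p_chi

alpha = 0.05  # significance threshold used in the paper
print(f"t-test p={p_t:.4f}, categorical p={p_cat:.4f} (alpha={alpha})")
```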
RESULTS
A total of 223 cases of childhood diabetes were newly diagnosed between January 1st 2010 and December 31st 2019 at UMMC. Of these, 62% (n=138) were T1DM, 35% (n=77) T2DM and 4% (n=8) other diabetes. Of the 138 T1DM cases during the study period, 19 were excluded, as misdiagnosis could not be confirmed based on information entered into the EMR system. Seventy-three cases (61.3%) were correctly diagnosed as T1DM, whereas 46 (38.7%) were misdiagnosed.
Demographics
The mean age at diagnosis was 8.1 years (± 3.9). The largest group, 47.1% (n=56), was aged 5-10 years, and 53.8% (n=64) were female. Malay ethnicity representation was 35.6% (n=42), Chinese 32.8% (n=39) and Indian 21.0% (n=25). The majority, 63.0% (n=70), of the referrals came from within the UMMC coverage area of 450 sq. kilometres, a 25 km radius within the Klang Valley region in the states of Selangor and W.P. Kuala Lumpur. Table 1 illustrates the demographic and clinical characteristics of the T1DM patients.
In terms of anthropometric measures, 67.8% (n=61) had normal weight, 19% (n=17) were underweight and 13.3% (n=12) were overweight at presentation. The mean HbA1c at presentation was 12.3% (± 2.5%), and 71.3% (n=82) had presented in DKA. Of those who had presented in DKA, 56% (n=42) had severe DKA. In terms of HCP contact prior to the initial diagnosis of T1DM, 76.3% (n=90) were seen by one healthcare professional prior to referral to UMMC. Referrals from other hospitals accounted for 68.8% (n=77) of the cases referred to UMMC. Paediatric ICU admission was needed in 20% (n=47) of the T1DM cases.
Misdiagnosed Cases
The rate of misdiagnosis in this cohort was 38.7% (n=46). The most commonly reported symptoms were polyuria (45.7%), polydipsia (43.5%), weight loss (32.6%) and vomiting (32.6%). Symptoms such as skin infection, headache, and polyphagia were reported in less than 5% of cases. The most common erroneous diagnoses made in children presenting with T1DM were respiratory (36.9%), gastrointestinal (34.8%) and infectious (10.9%) illnesses. The rate of DKA in the misdiagnosed group was 87% (n=40). Figure 1 depicts the categories of misdiagnoses made according to system in the T1DM cohort.
Regarding referral source, the total number of visits to primary care made just prior to the final diagnosis of T1DM was 40. Of these, 52.5% (n=21) were misdiagnosed. The total number of visits made to either secondary or tertiary care institutions was 83, and of these, 32.5% (n=27) were misdiagnosed.
The rate of misdiagnosis over the 10-year period varied between 20% and 64%, with the highest rate reported in 2019. Between 2012 and 2017, the rate of misdiagnosis of T1DM remained relatively stable at 26%-40%. In 2018-2019, of those that were misdiagnosed (n=14), 65% (n=9) were diagnosed by primary care and 36% (n=5) by hospitalists. The increase in misdiagnosed cases between these two time periods was not driven by an overrepresentation of younger children (<5 years), as 29% were misdiagnosed during 2018-2019 and 33% during 2010-2017. During 2018-2019, the majority of misdiagnosed cases, 71% (n=10), were in the age group >5 years.
Comparison of the Misdiagnosed and Correctly Diagnosed Cases of Childhood T1DM
No significant differences were found in terms of the mean age, gender or ethnicity between the 2 groups, as shown in Table 2. However, there was a higher representation of children aged <5 years old in the misdiagnosed group (56% vs. 44%, p=0.028). No significant differences in BMI-SDS, mean blood glucose, mean HbA1c, or mean pH at diagnosis were found between the two groups. Of the children that presented in DKA, 48.8% (n=40) were from the misdiagnosed group. Conversely, 84.9% (n=28) of the children with a non-DKA presentation were from the correctly diagnosed group. The odds ratio for DKA at initial diagnosis in the misdiagnosed group was 5.3 (CI 1.5748-15.1724, p=0.0008). No significant difference was found in terms of severity of DKA between the two groups. In the misdiagnosed group, 89.3% of cases were seen by ≥2 HCPs prior to the initial diagnosis of T1DM at our centre (p=0.0001). A significantly higher percentage of the misdiagnosed cases were referred from primary care doctors (57.1% vs. 42.9%, p=0.0136). Admission to the paediatric intensive care unit was more frequent in the misdiagnosed group (57.5% vs. 42.6%, p=0.001), but no difference in terms of mean length of hospital stay was found. Comparisons of variables between the misdiagnosed and correctly diagnosed children are shown in Table 2.
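The reported odds ratio can be checked against the 2×2 table implied by these counts. The sketch below reconstructs that table (40/5 DKA vs. non-DKA presentations in the misdiagnosed group, 42/28 in the correctly diagnosed group) and uses a standard Wald interval; the paper does not state its exact interval method, so treat this as an illustration rather than the authors' computation.

```python
# Odds-ratio calculation for DKA at diagnosis from the inferred 2x2 table.
import math

a, b = 40, 5    # misdiagnosed: DKA, non-DKA
c, d = 42, 28   # correctly diagnosed: DKA, non-DKA

odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval on the log-odds scale
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# -> OR = 5.33, 95% CI 1.88-15.17; the paper reports 5.3 (CI 1.57-15.17),
#    presumably from a slightly different interval method.
```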
DISCUSSION
Our study shows that Malaysian children with T1DM have a misdiagnosis rate of 38.7%. Children are commonly given alternative diagnoses, such as respiratory or gastrointestinal illnesses, and despite the presence of osmotic symptoms, one-third of children are misdiagnosed. Younger children, <5 years old, are more frequently misdiagnosed. As a result of misdiagnosis, there is a higher risk of presenting in DKA and requiring ICU admission. Misdiagnosed cases are frequently referred from primary care services and more likely to have had ≥2 prior HCP contacts. To our knowledge, this is the first study to report on the prevalence of misdiagnosis of T1DM in Malaysian children.
The high rate of misdiagnosis (38.7%) was sustained, with no reduction during the study period. This is higher than the misdiagnosis rate of 16% reported by Munoz et al. (10) in American children <18 years. Methodological differences, such as a larger sample size and the use of questionnaires, may explain the discrepancy in rates of misdiagnosis. The use of questionnaires rather than accessing healthcare records for data collection means that ascertainment of a "true" misdiagnosis is difficult to make. Finally, the survey was conducted on participants from North America, where T1DM in children is more prevalent and hence public awareness of the disease is greater than in Malaysia (4,15,16).
Respiratory, gastrointestinal, infective, and renal conditions were the commonest misdiagnoses given to T1DM children in this study. Other studies have similarly reported that infective conditions are common misdiagnoses in children with T1DM, and this may be attributed to the fact that HCPs have greater exposure to common childhood illnesses rather than T1DM (10). A lack of familiarity amongst HCPs with the symptoms of childhood T1DM can lead to erroneous diagnoses such as asthma, pneumonia and gastroenteritis, whose presenting symptoms overlap with those of T1DM (1). This lack of familiarity is also evident from the finding that over one-third of misdiagnosed patients reported the presence of polyuria and polydipsia, which are classical symptoms of diabetes. The results also demonstrate that there is a lack of awareness about less common symptoms of diabetes in children, such as weight loss, which was reported in one-third of the misdiagnosed cases. The findings highlight the need for heightened awareness amongst HCPs about childhood T1DM and its presenting features, and future HCP training should focus on improving these competencies (12,17).
This study showed that there was a significantly higher rate of misdiagnosis in children <5 years of age, as compared to those children aged >10 years. These findings are similar to the study by Munoz et al. (10), which reported a misdiagnosis rate of 21% in the 0-6 year age group, 15% in the 7-12 year, and 14% in the 13-17 year age groups. Usher-Smith et al. (7) also reported a higher risk of diagnostic error in younger children with T1DM, with a mean age of 5.4 years. HCPs in our system need to have an increased awareness of the symptoms of diabetes in younger children and a higher index of suspicion for children who present with symptoms that overlap between common childhood illnesses and T1DM. This may be achieved through continuing medical education on childhood diabetes topics or increasing clinical exposure to childhood diabetes cases.
The rate of DKA among misdiagnosed children was higher than among those correctly diagnosed, and 48.8% of all DKA presentations came from the misdiagnosed group. DKA was also 5.3 times more likely to occur in cases that were misdiagnosed. Though no significant differences in the severity of DKA were found, we did show a significantly higher frequency of PICU admission in the misdiagnosed group. Munoz et al. (10) also demonstrated that DKA was diagnosed in 68% of children with T1DM who were misdiagnosed as compared to only 42.8% of those who were correctly diagnosed. Similarly, a large systematic review by Usher-Smith et al. (7) reported that misdiagnosis is associated with a 3-fold increased risk of developing DKA. The relationship between misdiagnosis of childhood T1DM and the increased risk of DKA may be explained by various factors such as parental and HCP unfamiliarity with diabetes symptoms (7,8,11). Diminished (1,9,11). In our study, the referral patterns indicate that there is a general lack of awareness of childhood T1DM amongst HCPs in Malaysia, as there was an overrepresentation of misdiagnosed cases referred from the primary care setting as compared to hospitalist referrals. Furthermore, a large proportion of misdiagnosed children were seen by ≥2 HCPs prior to referral. A study by Pawlowicz et al. reported a similar finding in Poland, whereby 79.1% of erroneous diagnoses in childhood T1DM were made by primary care physicians as compared to 16.7% made by hospitalist doctors (5). This lack of awareness may be explained by factors such as limited exposure to childhood T1DM cases, either during post-graduate training or during subsequent clinical practice. At present, shared care of paediatric diabetes patients between paediatricians and family medicine physicians does not exist in Malaysia. Furthermore, the low regional prevalence of childhood T1DM may also be a contributory factor for misdiagnosis. Hong et al. (3) have shown that the DKA rate at initial presentation of paediatric T1DM in Malaysia is as high as 64%, much higher than in Northern European countries where prevalence rates of T1DM are higher (19)(20)(21). Our data suggest that curbing DKA rates and admission to costly intensive care services in childhood T1DM hinges on making an accurate and early diagnosis. This can be achieved by raising public awareness and improving the competencies of HCPs in relation to childhood T1DM. With increased awareness, parents are more likely to seek medical attention early and HCPs to refer early. Public awareness campaigns, by means of education sessions, information posters, newspaper adverts and radio adverts, have proven effective at reducing rates of DKA in Italy, the UK and Australia (22)(23)(24)(25). The Parma study showed that a large-scale public awareness campaign can reduce the incidence of paediatric DKA from 78% to 12.5% (22). Similarly, the 4T's campaign in the UK has also contributed to reducing paediatric DKA rates (24,25). It would stand to reason that similar interventions would reduce the rate of misdiagnosis of T1DM in Malaysia.
Though, in the opinion of the authors, this is the first study to report on the prevalence of misdiagnosis in T1DM in Malaysian children, we acknowledge its limitations. Firstly, the study is retrospective in nature. Secondly, the study is from a single centre which mainly covers an urban catchment area, and thus not necessarily reflective of national rates. Thirdly, we also acknowledge the small sample size. Future studies should capture data on larger numbers at a national level and include analyses on family and social factors that can influence rates of misdiagnosis.
In conclusion, misdiagnosis of T1DM in Malaysian children is a common phenomenon. Children under the age of 5 years are particularly at risk of being misdiagnosed. Misdiagnosis of T1DM predisposes to an increased risk of presenting in DKA and the need for increased healthcare resources. A significant number of these referrals are from the primary care setting, highlighting the need to improve awareness of childhood T1DM amongst HCPs and the public in a region where T1DM is not a common childhood condition.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
This study was approved by the University Malaya Medical Centre (UMMC) institutional ethics board MREC Ref: 2019325-7251. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. | 2021-03-08T14:08:48.253Z | 2021-03-08T00:00:00.000 | {
"year": 2021,
"sha1": "090e02877467480f5f1976dc1c4de8285e63c053",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.606018/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "090e02877467480f5f1976dc1c4de8285e63c053",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222071024 | pes2o/s2orc | v3-fos-license | Treatment of International Economic Trade in Intergovernmental Panel on Climate Change (IPCC) Reports
Climate change presents significant risks to the international trade and supply chain systems, with potentially profound and cascading effects for the global economy. A robust international trade system may also be central to managing future climate risks. Here, we assess the treatment (or lack thereof) of trade in a selection of recent Intergovernmental Panel on Climate Change (IPCC) assessment and special reports using a quantitative text analysis. IPCC reports are considered the preeminent source of relevant climate change information and underpin international climate change negotiations. Results show that international trade has not had substantial coverage in recent IPCC assessments. Relevant keywords associated with trade appear in very limited ways, generally in relation to the words "product" and "transport." These keywords often refer to emissions associated with transportation and the movement of food and global food systems. The influence of trade is given larger consideration with respect to the costs and trade-offs of climate mitigation policies, especially the interactions with food availability, that appear in Working Group III reports than with respect to the risks to trade from climate change impacts in Working Group II. Trade in relation to other economic sectors is largely absent, as are risks from potential climate-related trade disruption. There is almost no treatment of the potential impacts, risks, and adaptation strategies to manage the climate-related implications for international trade. Given the importance of trade to economic growth, we recommend that additional attention be paid to trade and related economic issues in future IPCC assessment and special reports, specifically on the interactions of climate impacts and risks with trade and the potential for trade to moderate these risks. To achieve this, there must be efforts to increase the base of scientific literature focused on climate change and international trade, as well as increased effort among IPCC lead authors to review trade literature that may lie outside conventional climate change scholarship.
Introduction
International trade is the exchange of goods and services across national borders and currently accounts for approximately 40% of the entire global economy [1]. Allowing countries to leverage their competitive advantage, by reducing trade barriers and expanding access to global markets, has led to substantial economic growth and poverty alleviation worldwide since 1990 [2,3]. Over time, the benefits are observed in improved social conditions and reduction in risks to human health, although the gains and losses observed have not all been shared equally or equitably [4]. While international trade has indeed guided considerable economic, social, and health gains, the environmental impact of trade liberalization is more ambiguous. For example, trade-induced growth in the economy has resulted in higher emissions of carbon dioxide (CO2)-the main pollutant that drives climate change [5]. This itself has been shown to lead to increases in ill health as well as social and economic inequity [6,7]. Complicating this further is the fact that a substantial portion of the decreases in CO2 emissions observed in developed countries, as a result of globalization and technological modernization enabled by trade, are often offset by increases in international trade flows and the related increases in emissions [8]. It is crucial to develop strategies to promote sustainable global trade patterns while limiting the impacts on future climate change. Some experts have suggested that linkages between different aspects of international trade and climate policy may actually foster a willingness to engage in climate mitigation, with the potential to facilitate co-benefits among related adaptation and mitigation strategies should they be considered congruently [9][10][11][12].
There is a growing literature on the costs associated with climate change mitigation and trade, but the literature on trade and adaptation is much more limited. Computable general equilibrium (CGE) models that are used to evaluate trade patterns and climate mitigation have not generally incorporated climate change impacts [13]. Agriculture has been the most widely investigated climate-related impact in terms of its integration with trade [14]. Specifically, the food price shocks of 2008-2009 prompted a substantial literature on how stresses to agricultural production, indicative of climate change impacts, may be transmitted through trade patterns, including the role of trade distortions in amplifying those shocks [15,16]. The effects of trade have also been indirectly included in other economic valuations of climate impacts. For example, the economic costs of sea level rise in Europe have also been evaluated in a CGE framework that allows for the reallocation of resources through trade [17].
More common in the literature, and certainly most prevalent in gray literature, is analysis of the role of extreme events, including storm surge, coastal erosion, hurricanes, and flash floods, in the destruction of key trade- and transportation-related infrastructure that causes major disruptions within international and inter-regional supply chains [18][19][20]. For example, in the USA, Superstorm Sandy shut down the Port of New York/New Jersey for 8 days, resulting in major disruptions to shipping [21]. Washouts of parts of the railroad connecting Churchill to southern Manitoba, Canada, in the summer of 2017 resulted in a complete shutdown of rail transport of goods to Churchill, including the re-supply for remote northern communities [22]. However, many of these studies are found in different disciplinary literatures and tend to focus either on climate impact drivers and their changing severity or on the economic or societal implications of infrastructure damage from extreme events. This lack of integration, or perhaps the absence of systems-based analysis of climate change and trade disruption (observed or projected), challenges but does not limit the possibility of properly assessing the role of climate change in global economic trade.
In addition to creating risks related to trade disruption via infrastructure damage, climate change is also likely to influence a transformation of global trade routes and patterns. For example, reductions in sea ice extent are making Arctic maritime trade routes a new possibility [23][24][25][26], and shipping in that region has been increasing rapidly over the past decade [27,28]. The opportunity for maritime trade through the Arctic has captured the imagination of global nations for centuries because of the economic benefits related to shorter distances and the relative political stability of the region compared with existing trade corridors in the Southern Ocean. As over 80% of all goods traded internationally move at some point by ship [29], the transformation in maritime trade routes of the Arctic could create a suite of cascading impacts related to geopolitics, international power dynamics, and environmental and cultural sustainability with major policy relevant implications [30][31][32].
Given the ever-increasing importance of international trade and the wide range of potential interactions with climate change impacts, risks, and climate policy, it seems timely and appropriate to evaluate the extent to which trade is considered and assessed within Intergovernmental Panel on Climate Change (IPCC) reports. IPCC reports provide a comprehensive assessment of the state of scientific knowledge on climate change and play a critical role in outlining our scientific understanding of observed impacts and future risks associated with a changing climate while offering insight on response options related to mitigation and adaptation [33,34]. These reports play a fundamental and critical role in shaping the way that climate change is viewed and understood by society and how international climate policies and agreements are negotiated [35][36][37]. Since 2012, the IPCC has released their Fifth Assessment Report (AR5) as well as special reports focused on (1) lands, (2) oceans and the cryosphere, and (3) the special report on 1.5°C warming [26,[38][39][40][41][42][43], among others [44]. In this paper, we conduct a quantitative text analysis of recent IPCC assessment and special reports to identify trade-related content and the extent to which and how trade has been assessed.
Methods
The analysis used to identify trade-related content and the extent to which and how trade has been assessed within recent IPCC reports involves several steps (Fig. 1). The first step involved scoping the analysis. Due to time and resource constraints, a selection of assessment and special reports from the 2012-2019 time period was evaluated, including the AR5 Synthesis Report: Climate Change 2014 (SYR), 151 pages [38]; AR5 WGII: Impacts, Adaptation, and Vulnerability (AR5 WGII), 1820 pages [39,40]; AR5 WGIII: Mitigation of Climate Change (AR5 WGIII), 1435 pages [41]; the Ocean and Cryosphere in a Changing Climate (SROCC), 755 pages [26]; the Special Report on Climate Change and Lands (SRCCL), 864 pages [42]; and the Special Report on Global Warming of 1.5°C (SR15), 616 pages [43] (Scoping, Fig. 1). AR5 WGI (Physical Basis) and SREX (Special Report on Extreme Events and Disasters) were excluded from the analysis: an initial scan of the AR5 WGI revealed limited trade-related content, and the SREX was released prior to 2012.
A scan of relevant global trade literature was conducted in order to identify trade-relevant keywords (Indexing, Fig. 1). The selected keywords identified for the analysis included "cargo," "commodity," "export," "freight," "globalization," "goods," "import," "product," "shipping," "trade," and "transport". Keyword searches, using the identified words, were conducted in each of the six analyzed reports. Once keywords were found, the surrounding paragraph, or the relevant section of the figure or table, was extracted, and the extracted text was quality controlled to ensure the meaning and intention reflected the focus of our analysis (Contextualizing, Fig. 1). This was necessary to avoid keywords with multiple meanings, such as "transport" in the SROCC report meaning both the transport of goods and also the transport of water, organisms, and other environmental and physical parameters. In such cases, paragraphs containing keywords that were not contextualized to the focus of this study (i.e., international economic trade) were excluded. We also included possible variations in word terminations (e.g., transport to represent transportation, transporting, etc.). Certain non-relevant content was also excluded from the analysis, notable examples being "trade-offs" and "cap-and-trade." Finally, chapter reference lists were also excluded from the text.
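A minimal sketch of this indexing and contextualizing step is shown below, written in Python rather than the authors' R/Quanteda toolchain. The keyword list and the "trade-offs"/"cap-and-trade" exclusions come from the text, while the blank-line paragraph-splitting rule and the helper name relevant_paragraphs are assumptions for illustration.

```python
# Find paragraphs containing a trade keyword (allowing word terminations),
# after masking excluded compounds such as "trade-offs" and "cap-and-trade".
import re

KEYWORDS = ["cargo", "commodity", "export", "freight", "globalization",
            "goods", "import", "product", "shipping", "trade", "transport"]
EXCLUDE = [r"trade-?offs?", r"cap-?and-?trade"]

def relevant_paragraphs(report_text: str) -> list[str]:
    hits = []
    for para in re.split(r"\n\s*\n", report_text):  # blank-line paragraphs
        cleaned = para
        for pat in EXCLUDE:  # mask excluded compounds before matching
            cleaned = re.sub(pat, " ", cleaned, flags=re.IGNORECASE)
        # \w* allows terminations: transport -> transportation, transporting
        if any(re.search(rf"\b{kw}\w*\b", cleaned, re.IGNORECASE)
               for kw in KEYWORDS):
            hits.append(para)
    return hits

sample = "Freight emissions rose.\n\nTrade-offs between goals persist."
print(relevant_paragraphs(sample))  # only the first paragraph matches
```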
Relevant text from all reports was combined in order to conduct quantitative text analysis, using a tool called Quanteda (Quantitative analysis, Fig. 1) that functions through packages supported by the software platform R [45]. Quanteda is an open-source software package that has been used to conduct similar quantitative text analyses [46]. The document containing all text extracted from the IPCC reports was imported into R, where the text was then organized and cleaned using scripts available in Quanteda and adapted for our specific analytical context (see Supplementary Material A). At this point, a series of simplification and quality control steps were taken, including the elimination of unnecessary words, numbers, punctuation, acronyms and symbols, and the creation of a text corpus, tokens objects (words), and a document-feature matrix. For this study, we utilized three specific statistical analysis, scaling, and classification tools: simple frequency analysis, feature co-occurrence matrices (FCM), and document feature similarity (cluster dendrograms).
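The cleaning and document-feature-matrix steps were performed with Quanteda in R; the sketch below is a rough Python analogue using scikit-learn's CountVectorizer (version 1.0 or later for get_feature_names_out). The report snippets are placeholders standing in for the extracted text.

```python
# Build a document-feature matrix: lowercase, drop stopwords, and use a
# token pattern that excludes numbers and punctuation.
from sklearn.feature_extraction.text import CountVectorizer

extracted = {
    "AR5_WGIII": "Transport emissions and trade flows rose in 2010 ...",
    "SROCC": "Arctic shipping routes expanded as sea ice declined ...",
}

vectorizer = CountVectorizer(
    lowercase=True,
    stop_words="english",
    token_pattern=r"[a-zA-Z][a-zA-Z-]+",  # words only; no digits/symbols
)
dfm = vectorizer.fit_transform(extracted.values())  # docs x features

features = vectorizer.get_feature_names_out()
for name, row in zip(extracted, dfm.toarray()):
    counts = {f: c for f, c in zip(features, row) if c > 0}
    print(name, counts)
```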
Simple frequency analysis is used to count the number of times keywords are mentioned in each report. During this analysis, word frequencies were not normalized by the length of reports, and therefore results display total keyword counts rather than averages. FCM is used to record the number of co-occurrences of tokens (i.e., keywords) in each report and to identify the most frequently co-occurring words. This analysis informs similarities in meaning between word pairs and meaning within word patterns. It also reveals latent structures of mental and social representations [47][48][49]. To improve the visualization of the co-occurrence graph patterns, we adjusted the matrix to consider only words with the highest number of co-occurrences, eliminating any "noise" caused by unnecessary tokens. Cluster dendrograms are used to calculate the similarities among features of interest within documents [50], such as among the preselected keywords used in this analysis and their surrounding paragraphs. The height of a keyword in the plot of the cluster dendrogram is proportional to its similarity or dissimilarity to other keywords found in the report. The more dissimilar the keyword is, the more scattered the term is in the text; the more similarity the keyword has with other keywords, the more interlinked they are. Below we present the results and observations that emerged from the analysis using simple frequency analysis, FCM, and cluster dendrograms (Interpretation, Fig. 1).
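A hedged sketch of the latter two tools follows: a window-based co-occurrence count standing in for Quanteda's FCM, and a hierarchical clustering whose dendrogram leaf order groups words with similar co-occurrence profiles. The window size, linkage method, and toy token stream are illustrative assumptions, not the authors' settings.

```python
# Window-based co-occurrence counts plus hierarchical clustering of the
# resulting word profiles (scipy's dendrogram plot needs matplotlib; here
# we only inspect the leaf order with no_plot=True).
from collections import Counter
from scipy.cluster.hierarchy import linkage, dendrogram

tokens = ("arctic shipping routes support trade freight shipping "
          "trade freight moves goods").split()
window = 5

cooc = Counter()
for i in range(len(tokens)):
    for j in range(i + 1, min(i + window, len(tokens))):
        pair = tuple(sorted((tokens[i], tokens[j])))
        if pair[0] != pair[1]:  # skip self-pairs
            cooc[pair] += 1

vocab = sorted(set(tokens))
# Each word's co-occurrence profile as a vector over the vocabulary
profiles = [[cooc.get(tuple(sorted((w, v))), 0) for v in vocab]
            for w in vocab]

Z = linkage(profiles, method="average")
tree = dendrogram(Z, labels=vocab, no_plot=True)
print(tree["ivl"])  # leaf order groups words with similar co-occurrence
```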
Results
Trade-related keywords occurred very rarely in IPCC reports, especially compared with other high-frequency words (e.g., "emissions," "energy," "mitigation") (see Supplementary Material B and C). In total, trade-related keywords appear just under 5000 times for all of the IPCC reports analyzed. When isolating just for trade-related keywords, the word that occurred most frequently in all IPCC reports analyzed is "transport" (n = 1692). The second highest occurring preselected keyword is "trade" (n = 1269), followed by "shipping" (n = 352), and then "product" (n = 322) (Fig 2). The keyword with the lowest frequency was "export" (n = 170). The IPCC report with the highest number of trade-related keywords was AR5 WGIII (n = 2786). In the WGIII report, the word "transport" appears more times than all of the trade-related keywords combined in any of the other reports. Of all of the trade-related keywords, only "product," "transport," and "shipping" appear on any of the ranked lists for all IPCC reports (Table 1).
(Fig 2 caption: Simple frequency analysis graph outlining the number of times a keyword is mentioned in each report.)
When further analyzing trade-related keywords in the reports using an FCM approach, it was revealed that certain words tend to dominate the discourse. These words form clusters of information that tend to occur in a similar linguistic context and resemble each other in meaning. This approach allows for a deeper evaluation of the meaning among keywords of interest that extends the simple frequency analysis. It allows one to make inferences about the importance and relation of the keywords in each document and observe some of the general patterns that emerge. For example, the majority of trade-related keywords did not come up as salient or well-connected words in any of the documents (Table 1; Fig 3). The only words that occurred with any regularity in an FCM matrix table are "product" and "transport" (Table 1). Within the IPCC reports, the word "product" had the highest co-occurrence among the trade-related keywords, indicating it is a topic that is prevalent in all reports and that the concept is related to many subjects covered in the texts (Fig 3). However, upon closer contextual analysis, the word "product" usually does not co-occur with any of the other trade-related keywords. This indicates that, although the word "product" is linked to many words in the documents, it is likely not occurring in the context of international economic trade. That is, when the word is used in the reports, it is not referring to the movement or trading of "products." The FCMs also identified areas where subjects are more closely related and emphasized. For example, in the SROCC report, the highest co-occurrence of trade-related keywords focused on the impacts and changes occurring as a result of increased shipping and transportation activities in the Arctic (Fig 3d). However, trade as a general theme did not emerge in the SROCC report beyond this specific example. The FCM analysis also revealed that in the SRCCL report, there are clear word linkages between food production, land use and management of crops, agriculture and forests, and the relations of mitigation and carbon emissions (Fig 3e), but again limited attention is given to any of the trade-related words or to the concepts associated with global economic trade generally.
The SR1.5 report exhibited a larger spectrum of word co-occurrences overall compared with the other reports, likely due to the diverse set of subjects covered by this document, but, again, there was no evidence of trade words co-occurring (Fig 3c).
The final approach we took to understand the extent to which global economic trade is treated within recent IPCC reports is the feature similarity analysis (cluster dendrograms) (Fig 4). Results of this analysis demonstrate that the clustering of trade-related keywords shows similar, but not identical, results among all of the reports. The keywords "transport" and "product," again, appear highest for most of the reports, but are not clustered with other trade-related words (Fig 4). This indicates that although the words "product" and "transport" appear more frequently in the documents than other words in general, they are not related to, and are not occurring in the context of, trade and transportation. The smallest height and distance between keywords forming clusters of cross-linked topics indicate which keywords are being discussed together. For example, in AR5 SYR, a cluster of the words "globalization," "cargo," and "commodity" appears under the same tree and within shorter distances, indicating that these words occur in closer proximity to each other in the text analyzed (Fig 4e). However, as the SYR is a much shorter synthesis report, many words and concepts will appear closer together compared with the much longer and more detailed AR5 WGII or WGIII reports.
Discussion
Trade-related keywords, including "cargo," "commodity," "export," "freight," "globalization," "goods," "import," "product," "shipping," "trade," and "transport," appeared 4861 times in recent IPCC reports that collectively total over 5500 pages (Fig 2). When trade-related keywords appear in the text, they are not generally used in the context of international economic trade. Rather, these words are used to cover topics such as food production, food security, energy use, emissions, and migration (Table 1, Fig 3, Supplementary Material B and C). For example, the FCM analysis shows that there were numerous mentions of the keywords "product" and "transport" (Figs. 2 and 4, Table 1), but in the majority of these cases, they did not co-occur with other trade-related keywords, suggesting that the focus of the text is not on international economic trade (Table 1, Fig 3, Fig 4). Other trade-focused keywords, such as "trade," "import," "freight," and "cargo," do appear to cluster together, meaning that they are related and that the text is likely referring either implicitly or explicitly to global (and/or regional) economic trade (Fig 4). However, these words co-occur infrequently (Fig 3), suggesting that international economic trade has not been treated with any intentionality or in any comprehensive way within recent IPCC assessments. Despite the finding that international economic trade does not seem to appear with any great frequency in recent IPCC reports, the concept of trade is by no means entirely ignored. When keywords appear related to the topic of economic trade, the text is generally focusing on either: (1) total emissions resulting from transportation (some of which is attributable to global economic trade) or (2) the idea that climate change will negatively impact the global trade system through an increase in extreme weather events, specifically focusing on how drought impacts global food systems. (Fig 3 caption: Frequency co-occurrence matrix graphs for word co-occurrences in each IPCC report.) The general trend observed within these specific cases is that there tends to be a focus on negative impacts. However, in some reports, in particular the SROCC [26], specific examples are given where climate change is described as a potential benefit to the trade system-specifically, through increased accessibility to maritime Arctic trade routes from decreased sea ice extent (also see Table 1, Fig 3d).
A major finding of this study is that trade-related keywords such as "transport" and "trade" frequently co-occur with the term "emissions" (Table 1, Fig 2). This suggests that any treatment of trade in recent IPCC reports tends to focus on the role of the transportation sector, within the trade system, in amplifying climate change. Thus, not surprisingly, the report that exhibits the highest number of trade-related keywords is AR5 WGIII (Mitigation). For example, the WGIII report specifically includes a section on the role of mitigation (i.e., reducing emissions in this case) in the global trade system (Section 13.8), which is titled "Interactions Between Climate Change Mitigation Policies and Trade" [41]. Trade accounts for up to one-quarter of global greenhouse gas emissions, so reducing emissions from the trade sector would require significant changes to the system but would bring coincident and substantial climate-related benefits [8]. (Fig 4 caption: Cluster dendrograms including all preselected keywords for each IPCC report. The higher the height of a keyword in the cluster dendrogram, the more dissimilar (distant) it is to the other keywords in the report; the closer the height and the connecting gap between keywords, the more similarity (proximity) they have in the report.) In the WGIII report, and also in the SRCCL report, there was a heavy emphasis on food production, with specific sections dedicated to emissions related to the movement of food (e.g., WGII Chapter 7, Chapter 9, and Chapter 21) [11,12,38,42]. The word "food" occurs very frequently in WGIII and in the SRCCL and co-occurs with many other common words, indicating that "food" in general is a central theme of those documents (Fig 3b and e, Table 1) and that treatment of the concept of trade most typically occurs in the context of food production and transportation. One of the key reasons for this is the concern that bioenergy with carbon capture and storage and afforestation-two critical components of pathways that attain stringent 1.5°C end-of-century targets-may interact with the food system through the diversion of agricultural inputs and land for climate mitigation [51].
Another important finding from the study is that trade-related keywords do not often co-occur and are rarely mentioned in the context of impacts, risk, and adaptation, which is the focus of WGII. Although there are some sections of the AR5 WGII report that focus on regional trade (Chapters 24 and 25) [39,40] and others that focus on the impact of climate change on shipping (e.g., Section 30.6.2.3) [39,40], overall, trade-related keywords appear very infrequently (Fig 2). There is very little discussion of the impacts or risks of climate change on different transport modes (air, rail, road, shipping, multimodal), despite a large body of recent literature outlining their importance [e.g., 20, 21, 52]. As mentioned above, and similar to trends observed in the AR5 WGIII report, when trade is being discussed, the focus is often on food production and the movement of food products, while other sectors are largely ignored (Fig 2). However, in Canada, for example, exports from the agri-food industry made up only 7% of the total value of exported goods in 2019, with energy and motor vehicles making up 19% and 15%, respectively [53]. It is possible that the limited treatment of trade within the WGII report occurred because of an abundance of literature on food and fiber products and the more limited number of published studies that explicitly examine climate change impacts on manufacturing, construction, energy and natural resources, and other economic sectors [54,55].
Indeed, the majority of literature on economic trade, and in particular that which is focused on different economic sectors, may exist outside the expertise and scope of the literature that IPCC authors tend to review. Often, these studies on economic trade do not specifically or explicitly mention climate change, making them more difficult to find, consider, and assess. Further, economic modeling-based studies are often conducted by researchers without climate expertise, and it is indeed extremely difficult to model economic trade in tandem with climate change. Many of the existing models that are used to understand climate policies are of a partial equilibrium nature, so they cannot easily capture trade. CGE models can do this but are a more specialized subset of integrated assessment models (IAM), face more challenges capturing the longer-term time horizons of climate policy, and involve trade-offs in the explicit representation of technologies. Further, integrating climate damages into models that focus on climate mitigation costs remains a longer-term goal of the IAM community. There is certainly a pressing need to address these modeling gaps and to consider the impacts and risks that climate change has, and could have, on global economic trade within the wider academic literature. Other factors that could limit the full assessment of trade (and other commerce-related subjects) could be based on author team composition, which may have led to certain topics being underrepresented within the IPCC reports where subject matter experts are not present [56,57]. Despite these challenges, relevant literature on international economic trade, including environment-related disruptions to supply chains and changing trade patterns and routes, does exist and should be properly assessed for inclusion within IPCC reports [58]. Because there is indeed a limited number of papers that explicitly examine climate change and international economic trade, it will be more challenging, but still possible, to review and assess relevant peer-reviewed literature that covers aspects of trade and climate change (separately or in some cases implicitly) for inclusion in future IPCC reports.
Conclusion
Society as we know it is based upon the international trade of goods and services, and climate change is expected to affect all aspects of the global economic trade system. Despite the importance of trade and the transport of goods to the global economy and society, little attention has been paid to this topic in recent IPCC assessments and special reports. Using preselected keywords related to trade, we were able to evaluate the overall treatment of trade within six recent (2012-2019) IPCC reports, determining that, overall, trade is neither intentionally nor comprehensively covered in any of the recent reports. The WGIII (mitigation) report does the best job, discussing trade through a focus on emissions resulting from transportation, while WGII (impacts and adaptation) has neglected to consider the impacts, risks, or adaptation options that will be needed to ensure a safe, secure, and efficient global economic trade system. The lack of attention paid to trade within the IPCC reports is surprising considering the global importance of trade for sustainable economic development and the focus that the United Nations Framework Convention on Climate Change (UNFCCC) places on climate action within an equitable and climate-resilient development framework [59]. Given the recent shift in the goals of the IPCC toward solutions-oriented reporting [60,61], the urgency of intentionally and comprehensively assessing the role of climate change in global economic trade, including adaptation options focused on securing important supply chains, has only increased.
Based on the results of this study and others [19,20,52,54,55], it is recommended that specific attention be paid to the impacts of climate change on trade and transport. This should signal a call to the academic community to concentrate research efforts on this topic to ensure that ample literature is in circulation for future assessment by the IPCC. It could be achieved through a variety of efforts within the IPCC ecosystem, for example, by devoting a cross-chapter box to climate change and economic trade, organizing a workshop or special meeting, or at some point in the future, by dedicating a special report to the global trade system or to climate change and the economy more generally. There is a large body of literature that outlines the impacts of climate change on transport infrastructure, and this needs to be discussed and treated more intentionally and more comprehensively in future IPCC assessment and special reports. To achieve this in the near term (i.e., for AR6), IPCC authors will be required to review relevant literature that exists outside of the climate change context, and which addresses the importance of trade generally, as well as for key trade routes, potentially vulnerable infrastructure, and key gaps in existing trade models. This will need to involve collaborations between social and physical scientists, as well as economic and IAM modelers. There is an important opportunity within the upcoming IPCC AR6 assessment report to explicitly link the importance of global economic trade by outlining the co-benefits of mitigation and adaptation around this theme through synergies between WGII and WGIII and within the synthesis report. Increased explicit treatment of global economic trade by the IPCC overall is imperative to fully understand the impacts, risks, adaptation options, and mitigation needs related to climate change on the trade system, which are very likely to affect the global economy and all of society.
| 2020-10-01T13:53:36.659Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "c203e515680919e794bb80c2687ff7c2036afd91",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40641-020-00163-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "c203e515680919e794bb80c2687ff7c2036afd91",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
221859796 | pes2o/s2orc | v3-fos-license | A rare case of Mycoplasma-induced rash and mucositis in a 44-year-old female patient
EM: erythema multiforme; MIRM: Mycoplasma-induced rash and mucositis; PCR: polymerase chain reaction; SJS: Stevens-Johnson syndrome; TEN: toxic epidermal necrolysis
INTRODUCTION
Mycoplasma-induced rash and mucositis (MIRM), also referred to as "reactive infectious mucosal-predominant eruption," is a relatively newly described entity. Differential diagnosis includes erythema multiforme, Stevens-Johnson syndrome (SJS), and toxic epidermal necrolysis. It is characterized by clinical symptoms of pneumonia, including fever and cough, and by mucosal lesions that usually affect 2 or more sites. In about half of the reported cases, it is also accompanied by a cutaneous eruption that is often vesiculobullous and sometimes presents with typical and atypical target lesions. It has been mostly reported in children and young adolescents, with a mean age of 12 years, and it affected males in two-thirds of the reported cases. 1,2 Here, we report an unusual case of MIRM occurring in a middle-aged female.
CASE REPORT
A 44-year-old Caucasian woman presented to the emergency department complaining of productive cough with fever for the previous month and painful mucosal lesions for the previous week, with no other pertinent past medical history. She had been taking lamotrigine and levothyroxine for 20 years without interruption, with no other recent medications. Physical examination revealed marked bilateral conjunctival injection (Fig 1), with 360-degree ulcerations of the bulbar conjunctiva and palpebral margins. There was no epithelial deficit on the forniceal and palpebral conjunctiva or on the cornea. She had several erosions and ulcers, 1-2 cm in diameter, on the entire oral mucosa (Fig 2) and, to a lesser extent, on the labia minora of the vulva. Hemorrhagic crusts were also seen on the nasal mucosa. A dozen erythematous papules and vesicles, a few millimeters in diameter, were scattered on the limbs (Fig 3), and there was one atypical target lesion on the left arm (Fig 4). Laboratory studies showed a WBC count of 10.42 × 10⁹/L (N = 4.20-10.00) with a neutrophil count of 7.230 × 10⁹/L (N = 1.900-7.000) and a C-reactive protein level of 254 mg/L (N < 10 mg/L). Renal function and liver enzyme levels remained normal. Herpes simplex polymerase chain reaction (PCR) was negative for both the lesions of the lips and the conjunctiva. Serum IgM for M pneumoniae was positive, and a nasopharyngeal swab for M pneumoniae PCR was also positive. PCR for Chlamydophila pneumoniae, Bordetella pertussis, and common viruses, including severe acute respiratory syndrome coronavirus 2, were all negative. Chest X-ray revealed a pulmonary infiltrate in the lower portion of the right lung lobe. Skin biopsy revealed interface dermatitis with numerous basal necrotic keratinocytes.
Based on these findings, MIRM was diagnosed. In addition to dermatologists, a multidisciplinary team was involved in the treatment, including infectious disease experts, ophthalmologists, gynecologists, otolaryngologists, and nutritionists. The patient was admitted and successfully treated with clarithromycin, 500 mg orally twice daily for 14 days, and oral prednisone that was started at 50 mg daily (1 mg/kg/day) and then tapered over 10 days. Local treatments included mouthwash with lidocaine, dexamethasone elixir, and prednisolone 1% eye drops. Amniotic membrane grafts were also placed over the entire conjunctival mucosa of both eyes in the form of biological bandages with local anti-inflammatory and antiscarring effects. The patient was hospitalized for a total of 13 days until complete healing of all mucous membranes and skin lesions was achieved.
DISCUSSION
M pneumoniae infection, ascertained using PCR, together with extensive involvement of the ocular, nasal, oral, and genital mucosae, was confirmed as classic MIRM, given the sparse and mainly distally distributed cutaneous vesiculobullous lesions. MIRM sine rash and severe MIRM, the two other types of MIRM, present with no significant cutaneous rash and with widespread non-mucosal blisters or flat, atypical target lesions, respectively. 1 Although only recently described, MIRM is now a well-recognized distinct entity. It is almost exclusively seen in the pediatric population, probably because M pneumoniae infections tend to be more symptomatic in this age group. 3 Only four case reports of MIRM in adults were found in the literature, including three men (26, 27, and 42 years old) with classic MIRM and one 46-year-old man with MIRM skin rash. 4,5,6,7 There is a known male predominance in MIRM, with 66% of the identified cases occurring in males, and this predominance may be even more pronounced in the adult population. There is some evidence that men develop more severe lung disease in response to M pneumoniae infections as compared to women. 8 The question is whether this also applies to the skin. To our knowledge, our patient represents the first case of an adult female diagnosed with classic or any other type of MIRM.
Regardless of the age and sex of the patient, MIRM is an important diagnosis to consider when a patient presents with an acute mucocutaneous eruption. Indeed, the main differential considerations for MIRM are erythema multiforme, SJS, and toxic epidermal necrolysis, and early diagnosis is important for its appropriate management. 1 Caution must be exercised when interpreting the histopathology report because there is nothing pathognomonic for MIRM, and it can encompass well-described features of EM and SJS/TEN. 1 MIRM patients generally have a good prognosis overall and respond well to oral antibiotics (e.g., azithromycin or clarithromycin) and immunosuppressive therapy, such as systemic corticosteroids. 1,2 However, the prevention of severe long-term ocular sequelae, such as those seen in SJS, warrants early and aggressive ophthalmological intervention. In the current case, the medical team kept a low threshold of suspicion for MIRM, despite the patient being an adult woman, which allowed early diagnosis and rapid implementation of the appropriate treatment.
"year": 2020,
"sha1": "69acf848cf8885bef3fea683741139cd249e22fd",
"oa_license": "CCBYNCND",
"oa_url": "http://www.jaadcasereports.org/article/S2352512620306858/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d43f9c05409dec61e08cdf07d40834e5ed9bb0fe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
46796980 | pes2o/s2orc | v3-fos-license | Modified method for distinguishing the intersegmental border for lung segmentectomy
This paper analyzed the results of a modified and simpler technique for distinguishing the intersegmental border during lung segmentectomy surgery. From January 2013 to December 2015, 539 patients with screening‐detected lung nodules <2 cm in maximum diameter underwent anatomic segmentectomy. With the guidance of preoperative three‐dimensional computed tomography bronchography and angiography, the bronchus, artery, and intrasegmental vein of the targeted segment could be precisely dissected under unilateral differential ventilation, and then intersegmental demarcation was confirmed by the modified inflation‐deflation method. The demarcation presented by this method was highly coincident with the real intersegmental border. Dissection along the border between the collapsed and inflated segments using either electrocautery or staples was safe, with almost no air leak or bleeding. This technique is a simple and effective alternative to previously described intersegmental border marking methods.
Introduction
With the guidance of three-dimensional computed tomography bronchography and angiography (3D-CTBA), preoperative anatomical identification of lung structures is becoming more and more accurate, and precise segmentectomy has gradually become the standard approach for small, early-stage lung cancer. 1 However, because of individual variation, identification of the intersegmental border still presents difficulties during surgery. Several methods for identifying the intersegmental border have been reported, most of which fall into two categories: creating an inflation-deflation demarcation line or dyeing the border. 2,3 Herein, we report a modified and simpler technique for distinguishing the intersegmental border by combining the guidance of preoperative 3D-CTBA and intraoperative discernment without any additional auxiliary materials.
Methods
This retrospective study was reviewed and approved by The First Affiliated Hospital of Nanjing Medical University Review Board, and individual patient consent was waived according to institutional guidelines.
From January 2013 to December 2015, 539 patients with screening-detected lung nodules <2 cm in maximum diameter underwent anatomic segmentectomy (Table 1). The inclusion criteria for all surgical patients were adjusted according to the revision of the National Comprehensive Cancer Network Clinical Practice Guidelines in Oncology: Non-Small Cell Lung Cancer. 3D-CTBA was applied to identify the targeted segmental bronchus and the intrasegmental and intersegmental veins and arteries, and the simulated surgical path and resection range were confirmed before surgery (Fig 1a,b).
After endotracheal intubation, in a state of unilateral differential ventilation, the targeted segmental bronchus, artery, and intrasegmental vein were identified and dissected by ligation or stapler cutting, and the collapsed lung was then re-expanded completely with controlled airway pressure under 20 cmH2O, with the bronchus of the operated side open to the atmosphere while ventilation of the contralateral lung continued. Five to 12 minutes later, an irregular demarcation developed naturally between the inflated targeted segment and the deflated surrounding segments, which represented the intersegmental border to be operated on (Fig 1c). Combining the anatomical intersegmental vein orientation and the demarcation between the collapsed and inflated segments, a cone-shaped dissection using either electrocautery or staples was performed safely to complete the anatomical segmentectomy (Fig 1d). 4
Results
In 532 cases, there was a distinct demarcation between the targeted and surrounding segments, and the time required for this demarcation to develop ranged from 5 to 12 minutes (median 8 minutes). In the other seven cases, a clear line at the first inflation-deflation process did not develop as the involved intrasegmental veins, which had not been reconstructed in the 3D-CTBA models, were not adequately preserved. After cutting these intrasegmental veins and re-attempting the inflation-deflation process, satisfactory demarcation was observed in all seven cases. The cutting surface provided by this method was highly coincident with the real intersegmental border. Dissection along the border between the collapsed and inflated segments using either electrocautery or staples is safe, with almost no air leak or bleeding.
(Table 1 legend. Right: S1, apical segment; S2, posterior segment; S3, anterior segment; S4, lateral segment; S5, medial segment; S6, superior segment; S7, medial-basal segment; S8, anterior-basal segment; S9, lateral-basal segment; S10, posterior-basal segment. Left: S*, subsuperior segment; S1+2, apico-posterior segment; S3, anterior segment; S4, superior lingular segment; S5, inferior lingular segment; S6, superior segment; S8, anterior-basal segment; S9, lateral-basal segment; S10, posterior-basal segment.)
Discussion
The method developed in our center has a prominent advantage over other methods in that it does not require additional auxiliary materials. Tsubota reported a similar technique, but with his method, accurate identification and dissection of the targeted bronchus, artery, and intrasegmental veins under the guidance of preoperative 3D-CTBA was impossible, and incorrect division of the main intersegmental vein may cause poor border development. 5 Oizumi et al. also reported a modified Roeder knot technique for bronchial ligation and to visualize the anatomic plane during lung segmentectomy. 6 However, we optimized the operational process by dissecting the bronchus, artery, and intrasegmental vein of the targeted segment under unilateral differential ventilation, which saved the steps of making an additional knot and tying this knot to dissect the targeted segmental bronchus after re-expansion of the whole lung. Furthermore, in the above methods, the pressure to reflate the lung after dissection of the targeted bronchus, artery, and intrasegmental vein and the time required for the inflation-deflation line to develop were not clearly defined. Thorough re-expansion of the whole lung, including the bronchus-dissected segment, should be confirmed, and we suggest 20 cmH2O as a moderate pressure, which can balance the need for re-expansion and avoid possible pressure trauma. Only under the given pressure can the bronchus-dissected segment be inflated by the airstream passing through the pores of Kohn. After removal of the targeted segment, airway pressure under 20 cmH2O was applied to re-expand the remnant lung to decrease the possibility of injury to the cutting edge. Waiting time was another critical factor for the emergence of ideal demarcation, although it varied from case to case. With precise dissection of the targeted tissue, the quality of demarcation was positively correlated with the waiting time, to a certain extent.
As the prerequisite for obtaining a good demarcation between the targeted and surrounding segments, accurate identification and dissection of the targeted bronchus, arteries, and intrasegmental veins should be guaranteed by combining the guidance of 3D-CTBA with intraoperative discernment. The seven cases that did not initially develop an ideal marking border at the first inflation-deflation process demonstrate this necessity, because the preoperative 3D-CTBA model is still not fully identical to the actual intraoperative anatomy. The conditions of the lung during preoperative CT detection and intraoperatively are slightly different; thus, experience is required for accurate identification. In addition, the reconstruction result is closely related to pulmonary parenchyma quality and the degree of bronchus inflation. Poor contrast has a detrimental effect on the reconstruction, which can reduce the quality of surgical guidance. In this case series, there were 21 patients with varying degrees of emphysema diagnosed via radiographic evidence but a lack of clinical signs, and in these patients the waiting time for demarcation to appear was universally prolonged.
Development of the inflation-collapse line was postulated to involve at least two mechanisms. First, without the segment bronchus as the gas outlet, the gaseous exchange between the targeted segment and the atmosphere was blocked. Second, the blood vessels were ligated simultaneously, so the gas transfer between the pulmonary alveoli and the blood vessels was blocked.
Another tip worth discussing is the method of cutting the intersegmental border. Because the inflation-deflation demarcation can wane from the hilum to the costal surface, owing to a decrease in the connective tissue of the septum lobulae, the hilum of the targeted segment should be dissected sharply, extending into the peripheral parenchyma as much as possible, while the one third of the peripheral parenchyma with an obscure intersegmental border can be tailored with a stapler. This method can eliminate the curling effect caused by linear cutting of the stapler on the convex surface of the lung.
In conclusion, this technique is a simple and effective alternative to previously described intersegmental border marking methods. However, the concrete physiological mechanisms and factors affecting inflation-deflation development time still require further study.
"year": 2017,
"sha1": "22f0af91cd65a0e7f76a8de0662c0f144c2f4e52",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1759-7714.12540",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "22f0af91cd65a0e7f76a8de0662c0f144c2f4e52",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of Vascular Endothelial Growth Factor (VEGF) on Dental Pulp Stem Cells (DPSC)
Corresponding Author: Karl Kingsley Department of Advanced Education in Orthodontics, University of Nevada, Las Vegas-School of Dental Medicine, 1700 West Charleston, Las Vegas, Nevada, 89106, USA Tel: (702)774-2623 Email: Karl.Kingsley@unlv.edu Abstract: Dental Pulp Stem Cells (DPSCs) are non-embryonic, mesenchymal stem cells that may have significant potential for therapeutic and regenerative biomedical applications. Studies of DPSC differentiation have demonstrated the potential to form many tissue types, including neural, osteogenic and vascular precursors using cytokines and growth factors, such as Vascular Endothelial Growth Factor (VEGF). Eight previously isolated Dental Pulp Stem Cell (DPSC) isolates were grown in culture and treated with VEGF to evaluate any effects on growth, viability or biomarker expression. Administration of VEGF at 10 ng/mL significantly inhibited growth in two rapidly dividing or rDT DPSC isolates, with no other measurable effects noted among the intermediate (iDT) or slow (sDT) growing DPSC isolates. In addition, administration of VEGF had no significant effects on viability of the sDT or iDT DPSC isolates, however, all three of the rapidly dividing or rDT DPSC isolates exhibited significantly increased viability. Finally, mRNA expression of osteogenic biomarkers Alkaline Phosphatase (ALP) and Dentin Sialophosphoprotein (DSPP) was observed among the rDT isolates with specific combinations of DPSC biomarkers expressed (NANOG in combination with Sox-2 or Oct-4 but not both). The results of these data suggest that VEGF administration may be sufficient to induce partial differentiation of DPSC isolates, although this may be dependent upon the MSC biomarker expression of the DPSCs. These preliminary data may further research into the potential for tissue regeneration and bioengineering.
Introduction
Dental Pulp Stem Cells (DPSCs) are non-embryonic, mesenchymal stem cells that can be obtained, isolated, cultured and cryopreserved with relative ease compared with other potential sources, which has driven recent scientific research into their potential for therapeutic applications (Ferro et al., 2014; Gronthos et al., 2011; Collart-Dutilleul et al., 2015). Harvested from the dental pulp of primary teeth, extracted teeth, or avulsed teeth, DPSCs are multi-potent stem cells that may be useful to facilitate advanced regenerative therapies (Kabir et al., 2014; Aurrekoetxea et al., 2015). These studies have provided a better overall understanding of the capabilities of Mesenchymal Stem Cells (MSCs) and DPSCs, with recent evidence demonstrating that differentiation potential may depend, in part, on the tissue of origin used in MSC harvesting (Masthan et al., 2013; Isobe et al., 2016; Hernández-Monjaraz et al., 2018).
Studies done on DPSC differentiation have demonstrated the potential to form many tissue types, including neural, osteogenic and vascular precursors (Gonmanee et al., 2018; Kim et al., 2012; Zhang et al., 2016). Much progress has been made towards the in vitro and in vivo differentiation of DPSCs towards specific cell lineages (Zhang et al., 2008; Kanafi et al., 2013). In fact, some evidence now suggests that individual growth factors, such as Vascular Endothelial Growth Factor (VEGF), may be sufficient to induce partial differentiation of DPSCs, although this may be more dependent upon specific biomarkers or DPSC characteristics (D'Alimonte et al., 2011; Janebodin et al., 2013; Silva et al., 2017).
It has been demonstrated that DPSCs can be stimulated with VEGF, through the canonical Wnt/β-catenin pathway, to differentiate into blood vessels in a process resembling embryonic vasculogenesis, revealing the importance of this growth factor (VEGF) in angiogenesis as well as its potential for regenerative vasculogenesis (Zhang et al., 2016; 2008; Silva et al., 2017). However, the majority of studies to date have examined how the family of VEGF ligands acts on specific tyrosine kinase receptors to create intracellular responses in differentiated vascular endothelial cells, while the cellular responses to, and intracellular effects of, VEGF on various lineages of multipotent DPSCs remain relatively unknown (Aksel and Huang, 2017).
Differentiation potential and stemness may be linked with specific intracellular MSC biomarkers such as the expression of Sox-2, Oct-4 and NANOG, which have been found to be highly associated with the pluripotency of cells, including DPSCs (Alraies et al., 2017; Ferro et al., 2012). The presence or absence of these biomarkers in cultured DPSCs may determine the ability of the isolates to differentiate and self-replicate (Martens et al., 2012; Xiao et al., 2014; Bakkar et al., 2017). Based upon this understanding, the primary objective of this study was to evaluate the effects of VEGF on several DPSC isolates and to further evaluate the expression of specific biomarkers that may indicate pluripotency, as well as differentiation.
Study Approval
The protocol for this study was reviewed and approved by the Office for the Protection of Research Subjects (OPRS) and Institutional Review Board (IRB) OPRS#763012-1 "Retrospective analysis of Dental Pulp Stem Cells (DPSC) from the University of Nevada Las Vegas (UNLV) School of Dental Medicine (SDM) pediatric and adult clinical population". The original protocol for the collection and isolation of DPSC was approved by the IRB and OPRS#0907-3148 "Isolation of Non-Embryonic Stem Cells from Dental Pulp".
Study Design
This retrospective study involved the analysis of DPSCs previously isolated from clinical patients, recruited at random from the UNLV-SDM pediatric clinic. Inclusion criteria included adult patients, or pediatric patients aged seven (7) or older with their parent's or guardian's permission, who agreed to participate and were scheduled for extraction of healthy (vital), intact teeth prior to the initiation of orthodontic treatment. Pediatric assent and parental permission to consent for voluntary participation were obtained at the time of study enrollment. Exclusion criteria included any patient, parent or child that was not a patient of record at UNLV-SDM, any patient or guardian who declined to participate, and any patients having teeth extracted due to injury (fracture), infection or other disease.
DPSC Collection (Initial)
In brief, the overwhelming majority of patients who agreed to participate were scheduled for tooth extractions of third molars. Once extracted, each tooth was sectioned at the Cemento-Enamel Junction (CEJ) to allow extraction of the dental pulp with an endodontic broach for transfer into a sterile microcentrifuge tube containing 1X Phosphate-Buffered Saline (PBS).
Samples were stored on ice until transfer to a biomedical laboratory for processing and screening. To prevent research bias and prevent any patient identifying information from being disclosed, a randomly generated, non-duplicated number was assigned to each sample and concurrent patient demographic information collected.
No patient-specific identifying information was subsequently available to any research team member.
Culture and Propagation
Briefly, cells were cultured and propagated for ten passages to determine the rate of growth and Doubling Time (DT). Each DPSC isolate was passaged (split) 1:2, and confluence was determined with trypan blue and a BioRad TC20 automated cell counter (Hercules, CA), using the manufacturer's recommended protocol. Data collected included total and live cell numbers and the resulting percentage of viable cells for analysis. Doubling Time (DT) was categorized as rapid or rDT (~2 days), intermediate or iDT (4-6 days), and slow or sDT (10-12 days).
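As a minimal sketch of how DT follows from such serial counts (the function name and the example counts below are ours, for illustration only, not values from this study):

```python
import numpy as np

def doubling_time_days(hours_elapsed, n_start, n_end):
    """Estimate population doubling time (days) from live-cell counts
    taken at the start and end of one passage interval."""
    doublings = np.log2(n_end / n_start)
    return (hours_elapsed / 24.0) / doublings

# Hypothetical counts for one isolate over a four-day passage:
dt = doubling_time_days(hours_elapsed=96, n_start=1.2e4, n_end=4.8e4)
print(f"DT = {dt:.1f} days")  # two doublings in four days -> 2.0 days (rDT)
```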
Experimental Protocol
To determine any effects on DPSCs, the cells were plated into 96-well tissue culture treated plates at a concentration of 1.2×10 4 cells/mL. Negative (nontreated) control cells were compared with cells treated with Vascular Endothelial Growth Factor (VEGF) from ThermoFisher Scientific (PCH9394) at a concentration of 10 ng/mL. Eight replicates were performed in each experiment for all DPSC isolates, which were repeated for a total of three experimental trials (n = 24).
RNA Isolation
To assess any changes to differentiation, total RNA was isolated from each isolate using the Total RNA Isolation Reagent (TRIR) from Molecular Research Center (Cincinnati, OH) using the protocol recommended by the manufacturer. RNA was subsequently screened for quality and quantity using ratio measurements of absorbance at 260 and 280 nm (A260/A280 ratio).
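A small helper of the kind typically used for this screening step (the acceptance band below is a common rule of thumb for RNA purity, not a threshold stated in this study):

```python
def rna_purity_ok(a260, a280, lo=1.8, hi=2.1):
    """Flag RNA whose A260/A280 absorbance ratio falls inside a
    commonly used purity band for downstream RT-PCR."""
    return lo <= a260 / a280 <= hi

print(rna_purity_ok(0.84, 0.42))  # ratio 2.0 -> True
```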
Polymerase Chain Reaction (PCR)
Screening for changes to mRNA expression in each DPSC isolate was accomplished using the ABgene Reverse-iT One-Step RT-PCR protocol and reagent kit, with an initial reverse transcription at 47°C for 30 min, followed by 30 amplification cycles with annealing for 30 sec at the appropriate temperature for each primer set and a final extension at 60°C for one minute. Primers synthesized by Eurofins MWG Operon (Huntsville, AL) were:
Statistical Analysis
Basic proliferation and viability information regarding the DPSC isolates were compiled and presented using simple descriptive statistics (counts and percentages). Changes to viability or proliferation were calculated and compared using two-tailed t-tests, which are appropriate for parametric data analysis. Due to the potential for Type I error, all analyses were subsequently confirmed using analysis of variance (ANOVA).
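A minimal sketch of this two-step analysis (the viability readings below are hypothetical, purely to make the snippet runnable):

```python
import numpy as np
from scipy import stats

# Hypothetical viability readings (%) for one isolate, 24 replicates per arm
control = np.array([85, 88, 84, 86, 90, 87, 83, 89] * 3, dtype=float)
vegf = np.array([90, 93, 91, 89, 94, 92, 88, 95] * 3, dtype=float)

t_stat, p_t = stats.ttest_ind(vegf, control)   # two-tailed t-test
f_stat, p_f = stats.f_oneway(control, vegf)    # confirmatory one-way ANOVA

print(f"t = {t_stat:.2f} (p = {p_t:.3g}); F = {f_stat:.2f} (p = {p_f:.3g})")
```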
Results
To determine any effects on DPSC phenotypes, vascular endothelial growth factor (VEGF) was administered in 96-well assays (Fig. 1). These results demonstrated that the majority of DPSC isolates were not significantly affected by VEGF administration, p>0.05. However, two DPSC isolates (dpsc-3882, dpsc-5653) had significant measurable decreases in proliferation under VEGF administration, p = 0.038 and p = 0.041 respectively. In addition, dpsc-3882 and dpsc-5653 were both categorized as having rapid doubling times or rDT.
To evaluate if the observed changes in proliferation and cellular growth correlated with any changes to other DPSC phenotypes, cellular viability was also measured under VEGF administration (Fig. 2). Although no significant changes to viability were noted among the iDT or sDT DPSC isolates under VEGF administration, all three of the rDT DPSC isolates demonstrated significant measurable increases in viability over the 72-hour time course, p<0.05.
To determine if any of the changes to cellular growth or viability induced by VEGF administration among the DPSC isolates were associated with changes to DPSC biomarkers for osteoblastic differentiation, RT-PCR screening of RNA was performed (Fig. 3). In brief, primers specific for Alkaline Phosphatase (ALP) and Dentin Sialophosphoprotein (DSPP) were used to screen for mRNA expression of these biomarkers. These results demonstrated that VEGF administration was sufficient to induce mRNA expression of ALP in two DPSC isolates (dpsc-3882, dpsc-5653). In addition, VEGF administration was also sufficient to induce DSPP mRNA expression in one DPSC isolate (dpsc-3882).
Finally, an evaluation of the MSC biomarkers for each DPSC isolate was performed to determine if there were any associations with VEGF responsiveness (Fig. 4). This analysis revealed that the MSC biomarkers Sox-2, Oct-4 and NANOG were differentially expressed by the DPSC isolates (Fig. 4A). For example, the rDT DPSC isolates each had a distinct expression profile (dpsc-3882: Oct-4, NANOG; dpsc-5653: Sox-2, NANOG; dpsc-7089: Sox-2, Oct-4, NANOG). In contrast, none of the iDT DPSC isolates expressed Oct-4, while none of the sDT isolates expressed either Sox-2 or Oct-4.
When combined with the results of the VEGF assay, these data demonstrated that only the rDT DPSC isolates that expressed a combination of NANOG with either Oct-4 or Sox-2 (but not both) were responsive to VEGF administration (Fig. 4B). More specifically, the rDT DPSC isolate expressing a combination of Oct-4 and NANOG exhibited the most robust VEGF response, producing both ALP and DSPP (dpsc-3882). The rDT DPSC isolate expressing the combination of Sox-2 and NANOG exhibited some response to VEGF, producing ALP but not DSPP (dpsc-5653). However, the rDT isolate that expressed all three MSC biomarkers (Sox-2, Oct-4, NANOG) did not exhibit an osteogenic response to VEGF administration, similar to the negative response of the iDT DPSC isolates (Sox-2, NANOG) and sDT DPSC isolates (NANOG only).

Fig. 1: Effects of VEGF administration on DPSC growth. Administration of VEGF at 10 ng/mL had a significant effect on two rapidly dividing (rDT) DPSC isolates, dpsc-3882 and dpsc-5653, which were significantly lower than the negative controls (p = 0.038 and p = 0.041, respectively). No other measurable effects were noted among the intermediate (iDT) or slow (sDT) DPSC isolates.
Discussion
Research that has evaluated DPSC differentiation has demonstrated the potential to form many cell types, including neural, osteogenic and vascular precursors, using cytokines and growth factors such as vascular endothelial growth factor (Zhang et al., 2016; 2008). However, the phenotypic and cellular effects of VEGF on various lineages of multipotent DPSCs remain relatively unknown; therefore, this study sought to evaluate these effects on several DPSC isolates with distinct markers of pluripotency (D'Alimonte et al., 2011; Janebodin et al., 2013; Silva et al., 2017). The results of this study demonstrated that VEGF has distinct and specific effects on DPSC phenotypes, although these were not observed uniformly among all DPSC isolates.
For example, cellular growth and viability were markedly affected by VEGF only among the rapidly growing (rDT) DPSC isolates, which mirrors similar findings of VEGF effects on rapidly dividing MSC from other tissues (Chen et al., 2018;Healy et al., 2015;Yuan et al., 2011). In addition, VEGF appears to induce osteogenic biomarker expression in a subset of rDT DPSC isolates, a finding that appears to support observations of VEGF osteogenic effects in other MSCs (Zavan et al., 2017;Leegwater et al., 2017;Murakami et al., 2017). To understand these observations more thoroughly, an analysis of MSC biomarker expression and the associations with osteogenic marker induction may be necessary (Bakkar et al., 2017;Bakopoulou et al., 2017;Karamzadeh et al., 2012).
New evidence has suggested that MSC biomarker expression in DPSCs may determine, in part, their differentiation potential and responsiveness to external stimuli (Xie et al., 2018; Zhang et al., 2017; Xie et al., 2017). The results of this study support these findings, with observations that rapidly dividing DPSC isolates expressing NANOG in combination with either Sox-2 or Oct-4 were responsive to VEGF administration. This research may also provide a potential explanation for the observation that the DPSC isolate expressing all three of Oct-4, Sox-2 and NANOG was not responsive to VEGF administration, noting that "stemness" and pluripotency are correlated with MSC biomarker expression and that DPSC expression of more MSC biomarkers may indicate that more than one stimulus or induction factor may be needed to facilitate differentiation (Pisal et al., 2018; Lee et al., 2017).
Conclusion
The results of these data suggest that VEGF administration may be sufficient to induce partial differentiation of DPSC isolates, although this may be dependent upon the MSC biomarker expression of the DPSCs. In addition, the phenotypic changes to these DPSC isolates (decreased growth, increased viability) support these observations and may provide preliminary data to further research into the potential for osteogenic differentiation of DPSCs. This may contribute to the overall, long-term goals of DPSC use for tissue regeneration and bioengineering.
"year": 2019,
"sha1": "de69e817f5a76f04c0a63c0bb88c99172e4e2272",
"oa_license": "CCBY",
"oa_url": "https://thescipub.com/pdf/amjsp.2019.1.8.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a3fddb5b3663c938eeef2252c340467e7a931fe0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
Developing an assessment framework of multidimensional scientific competencies
Received Feb 18, 2020; Revised Sep 25, 2020; Accepted Oct 30, 2020

The study aimed to develop and validate an assessment framework of multidimensional scientific competencies for seventh-grade students in the northeastern region of Thailand. A total of 289 samples with three different scientific competency levels were randomly selected to participate as test-takers. The design-based research encompassed four phases of the construct modelling approach, namely construct maps, item design, outcome space, and Wright map. The Multidimensional Random Coefficients Multinomial Logit model was employed to examine the quality of the created assessment framework of multidimensional scientific competencies. The results showed that scientific competence comprises three dimensions, namely explain phenomena scientifically, evaluate and design scientific inquiry, and interpret data and evidence scientifically. Each dimension can be further categorized into four levels. The assessment framework consists of 16 items. The results revealed validity evidence regarding internal structure based on the comparison of model fit and the Wright map. Moreover, the results also indicated that the reliability evidence and item fit support the quality of the assessment framework, as revealed in the analysis of the standard error of measurement and the infit and outfit of the items. It can be concluded that the assessment framework is suitable for assessing the scientific competencies of seventh-grade students.
INTRODUCTION
Science learning is a discovery process to find out about nature systematically, and it is an activity to learn about universal forms and events [1]. Science learning aims to develop skills and creativity based on scientific knowledge relevant to everyday life and to decision-making for problem-solving [2,3]. Therefore, scientific competency requires not just knowledge of science concepts but also knowledge of the common procedures and practices associated with scientific inquiry, and of how these enable science to advance [4,5]. Many of the challenges of 21st-century science education will require teachers to provide innovative solutions that have a basis in scientific thinking and scientific discovery in their teaching and assessments [5]. A scientific competency assessment framework is significantly relevant to these challenges because it can provide criteria to judge the progress of students' learning [6]. The Program for International Student Assessment (PISA) science framework defined scientific competencies as abilities to engage with science-related issues, and with the ideas of science, as a reflective individual. A scientifically literate student is willing to engage in reasoned discourse about science and technology, which requires competencies in three dimensions, namely, explain phenomena scientifically (ES), interpret data and evidence scientifically (IE), and evaluate and design scientific inquiry (ED) [7]. The ES dimension means students can recognize, offer, and evaluate explanations for a range of natural and technological phenomena. The ED dimension is defined as students being able to describe and appraise scientific investigations and propose ways of addressing questions scientifically. The IE dimension refers to students who can analyze and evaluate claims and arguments in a variety of representations and draw appropriate scientific conclusions [7].
An assessment framework of scientific competency is important to science teachers because they can use the results to find a congruent line between the learning activities that students carry out in the classroom and how they are being evaluated. A common mistake is the gap that may exist between what the teachers expect students to learn and the assessment framework that ensures the students acquire the competencies defined [8]. This study aimed to develop a sound assessment framework to examine seventh-grade students' scientific competencies in three dimensions adopted from the PISA science framework, namely ES, ED, and IE. The study is timely and important because the results will provide evidence of the quality of the assessment framework in terms of its validity and reliability in the actual science classroom context.
RESEARCH METHOD
Researchers espoused the construct modeling approach [9][10][11] and the design-based research method [12][13][14] to develop the assessment framework, called the Multidimensional Scientific Competencies Framework. The Multidimensional Random Coefficients Multinomial Logit Model (MRCMLM) was then used to validate the quality of the Multidimensional Scientific Competencies Framework.
Samples of the research
Because the researchers utilized an MRCMLM to analyze the quality of the Multidimensional Scientific Competencies Framework, a sufficient sample size had to be taken into consideration [15,16]. The required sample size for estimating item parameters in Rasch-family models is 100 to provide accurate parameter estimates [17][18][19][20]. A total of 289 seventh-grade students with three different scientific competency levels from schools in the northeastern region of Thailand were selected as test-takers using stratified random sampling. The three different scientific competency levels were identified using the Pure Substance Concept Test.
Multidimensional random coefficients multinomial logit model (MRCMLM)
MRCMLM is a multidimensional Rasch-type item response model that emphasizes the interaction between test-takers and the test [15]. The MRCMLM is developed in a form that permits the generalization of a wide class of Rasch models to the multidimensional case. Students' answers are used to estimate their ability parameters in multiple dimensions [21]; an item can thus be developed to measure more than one characteristic, revealing students' various abilities. Another key aspect of the MRCMLM is that it serves as a measurement model for examining the quality of the instrument in terms of its validity and reliability.
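For reference, a sketch of the MRCML response probability in its standard form (our notation, written from the usual formulation of the model rather than from this paper):

$$P(X_{ik}=1\mid\boldsymbol{\theta}) \;=\; \frac{\exp\!\big(\mathbf{b}_{ik}^{\top}\boldsymbol{\theta}+\mathbf{a}_{ik}^{\top}\boldsymbol{\xi}\big)}{\sum_{k'=1}^{K_i}\exp\!\big(\mathbf{b}_{ik'}^{\top}\boldsymbol{\theta}+\mathbf{a}_{ik'}^{\top}\boldsymbol{\xi}\big)}$$

where θ is the vector of latent abilities (here the ES, ED, and IE dimensions), ξ the vector of item parameters, b_ik the scoring vector mapping category k of item i onto the dimensions, and a_ik the design vector selecting the item parameters for that category.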
Research procedure
The research procedure consisted of four phases. Researchers started to investigate seventh-grade students' scientific competencies after having several comprehensive discussions with science teachers about the core curriculum in the basic education 2008 (revised edition in 2017) in the first phase. The main emphasis of the discussions was related to the conceptual understanding, issues, and discrepancies of scientific competencies compared to the standards of the Thai national core curriculum. Data was collected using the interview method and relying on Think-aloud techniques. The interview results from the first phase revealed the amount of scientific competency achievement at the end of each unit in general, teachers' essential feedback on students' scientific competency levels that contributing to science learning, and the problems of creating assessment tool. Researchers found that one of the most effective tools used by teachers in assessing the quantity and quality of students' scientific competencies is a test.
Based on the results from the first phase, researchers continued to collaborate with science teachers to create the construct map, consisting of four construct levels (Low, Basic, Intermediate, and High) for each dimension of scientific competencies (ES, IE, and ED), to fit the actual science classroom context [7]. Based on the results of the second phase, researchers continued to design the tasks and items to develop an assessment framework. This assessment framework is a prototype consisting of 16 items, namely seven items for the ES dimension, four items for the ED dimension, and five items for the IE dimension. The assessment framework was developed with reference to Thailand's basic education core curriculum 2008 (revised 2017) physical science standard and employed polytomous scoring. Besides, the constraint that each item measures only one dimension was taken into account. Figure 1 is an example of a task in the assessment framework. The meaning of each level in each dimension was interpreted according to the learning-outcome grading, the so-called outcome space. This outcome space was set so that it would be consistent with the construct map of learning outcomes and students' responses in the actual context.
Figure 1. Sample test item following IE construct
In the third phase, researchers tried out the prototype, followed by making the necessary improvements and modifications, if any. This phase aimed to validate the developed assessment framework. Firstly, researchers used the prototype to assess students' responses and adjusted those items or tasks to be more consistent not only with the assessment framework but also with students' learning context. Secondly, researchers tried out the prototype again to investigate its accuracy, reliability, and quality using multidimensional test response theory [21]. Finally, researchers examined the structural validity of the prototype using a multidimensional analysis method, the alpha reliability coefficient, and the Expected-A-Posteriori/Plausible Value (EAP/PV) reliability. This was followed by analyzing the internal structure of the assessment framework using the Wright map. The Wright map can provide a picture of the items in the assessment framework by placing the difficulty of the items or tasks on the same measurement scale as the ability of the test-takers. This helps the researchers compare students and items, to better understand how appropriately the assessment tool measured the students' scientific competencies. After examining the quality of the prototype, the researchers identified the transition points between scientific competency levels and made the necessary improvements at each level according to the interpretation set by [22,23]. Besides, researchers utilized ACER ConQuest 2.0 to analyze the quality of the assessment framework [24] through a between-item multidimensional model.
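A Wright map of this kind can be drafted with a few lines of plotting code; a minimal sketch (the person abilities below are simulated, and the 16 thresholds are merely spread over the difficulty range reported later, so neither reproduces this study's data):

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated person abilities (n = 289 test-takers) and illustrative
# item thresholds spread over the reported difficulty range (logits)
persons = np.random.default_rng(0).normal(0.0, 1.0, 289)
thresholds = np.linspace(-1.24, 1.22, 16)

fig, (ax_p, ax_i) = plt.subplots(1, 2, sharey=True, figsize=(6, 4))
ax_p.hist(persons, bins=20, orientation="horizontal")
ax_p.set(title="Persons", ylabel="Logit scale")
ax_i.plot(np.zeros_like(thresholds), thresholds, "k_", markersize=20)
ax_i.set(title="Items")
plt.show()
```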
The final phase was to report the development of the assessment framework of multidimensional scientific competencies, namely, explain phenomena scientifically, evaluate and design scientific inquiry, and interpret data and evidence scientifically. This assessment framework includes the transition point for each scientific competency level of each dimension as well.
Learning Standard S 2.1: Understanding the properties of matter, the composition of matter, the relationship between the properties of matter and structure, and the inter-particle bonding force. Understanding the principles and nature of changes in the state of matter, solution formation, and the occurrence of chemical reactions.
RESULTS AND DISCUSSION
The results of this study are presented by following the study aim specified above. The preliminary result was the development of the assessment framework to measure seventh-grade students' scientific competencies in three dimensions. This is followed by developing the construct maps for each scientific competency dimension. Next, the validity and reliability of the developed assessment framework were examined. Finally, researchers reported the quality of the assessment framework by examining the item fit.
Developing assessment framework
Assessment framework consists of three construct maps of scientific competency levels, as shown in Table 1 to Table 3. Each dimension consists of a name and a description of the respondent at each level. The three dimensions each consist of four levels: low, basic, intermediate, and high.

Table 1. Construct map for the ES dimension
High
• Can draw or describe the relationship between scientific ideas and concepts from substance and properties of substances to make predictions.
• Can construct explanations of novel and unfamiliar phenomena, events, and processes that may involve several steps.
• Can demonstrate the use of knowledge beyond the standard of science curricula and use procedural and epistemic knowledge appropriately.
Intermediate
• Can recall or use the given scientific ideas and concepts from substance and properties of substances to construct explanations.
• Can explain relatively complex phenomena or less familiar events and processes.
Basic
• Can recall, apply simple scientific facts, and draw upon complex scientific ideas moderately.
• Can construct simple explanations with relevant cues and support appropriately.
Low
• Can recognize scientific terms.
• Can use single scientific facts that are close to their personal experience and the given simple scientific concept to identify an appropriate scientific explanation.
• Can explain a familiar event or process that is consistent with the given information.

Table 2. Construct map for the ED dimension
High
• Can interpret data drawn from more complex or less familiar contexts.
• Can draw appropriate conclusions that go beyond the data.
Intermediate
• Can use data from less familiar contexts to identify trends and make predictions.
Basic
• Can draw on procedural and basic content knowledge for simple experimental design.
• Can collect and interpret data to answer questions that require only simple or daily content knowledge.
• Can distinguish between a non-scientific and a scientific question.
Low
• Can follow simple instructions to answer a question.
• Can conduct a scientific procedure.

Table 3. Construct map for the IE dimension
High
• Can evaluate the supporting data for a hypothesis.
• Can construct and justify a conclusion using science concepts.
• Can discriminate between relevant and irrelevant information.
• Can derive outside knowledge to construct an explanation.
Intermediate
• Can interpret and manipulate a moderately complex data set.
• Can express data in several formats.
• Can justify appropriate conclusions.
• Can identify sources and effects of uncertainty in scientific data.
Basic
• Can identify, interpret, and transform data.
• Can identify evidence to support a scientific claim.
Low
• Can identify simple patterns in data to support a claim or conclusion.
Results of validity evidence
Three methods were employed to validate the assessment framework. Firstly, researchers evaluated the validity evidence based on the test content, drawing on expert judgments of the relationship between parts of the test items and the construct maps of scientific competencies [23]. Five experts rated the domain representativeness as follows: the item content validity indices (I-CVI) ranged between 0.50 and 1.00, and the scale content validity index (S-CVI) was 0.90. Researchers tried out the assessment framework, followed by interviewing the students about their understanding of the contents and the relevancy of the tests in the assessment framework.
Secondly, we examined the validity evidence based on response processes through protocol interpretation from the think-aloud process; it was found that some students understood the items exactly as intended in the construct map while others did not. Therefore, the researchers improved the questions and the construct map to be clearer. Besides, researchers also utilized the students' feedback to improve the tests and scoring before conducting the assessment in the actual classroom context.
Finally, validation was conducted on the internal structure of the assessment framework, in terms of the accuracy of its construct, by comparing the fit of two models: a unidimensional model, in which all items compose one dimension, and a multidimensional model, in which the items are separated into the respective three dimensions. The results revealed that the multidimensional model fit significantly better than the unidimensional model according to the likelihood-ratio chi-squared statistic (G² = 37.26, df = 5) [25]; the Akaike Information Criterion (AIC) [26] and Bayesian Information Criterion (BIC) [27] were also lower for the multidimensional construct, as elucidated in Table 4. Therefore, it can be concluded that the assessment framework is suitable for multidimensional measurement, because lower AIC and BIC values reflect better agreement between the model and the test results [28]. Moreover, we examined the internal structure by analyzing the structural validity of the construct map and the criterion zones of the Wright map to determine the transition points, as shown in Figure 2. Each transition point was computed from the mean of the item thresholds in each level and dimension, as shown in Table 5. Each dimension has four competency levels with a scoring scale from 1 to 4 points: low, basic, intermediate, and high. The transition points for the ES dimension from Level 1 to 2, Level 2 to 3, and Level 3 to 4 are -0.12, 0.88, and 2.09 logits, respectively; for the ED dimension, -0.41, -0.11, and 1.55 logits; and for the IE dimension, 0.28, 0.44, and 0.55 logits.
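The model comparison above can be reproduced from standard formulas; a sketch using the reported statistic (the helper functions are ours, and the deviances would be read off the ConQuest output):

```python
import math
from scipy.stats import chi2

# Likelihood-ratio test with the statistic reported above
G2, df = 37.26, 5
print(f"p = {chi2.sf(G2, df):.2e}")  # far below 0.05: prefer the 3D model

def aic(deviance, k):
    """AIC from a model deviance; k = number of free parameters."""
    return deviance + 2 * k

def bic(deviance, k, n=289):
    """BIC; n = number of test-takers in this study."""
    return deviance + k * math.log(n)
```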
The Wright map is a graphical representation that links the item difficulties and student ability estimates on a common scale as quality evidence. In other words, the Wright map comprises a distribution of item difficulties, a distribution of student ability estimates, and an indication of how well the item difficulty distribution matches the student ability estimates. Therefore, the items should match the student ability estimates for the test to be maximally informative. The Wright map results showed that each dimension of the assessment framework can be used as direct evidence of the test content, because the difficulty of each item (item threshold) lies to the right of, but does not yet cover, the competency range of the students on the left side of the Wright map. Results from the Wright map show that items 3, 5, 6, 7, 10, 11, 13, 14, and 15 are moderately difficult; items 2, 4, 8, 9, and 12 are easy; and items 1 and 16 are difficult. Nevertheless, the latent distribution and threshold results show that item 8 has just three instead of four competency levels. It can be concluded that the assessment framework is in compliance with the definition of scientific competencies in terms of three dimensions.
Regarding the correlations between dimensions, the correlation coefficients between ES and ED, between ED and IE, and between ES and IE are 0.74, 0.76, and 0.92, respectively. This implies that most students' scientific competencies move in the same direction, with an especially high correlation between ES and IE. This may be because students' competency in describing, expressing, or justifying the relationship between scientific ideas and concepts from substance and properties of substances to make predictions also requires discriminating, interpreting, and evaluating a data set to support a hypothesis or their reasons [29]. This result suggests that the ES and IE dimensions can be collapsed into one new dimension, "explain and interpret data scientifically", while retaining the four competency levels.
Reliability evidence
Researchers used three methods to assess the reliability of the assessment framework. The first method was analyzing the reliability coefficient under the MRCML model by computing the Expected-A-Posteriori/Plausible Value (EAP/PV) reliability. The EAP/PV values of the ES, ED, and IE dimensions were 0.78, 0.58, and 0.76, respectively. This implies that only two dimensions (ES and IE) have suitable precision for use in an assessment framework, consistent with the criterion set by [30], who suggested that the reliability coefficient should be greater than 0.70; the ED dimension falls below this acceptance criterion. The separation reliability equals 0.98.
Secondly and thirdly, we examined the reliability of the assessment framework by inspecting the standard error of measurement (SEM). When the multidimensional model was separated into three related sub-dimensions, ES (θES), ED (θED), and IE (θIE), the latent parameter of each student had a different SEM. Results showed that the mean score in ES (θES) is higher than in IE (θIE) and ED (θED), in that order, and that ES (θES) and IE (θIE) have the same characteristics as those obtained from the 3D model. Figure 3 illustrates the SEM for the three separated sub-dimensions. These results suggest that the assessment framework is more appropriate for students at the intermediate level of scientific competencies than at the low or high levels, because students at the lowest proficiency level showed the highest SEM values.
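The SEM used here follows the standard IRT relation (a general fact, not specific to this study):

$$\mathrm{SEM}(\hat{\theta}) = \frac{1}{\sqrt{I(\hat{\theta})}}$$

where I(θ̂) is the test information at the ability estimate; SEM is therefore largest at ability levels far from the bulk of the item thresholds, which is why the extreme competency levels show the poorest precision.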
Quality of item fit
Based on the results of the first phase, statistical analysis of the appropriateness of each item under the MRCML model uses the multidimensional form of the Partial Credit Model in ConQuest 2.0 [12]. The criterion for suitability is that the outfit MNSQ and infit MNSQ values should be between 0.75 and 1.33 [11]. If both of these values are in this range, then the assessment framework is appropriate to the data. Results revealed that the item difficulties were appropriate, ranging from -1.24 to 1.22, and the infit MNSQ statistics, ranging from 0.81 to 1.23, were within the acceptable range [11]. Table 5 shows the details of the item fit result.
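For readers who want to check fit by hand, the infit and outfit mean squares have simple residual-based definitions; a minimal sketch (the arrays would come from the fitted model's expected scores and variances):

```python
import numpy as np

def item_fit_mnsq(x, e, v):
    """Infit/outfit mean squares for one item.
    x: observed scores per person, e: model-expected scores,
    v: model variances of those scores (equal-length arrays)."""
    z2 = (x - e) ** 2 / v                      # squared standardized residuals
    outfit = z2.mean()                         # unweighted mean square
    infit = np.sum((x - e) ** 2) / np.sum(v)   # information-weighted mean square
    return infit, outfit

# Values inside 0.75-1.33 satisfy the acceptance band used above.
```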
CONCLUSION
The main objective of the Multidimensional Scientific Competencies Framework was to evaluate the seventh-grade students' scientific competencies in the northeastern region of Thailand. Results showed that scientific competency is better measured using a multidimensional model rather than a unidimensional model. Besides, the results imply that the ES and IE dimensions can be combined into one new dimension, "explain and interpret data scientifically", while retaining the four competency levels. Moreover, the ED dimension was found to have low precision because the number of items was inadequate for estimating students' scientific competencies; consequently, more items covering both easy and hard levels should be added to this dimension. In addition, considering the transition points of the IE dimension, the values (0.28, 0.44, and 0.55) were very close to one another; as a result, this framework may not classify students into each IE level clearly. In future work, the researchers should revise the scoring guide, especially for levels 2 and 3. The implication is that the assessment tool can provide rich information about students at the intermediate level, as reflected in the SEM values for estimating latent ability in the ES, IE, and ED dimensions, which were lowest in that range of logits.
"year": 2020,
"sha1": "8f1b32ec19fd6115d7acf99e7c32b233a1767a9e",
"oa_license": "CCBYNC",
"oa_url": "http://ijere.iaescore.com/index.php/IJERE/article/download/20542/13062",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "634ffaf31efb29b37aae04bcc448a98903810d94",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
P-Odd Asymmetry in the Deuteron Disintegration by Circularly Polarized Photons
We calculate P-odd difference of the total cross-sections of the deuteron disintegration by left and right polarized photons. The relative magnitude of this difference in the threshold region is about 10^{-7}. Its experimental measurement would give valuable information on the weak nucleon-nucleon interaction at short distances.
To the memory of Volodya Telitsin, our friend and enthusiastic advisor on nuclear physics and physics in general
Introduction
The deuteron, being the simplest nuclear system, in many cases allows for a relatively reliable theoretical analysis. That is why the problem of parity nonconservation (PNC) in the deuteron for a long time attracts attention of both experimentalists and theorists. Unfortunately, PNC effects in the deuteron are tiny, so that up to now only upper limits on them have been obtained experimentally [1][2][3].
At present, however, new prospects have arisen here due to the creation of intense sources of polarized photons, electrons, and neutrons. On the other hand, now the experimental investigations of PNC effects in the deuteron have become of great interest. One may hope that they will resolve a contradiction which exists at present in the problem of P-odd nuclear forces. The point is that recently the nuclear anapole moment (AM) of ¹³³Cs was discovered, and measured with good accuracy in an atomic experiment [4]. The result of this experiment is in a reasonable quantitative agreement with the theoretical predictions, starting with [5,6], if the so-called "best values" [7] are chosen for the parameters of P-odd nuclear forces. However, the results of some nuclear experiments indicate that the P-odd πNN constant ḡ is much smaller than its "best value" (see, e.g., [8,9]). An obvious possibility to reconcile the results of the atomic experiment and the nuclear ones is to assume that the magnitude of another, short-distance contribution to P-odd nuclear forces is essentially larger than its "best value".
We discuss in the present paper the P-odd asymmetry in the deuteron disintegration by circularly polarized γ-quanta. As will be demonstrated below, the component of the weak interaction which conserves the total spin, does not contribute to the effect. In particular, the weak π-meson exchange, which is (relatively) long-distance one, is not operative here. The discussed asymmetry is due to the short-distance P-odd interaction. Unfortunately, there is a serious problem with the theoretical description of the short-range P-odd effects. They are commonly described by means of ρ-and ω-exchanges. However, the range of these potentials, 1/m ρ,ω ∼ 0.3 fm, is much smaller than the proton mean-square radius, < r 2 p > 1/2 ∼ 0.8 fm. Therefore, all calculations of P-odd effects based on weak ρ-, ω-potentials (as well as using ρ-, ω-potentials for the description of strong interactions) have no sound theoretical grounds. Our treatment of the short-distance weak interactions in the deuteron differs from that adopted in previous papers, but it is no exception in this respect. Still, there is an observation indicating that with our procedure the magnitude of nuclear P-odd effects at least is not overestimated. The point is that this procedure was used previously in [6] to derive the constant of the effective P-odd contact interaction of the valence nucleon with the nuclear core. Thus calculated value of the anapole moment of 133 Cs in [6] is close to (in fact, even somewhat lower than) its experimental value obtained recently in [4].
Theoretical studies of PNC effects in the deuteron were started in [10][11][12][13][14]. Papers [10,11] concentrated on the P-odd asymmetry in d(γ, n)p reaction caused by linearly polarized photons. For the photon energies of several MeV this asymmetry is very small as compared to that due to circularly polarized γ-quanta, the latter being of interest to us. A phenomenological treatment of PNC effects in the deuteron was adopted in [12]. Later it was supplemented in [13] with quantitative estimates made in the dispersion approach and in the pion exchange model. PNC effects in ed-scattering were considered in [15], but for a very special kinematics only.
The general problem of parity nonconservation in ed-scattering was investigated in [16][17][18][19], with detailed numerical estimates in [16,18]. However, in this process the effect of the nuclear parity violation is masked by the direct P-odd ed interaction due to weak neutral currents. After [12][13][14], P-odd effects in the deuteron disintegration by circularly polarized γ-quanta (and in the inverse reaction) were addressed in [16,[20][21][22][23][24]. Though our results are in a qualitative agreement with most of previous ones (see below), we believe that the present independent investigation of the important and interesting problem is worth efforts.
Wave functions, transition matrix elements, and cross-sections
The deuteron ground state is ³S₁ (a small ³D₁ admixture to it will be neglected throughout the paper). In the zero-range approximation (ZRA) its wave function is
$$\psi_d(r) = \sqrt{\frac{\kappa}{2\pi}}\,\frac{e^{-\kappa r}}{r}. \eqno(1)$$
Here $\kappa = \sqrt{m_p \varepsilon}$, where m_p is the proton mass, and ε = 2.23 MeV is the deuteron binding energy.
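A one-line check that (1) is unit-normalized: $\int |\psi_d|^2\, d^3r = 4\pi\,\frac{\kappa}{2\pi}\int_0^\infty e^{-2\kappa r}\,dr = 2\kappa\cdot\frac{1}{2\kappa} = 1.$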
To the same approximation, the ³S₁ and ¹S₀ wave functions of the continuous spectrum, ψ_{St} and ψ_{Ss}, respectively, are (see, for instance, [19])
$$\psi_{St,Ss} = \frac{\sin pr}{pr} - \frac{\alpha_{t,s}}{1+ip\alpha_{t,s}}\,\frac{e^{ipr}}{r}. \eqno(2)$$
Here α_t = 5.42 fm and α_s = −23.7 fm are the triplet and singlet scattering lengths, respectively. At last, in the spirit of the zero-range approximation, for the P state of the continuum we will use the free wave function
$$\psi_P = \frac{1}{p}\,\frac{d}{dr}\,\frac{\sin pr}{pr}. \eqno(3)$$
Near the threshold the photodisintegration cross-section is dominated by the M1 transition. Since the radial wave function of the deuteron is orthogonal to that of the ³S₁ state of the continuous spectrum, the M1 transition goes into the ¹S₀ state. The expressions for the M1 matrix element and total cross-section, as calculated in the same zero-range approximation, are well-known (see, for instance, [25]). Here p is the relative momentum of the final nucleons, α = 1/137, and μ_p and μ_n are the proton and neutron magnetic moments, respectively. For higher energies the cross-section is dominated by E1 transitions into the ³P₀,₁,₂ states, with the corresponding matrix element and total cross-section given in (5) and (6), respectively. The origin of the factors (1 − κr_t)^{−1/2} and (1 − κr_t)^{−1} in (5) and (6), respectively, is as follows. Large distances dominate in the matrix element ⟨³P|r|³S⟩. In this asymptotic region the naïve ZRA expression (1) for the deuteron wave function must be augmented by a correction factor (1 − κr_t)^{−1/2} (see [25,26]), which obviously results in the factor (1 − κr_t)^{−1} in σ_{E1}. Here r_t = 1.76(1) fm is the effective radius of the triplet state.
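For orientation, the E1 cross-section in this approximation is the familiar Bethe-Peierls result; a sketch with the effective-range factor just discussed made explicit (our reconstruction of the standard textbook form, not necessarily the paper's eq. (6) verbatim):

$$\sigma_{E1} \simeq \frac{8\pi}{3}\,\alpha\,\frac{\kappa p^3}{(\kappa^2+p^2)^3}\,\frac{1}{1-\kappa r_t},$$

with photon energy ω = (κ² + p²)/m_p; numerically this peaks at roughly 2.5 mb near E_γ ≈ 2ε, consistent with the well-known photodisintegration maximum.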
On the other hand, wave functions (1) and (2) have incorrect behaviour for r → 0. This is of no importance for the above formulae, since short distances are inessential for σ_{E1}, and even for σ_{M1}. However, the situation is different for matrix elements of the P-odd weak interaction, for which short distances are crucial. Here we will use model wave functions with more realistic properties. For the deuteron we choose the model function (7) of [27]. This wave function has the correct asymptotics at r → ∞ (see above), tends to a constant at r → 0, and at least is continuous everywhere. The numerical value r₁ = 1.60 fm is chosen in such a way that the wave function is normalized correctly. In fact, it is quite natural that this value is close to that of the triplet effective radius r_t. Let us note that the unphysical cusp of ψ_d at r = r₁ is harmless for our problem since this wave function will enter integrands only (see formulae below).
As to the ¹S₀ wave of the continuous spectrum, the potential for the singlet state is rather shallow, and the effective radius r_s = 2.73(3) fm is larger than the triplet one. Thus, the variation of the wave function in the internal region is even more important here. We choose the model wave function for this state as
$$\psi_{Ss} = A\,\frac{\sin\sqrt{p^2+p_0^2}\,r}{\sqrt{p^2+p_0^2}\,r}, \quad r < r_s; \qquad \psi_{Ss} = \frac{\sin pr}{pr} - a_s\,\frac{e^{ipr}}{r}, \quad r > r_s. \eqno(8)$$
Requiring the continuity of the wave function and its first derivative at r = r_s, we obtain
$$p_0 r_s = 1.5; \qquad A(p) = \frac{\sqrt{p^2+p_0^2}\,r_s}{\sin\sqrt{p^2+p_0^2}\,r_s}\;\frac{\sin pr_s - p\alpha_s \cos pr_s}{pr_s\,(1+ip\alpha_s)}.$$
Short-distance P-odd interaction
We are interested first of all in the close vicinity of the threshold, where the deuteron disintegration is dominated by the regular M1 transition ³S₁ → ¹S₀. In the admixed E1 transition the total spin is conserved. Therefore, here we need the P-odd weak interaction which does not conserve the total spin (and conserves the isotopic spin). This interaction admixes the ¹P₁ state to the initial ³S₁ one, and ³P₀ to the final ¹S₀ state. This interaction is of a short-range nature, and we will use its common description by a potential corresponding to ρ- and ω-exchange.
The numerical values of the parameters entering this expression are presented in Table 1. The values of the constants corresponding to strong vertices, g_{ρ,ω}, χ_{ρ,ω}, are reasonably reliable. For the P-odd constants, h_ρ^{0,1,2}, h_ω^{0,1}, we use the "best values" [7]. The last term in (9), which conserves the total spin I = (σ_p + σ_n)/2, will be considered below, together with the P-odd pion exchange. Here we treat the other terms in (9) which do not conserve I (but conserve the isotopic spin).
Let us start with the correction to the deuteron wave function ψ_d, using common stationary perturbation theory. In the ZRA the admixed P states of the continuous spectrum are free. Moreover, we can choose plane waves as the intermediate states, since the perturbation (9) by itself selects the P state from the plane wave. The correction thus obtained can be written as follows, the last transformation being possible due to the short-range nature of W(r) (κ ≪ m_{ρ,ω}).
Here χ_t is the triplet spin wave function of the deuteron (previously we omitted it for brevity). Simple algebra transforms this expression into (11), where Σ = σ_p − σ_n. Though this P-odd admixture of the ¹P₁ state to ³S₁ is expressed conveniently through the ZRA wave function, the constant λ_t introduced in (11) depends in fact on the true deuteron wave function ψ_d. In the same way we calculate the P-odd ³P₀ admixture (12) to the wave function of the ¹S₀ state of the continuous spectrum. Here χ_s is the singlet spin wave function, and the constant λ_s is expressed via the wave function ψ_{Ss} of the ¹S₀ state (see (8)). We will need also the P-odd ¹S₀ admixture to the ³P₀ state of the continuous spectrum. It can be easily found from the requirement that the perturbed ¹S₀ and ³P₀ wave functions should remain orthogonal. We obtain
$$\delta\psi_P = -2i\lambda_s\, a_s\, p\,\frac{e^{ipr}}{r}\,\chi_s. \eqno(13)$$
A general formula comprising all three cases, (11), (12), (13), was given previously in [12]. Straightforward calculations with wave functions (11) and (12) give the cross-section difference (14). An analogous formula for the deuteron disintegration by longitudinally polarized electrons was derived in [19]. Wave functions (11) and (13) induce a P-odd asymmetry in one more way. Here the regular amplitude is E1: ³S₁ → ³P₀,₁,₂, and the admixed amplitudes are M1: ¹P₁ → ³P₀,₁,₂ and ³S₁ → ¹S₀. This contribution to the P-odd cross-section difference is given in (15). While the above derivation of expressions (14), (15) is a relatively simple procedure, the problem of calculating the constants λ_s and λ_t is quite different. In the ZRA the calculation of these constants is rather straightforward and results in (16). However, these naïve ZRA numbers for the effective constants λ_{s,t} certainly strongly overestimate the true values of these constants. The first reason is that the ZRA wave functions of the S-states are singular at r → 0, while their correct wave functions are finite at the origin. Therefore, here we will use, instead of the naïve ZRA wave functions (1) and (2), the model functions (7) and (8), which are finite at the origin. For the same reason, namely the short-range nature of the vector exchanges, one more suppression factor is essential here. We mean the Jastrow repulsion between nucleons at small distances. Following [29][30][31], we will take it into account by a factor φ²(r) in the weak matrix elements, with φ(r) given in (17). With these modifications, we arrive at more realistic (and much smaller!) estimates (18) for the constants λ_s and λ_t. The P-odd asymmetry of the deuteron photodisintegration due to the cross-section differences (14) and (15), as calculated with the constants (18), is plotted in Figs. 1a,b, respectively. We have chosen different vertical scales in Figs. 1a,b to be able to reproduce details of the effects differing by two orders of magnitude. Obviously, in the whole range of energies considered, the contribution corresponding to the regular E1 transition is very small, and can be safely neglected.
Spin-conserving P-odd interaction
It was mentioned already that the M1 transition from the ground state proceeds only to the $^1S_0$ state of the continuous spectrum. Then it can easily be seen that the P-odd exchange which conserves the total spin $\mathbf{I}$ operates in our problem as follows. In the regular E1 transition from the ground state $^3S_1$ into $^3P_{0,1,2}$, it admixes the $^3P_1$ state of the continuous spectrum to the initial state, and the $^3S_1$ state to the final $^3P_1$ one. To contribute to the admixed P-odd M1 amplitude, this last admixed $^3S_1$ state should be the ground state of the deuteron (recall the mentioned orthogonality of the $^3S_1$ radial wave functions of different energies). With account for this P-odd mixing, the mentioned $J = 1$ states acquire an admixture with a purely imaginary weak mixing coefficient $i\beta$. Then straightforward standard calculations with $3j$ and $6j$ symbols demonstrate that, up to a factor common to all three transitions, the sums of the reduced E1 and M1 amplitudes for each of the electromagnetic transitions $^3S_1 \to {}^3P_0,\, {}^3P_1,\, {}^3P_2$ take a simple form, into which enter $\lambda = \pm$, the sign of the photon circular polarization, and $\rho$, the ratio of the reduced E1 and M1 amplitudes. The total probability of the $\gamma d \to np$ reaction, proportional to the sum of the squared amplitudes over these transitions, is then obviously independent of the circular polarization $\lambda$. Thus, the P-odd weak interaction which conserves the total spin does not contribute at all to the discussed asymmetry in the deuteron photodisintegration. This refers both to the weak pion exchange and to the corresponding part of the short-distance contribution (the last line in formula (9)).
Conclusions
Our final result for the total asymmetry $A$ practically coincides with the curve plotted in Fig. 1a over the whole energy region discussed (the second of the short-distance contributions, plotted in Fig. 1b, is negligibly small compared to the first one at these energies). The maximum value of the asymmetry, at the threshold, is about $10^{-7}$.
Unfortunately, the magnitude of the short-distance effect cannot be accurately predicted. Our result for it is higher, by a factor of 2 to 5, than those of previous works, which also differ considerably among themselves. Different approaches, and the lack of calculational details in those papers, preclude elucidation of the exact origin of this disagreement. However, at least for our discrepancy with [16] there is a plausible explanation. The cut-off adopted in [16] for the description of the short-range nucleon-nucleon repulsion (the short-range correlation factor therein turns to zero for $r < r_c$, with $r_c = 0.43$ fm or 0.56 fm) is much steeper than the cut-off adopted by us (see (17)). The argument in favor of our approach was already presented in the Introduction: it is the good agreement between the experimental value of the anapole moment of $^{133}$Cs and the theoretical prediction for it obtained within the approach used here.
Our last remark refers to the relation between the P-odd asymmetry $A$ and the degree of circular polarization $P$ of the γ-quanta in the inverse reaction $np \to d\gamma$:
$$P = \frac{\sigma_+ - \sigma_-}{\sigma_+ + \sigma_-}\,.$$
In this expression $\sigma_\lambda$ is the production cross-section for a photon with circular polarization $\lambda\,(=\pm)$. By virtue of the principle of detailed balance (which is valid here, since the interactions considered are T-even), $A = P$. | 2014-10-01T00:00:00.000Z | 2000-10-10T00:00:00.000 | {
"year": 2000,
"sha1": "f1ec4e0fcb9d3cad5c4da07ce9471467810d9741",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "61bb1291aeedc6332e512e851adae5c110637547",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
121391257 | pes2o/s2orc | v3-fos-license | Comparisons of the nonlinear and the quasilinear model for the bump-on-tail instability with phase decorrelation
The dynamics of discrete global modes in a toroidal plasma interacting with an energetic particle distribution is studied, in particular to determine when the nonlinear and quasilinear descriptions of the system are macroscopically similar. The dynamics can be described with a nonlinear bump-on-tail model in a two-dimensional phase space of particles. A Monte Carlo framework is developed for this model with an included decorrelation of the wave-particle phase, which is used to model extrinsic stochastisation of the wave-particle interactions. From this description, a quasilinear version of the model is also developed, described by a diffusive process in energy space due to the added phase decorrelation. Owing to the reduced dimensionality of phase space, the quasilinear description is typically less computationally demanding than the nonlinear description. The purpose of these studies is to find conditions under which the quasilinear model sufficiently describes the same wave-plasma interaction phenomena as the nonlinear model. Via numerical and theoretical parameter studies, regimes where the two models overlap macroscopically are found. These regimes exist above a given threshold of the strength of the decorrelation, where coherent phase space structures are destroyed on time scales shorter than the characteristic time scales of nonlinear particle motion in phase space close to the wave-particle resonance. Specifically for the quasilinear model, a theoretical value of the time scale of quasilinear flattening is derived and numerically verified.
Introduction
Wave-particle interaction plays an important role in plasma physics, both for heating with waves and for transport caused by microinstabilities. Instabilities appear when the distribution function in energy increases along the characteristic describing the wave-particle interactions. The dynamics is studied either with a quasilinear approximation or with a fully nonlinear model, and comparisons are made to evaluate the applicability of the quasilinear approximation. The nonlinear bump-on-tail model for the modeling of discrete global modes was developed by Berk et al [1,2], and has been extensively studied [1-6] (to name a few references). A parameter quantifying extrinsic stochastisation of the wave-particle interactions has been introduced to this model. Above a certain threshold of this stochastisation parameter, the kinetic equation of the wave-particle interaction can be replaced by a diffusion equation in particle energy, independent of the wave-particle phase. Such a description, with a diffusion coefficient similar to the standard quasilinear diffusion of weakly turbulent plasmas [7,8], is identified as the quasilinear limit of the nonlinear wave-particle interaction model.
Model equations

2.1. Nonlinear Monte Carlo model
Based on an action-angle description of the guiding center Hamiltonian in an axisymmetric toroidal system with slowly varying electromagnetic fields [9], it was shown by Berk, Breizman and collaborators [1,2] that the Alfvén eigenmode-particle system can be described, in a region of phase space locally around the wave-particle resonance, using a one-dimensional bump-on-tail model. Following these derivations, using proper variable substitutions, and adding an ad hoc collision operator acting on the energetic particles, the wave-particle system can be reduced to the set of equations (1)-(2), where $f(\phi, u, \tau)$ is the near-resonance energetic ("bump") distribution, $(\phi, u)$ is the particle position-energy phase space, $\tau$ is a parametrization of the time, $A(\tau)$ is the complex amplitude of the eigenmode ($\arg(A) - \phi$ is the relative wave-particle phase), $df/d\tau|_{\rm coll}$ is the collision operator acting on the bump distribution, and $\gamma_d$ is an explicit wave damping rate, e.g. due to interactions with a thermal background distribution of particles. Assuming that the amplitude evolves much more slowly than the phase space distribution of particles, and taking $df/d\tau|_{\rm coll} = 0$, the system approximately reduces to a pendulum equation, with particles deeply trapped by the wave field oscillating at the bounce frequency $\omega_B = \sqrt{|A|}$ and a trapped region within $|u| < \sqrt{2(|A| - {\rm Im}[Ae^{i\phi}])}$.
In the absence of sources and sinks, imposed by the conditions $df/d\tau|_{\rm coll} = 0$ and $\gamma_d = 0$, the total energy, expressed as in eq. (3), is a conserved quantity of the wave-particle system. For this case, given an initial low-amplitude perturbation of the eigenmode and a positive derivative of the particle distribution with respect to energy around the wave-particle resonance, the amplitude perturbation grows exponentially in time after some initial mixing of the phase space distribution of particles [10]. The linear growth rate of the amplitude is denoted $\gamma_L$. In the nonlinear simulations presented in this paper, a sinusoidal perturbation in $\phi$ is applied to the initial distribution function, since an initially flat $\phi$ distribution is in unstable equilibrium, for which the dynamics depend critically on statistical noise in the phase space distribution. An applied perturbation makes the results less dependent on statistical fluctuations.

The quasilinear model is based on the assumption that wave-particle interactions are extrinsically decorrelated, such that coherent interactions occur only on linear time scales. In order to go gradually from the fully nonlinear description to the quasilinear one, a collision operator of the form (4) is introduced, where $D_\phi \geq 0$ is a constant quantifying the strength of phase decorrelation of the particles. This specific form of the collision operator linearly preserves the particle energy distribution, which is typically not the case for a physical stochastisation process. However, its energy conservation property greatly facilitates the comparison with quasilinear theory, as explained in Sec. 2.2.

Using the Kolmogorov forward equation, the system described by eqs. (1), (2) and (4) can be expressed as the system of stochastic differential equations (SDEs) (5a)-(5b), where the bump distribution $f(\phi, u, \tau)$ is described by a set of discrete entities (markers) with a phase $\phi_k$, energy $u_k$ and weight $w_k$, and the $W_{\tau,k}$ are individual independent Wiener processes in $\tau$. A common weight factor applied to all particles, such that $w_k \to \alpha w_k$, $\alpha > 0$, can be transformed away by a set of substitutions.

A discrete time approximation is used for the numerical simulations. Assuming that the wave amplitude evolves on time scales much longer than the individual $\phi_k$ and $u_k$, $A$ can be treated as an independent variable in eq. (5b), splitting the complete system into individual two-dimensional systems of SDEs (one for each particle) and two ODEs for the wave amplitude (the real and the imaginary component). Using an Itō-Taylor numerical scheme with strong convergence of order 1.5 [11], the discrete stepping of particles in phase space involves $\Delta W_k$ and $\Delta Z_k$, Itō integrals of the Wiener processes. These can be sampled from $\xi_{i,k}$, independent normally distributed random variables of unit variance and zero mean. For the finite time stepping of the wave amplitude $A$, the standard fourth-order Runge-Kutta method is used.
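As a concrete illustration of the last step, below is a minimal Python sketch of sampling the pair $(\Delta W_k, \Delta Z_k)$, assuming the standard Kloeden-Platen recipe for order-1.5 strong Itō-Taylor schemes [11]; the function name and the moment check are ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ito_integrals(n_markers: int, dtau: float):
    """Sample the correlated Ito integrals over one step of length dtau:
    dW = int dW_s and dZ = int int dW_s ds, one pair per marker."""
    xi1 = rng.standard_normal(n_markers)
    xi2 = rng.standard_normal(n_markers)
    dW = xi1 * np.sqrt(dtau)
    dZ = 0.5 * dtau**1.5 * (xi1 + xi2 / np.sqrt(3.0))
    return dW, dZ

# Sanity check of the required moments:
# E[dW^2] = dtau, E[dZ^2] = dtau^3 / 3, E[dW dZ] = dtau^2 / 2.
dW, dZ = sample_ito_integrals(1_000_000, dtau=0.01)
print(np.mean(dW**2), np.mean(dZ**2), np.mean(dW * dZ))
```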
2.2. Quasilinear Monte Carlo model
The Brownian motion of the phase in eq. (5a) induces a decorrelation of the wave-particle interaction in eq. (5b). When this phase decorrelation is strong, the evolution of the energy may be approximated by a random walk with the quasilinear diffusion coefficient defined in eq. (7). This diffusion coefficient is similar to the standard quasilinear diffusion of a weakly turbulent plasma [7,8]. Since the quasilinear wave-particle interaction is independent of $\phi$ and $\arg(A)$, the dimensionality of particle phase space is reduced from two to one, and the amplitude is a real quantity in the quasilinear description. The corresponding kinetic equation of the quasilinear description is given by eq. (8).

The specific form of the added phase decorrelation preserves the total energy of the system in the nonlinear model, as given by eq. (3). Assuming this holds also for the purely quasilinear model, it is straightforward to derive the quasilinear amplitude equation (9) from eqs. (3) and (8). On time scales much shorter than the time scale for quasilinear flattening of the distribution (see eq. (20)), the distribution function evolves slowly and can be approximated as constant in time. Inserting this approximation into eq. (9) yields an effective growth rate of the wave amplitude, eq. (10), which includes effects of a finite width of the energy distribution that were not considered in the derivation of $\gamma_L$ in the nonlinear description.
The system of equations given by eqs. (8) and (9) can be written as a system of SDEs, eq. (11), and the order-1.5 strong Itō-Taylor numerical scheme [11] yields the stepping algorithm in $u$ of eq. (12), where the $\xi_{i,k}$ are independent, normally distributed random variables of unit variance.
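For intuition, here is a minimal Python sketch of a marker step for such an energy-diffusion process. Two loud caveats: it uses a plain Euler-Maruyama step rather than the paper's order-1.5 Itō-Taylor scheme, and since eq. (7) is not reproduced in this text, the Lorentzian form of the diffusion coefficient below (width $D_\phi$, strength $|A|^2$) is our assumption, motivated only by the "Lorentzian diffusion" naming later in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def D_ql(u, A_abs, D_phi):
    # Assumed Lorentzian quasilinear diffusion coefficient (stand-in for eq. (7)).
    return A_abs**2 * D_phi / (u**2 + D_phi**2)

def euler_maruyama_step(u, A_abs, D_phi, dtau, h=1e-6):
    """One Euler-Maruyama step for the Ito SDE du = D'(u) dtau + sqrt(2 D(u)) dW,
    which reproduces the Fokker-Planck equation dF/dtau = d/du (D dF/du)."""
    dD_du = (D_ql(u + h, A_abs, D_phi) - D_ql(u - h, A_abs, D_phi)) / (2 * h)
    dW = rng.standard_normal(u.shape) * np.sqrt(dtau)
    return u + dD_du * dtau + np.sqrt(2.0 * D_ql(u, A_abs, D_phi)) * dW

u = rng.uniform(-1.0, 1.0, size=100_000)
for _ in range(100):
    u = euler_maruyama_step(u, A_abs=0.05, D_phi=0.1, dtau=0.01)
```

Note the Itō drift correction $D'(u)$: it is required for the marker ensemble to satisfy the divergence-form diffusion equation rather than $\partial_\tau F = D\,\partial_u^2 F$.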
As is apparent from eq. (10), the growth rate of the amplitude in the quasilinear model theoretically coincides with the growth rate in the nonlinear model in the limit $D_\phi \to 0$. However, as $D_\phi$ approaches zero, the maximum $\Delta\tau$ allowed for convergence of the numerical algorithm is reduced. This limitation for low $D_\phi$ is not present in the nonlinear model. The first-order term in eq. (12) differs from the half-order term by at most a factor $\sim \Lambda_k D_\phi \sqrt{\Delta\tau} \sim (A/D_\phi^2) \times D_\phi \Delta\tau$, which must be $\ll 1$ for good convergence.
Analytic solutions to the quasilinear model
Solving the system of equations of the quasilinear model, eqs. (8) and (9), is possible in principle by first separating the spatial and temporal parts of the kinetic equation. The eigenfunction solutions of the spatial part are integrals over parabolic cylinder functions. The non-trivial eigenfunctions diverge as $\chi \to +\infty$ or $\chi \to -\infty$, which makes them impractical to use as spatial basis functions for the distribution function. When a general set of eigenfunction solutions is inserted into the quasilinear amplitude equation, eq. (9), an infinite-dimensional system of equations results, which might also be impractical. An alternative approach is to use an approximate form of the diffusion coefficient that is valid on limited time scales. A model with a parabolic form of the diffusion coefficient, eq. (13), where $u_w \sim D_\phi$ is the effective width of the diffusion in $u$, turns out to be practically soluble from an analytical point of view. From now on, the diffusion coefficient in eq. (13) is referred to as parabolic diffusion, whereas the coefficient in eq. (7) is referred to as Lorentzian diffusion.
Inserting the parabolic form of the diffusion coefficient into eq. (8) yields eq. (14), where $x \equiv u/u_w$. It should be noticed that the eigenfunctions of the right-hand side are the Legendre polynomials, which form an orthogonal set on $|x| \leq 1$. Decomposing the distribution function in the region $|u| \leq u_w$ into Legendre polynomials according to eq. (15), inserting into eqs. (14) and (9), and using the orthogonality property of the Legendre polynomials, it can be shown that each $Q_n$ and the amplitude satisfy the ODEs (16) and (17). By solving the closed system of equations given by eq. (16) for $n = 1$ and eq. (17), and solving eq. (16) for general $n$ using the obtained $A(\tau)$, one finds the complete analytical solutions (18), where $\eta \equiv A^2(0)/(D_\phi u_w^2)$ and $\psi \equiv 4Q_1(0)/(3D_\phi u_w)$.

One limitation of the parabolic diffusion model is that it only acts on the distribution function within the region $|u| < u_w$, whereas the Lorentzian diffusion model acts on the whole $u$ space. On time scales where regions $|u| \gtrsim u_w$ of the distribution function are affected by diffusion in the Lorentzian model, the numerical and analytical solutions are expected to diverge. One can select $u_w$ such that the initial growth rates of the wave amplitude match in the numerical and the analytical solutions. By doing this, one can obtain similar solutions of the two models on time scales $\tau \ll \tau_{QL}$, where $\tau_{QL}$ is the characteristic time scale for quasilinear flattening of the distribution function. Therefore, the analytical solutions can be used to obtain an analytical expression for $\tau_{QL}$.
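To make the decomposition concrete, here is a minimal Python sketch of projecting a profile on $|x| \leq 1$ onto Legendre polynomials, using their orthogonality $\int_{-1}^{1} P_m P_n\,dx = 2\delta_{mn}/(2n+1)$; the triangular example profile echoes the triangular initial distribution used later in the comparisons, but the code itself is ours.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_coefficients(F, n_max: int, n_quad: int = 64):
    """Coefficients Q_n = (2n+1)/2 * int_{-1}^{1} F(x) P_n(x) dx,
    so that F(x) = sum_n Q_n P_n(x) on |x| <= 1."""
    x, w = legendre.leggauss(n_quad)  # Gauss-Legendre quadrature nodes and weights
    return np.array([(2 * n + 1) / 2.0 * np.sum(w * F(x) * legendre.Legendre.basis(n)(x))
                     for n in range(n_max + 1)])

# Example: a triangular bump profile F(x) = 1 + x. Since 1 + x = P_0(x) + P_1(x),
# only Q_0 and Q_1 are nonzero.
Q = legendre_coefficients(lambda x: 1.0 + x, n_max=4)
print(np.round(Q, 10))  # -> [1. 1. 0. 0. 0.]
```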
The time scale of quasilinear flattening can be characterized as the time scale at which $\partial F/\partial u|_{u=0}$ is significantly reduced by quasilinear diffusion. Only the $P_n$ with odd $n$ contribute to the derivative of the distribution function at the resonance. The decay rate of $Q_{2n+1}$ is of the order $2n^2(\eta + \psi)$ according to eq. (18). Hence, the term with the slowest decay that contributes to the derivative is $Q_1$, which is then expected to be the dominant contribution to $\partial F/\partial u$ at $(u, \tau) = (0, \tau_{QL})$. The analytical $\tau_{QL}$ can be defined as the time at which $Q_1(\tau)/Q_1(0)$ essentially differs from unity. Setting this ratio e.g. to $1/e$ results in eq. (19). Assuming that $dF_0/du \approx dF_0/du|_{u=0}$ on energy scales much larger than $D_\phi$, and assuming $\eta \ll \psi$, one obtains eq. (20).

[Figure 1 caption fragment: The distribution function around the resonance at three chosen times. The distribution of the NL model is integrated over $\phi$.]
Quasilinear flattening
Comparisons between the evolution of the wave amplitude and of the energy distribution functions for simulations of the nonlinear model (NL), the quasilinear model (QL) and the analytical model are presented in Fig. 1. An initial triangular bump distribution in energy is used, defined as $F_0(u) = F_0(0)(1 + u/\bar{u})$ for $|u| \leq \bar{u}$ and $F_0(u) = 0$ for $|u| > \bar{u}$, chosen to minimize possible effects from higher-order derivatives of the energy distribution around the resonance. Using eq. (20) it was found that $\gamma_L \tau_{QL} = 6.98$ for the specific case in Fig. 1. Unlike in the quasilinear model, the very first stage of the amplitude evolution in the nonlinear model is not exponential. Rather, there is an initial phase-mixing stage, with a faster growth of the amplitude due to the added sinusoidal perturbation in $\phi$-space of the initial distribution function, as discussed in Sec. 2.1. To resolve this discrepancy, the time is shifted for the nonlinear model such that it matches the wave amplitude of the numerical quasilinear model at $\tau = \tau_{QL}$. The distribution functions of the three models can then be compared at given times, which is done in Figs. 1c-1e.

The saturation level of the analytical model is much lower than that of the nonlinear and the quasilinear model. This is due to the fact that the wave can only exhaust energy from a localized region $|u| < u_w$ around the resonance in the parabolic diffusion model. Although the differences between the analytical and numerical solutions are large during the saturation phase, they approximately agree for times up to the analytical time scale of quasilinear flattening, $\tau_{QL}$ (aside from the initial phase-mixing stage of the nonlinear solution), as can be seen in Fig. 1a. For $\tau > \tau_{QL}$, the exponential evolution of the wave amplitude gradually ceases for both numerical solutions. This conclusion is consistent with the results presented in Figs. 1c-1e. For $\tau = 6/\gamma_L < \tau_{QL}$, the energy distribution deviates from the initial distribution by a few percent at most. For $\tau \geq \tau_{QL}$, deviations from the initial distribution start to become significant around the resonance, which here corresponds to the process of quasilinear flattening.

[Figure 2b caption fragment: The relative difference of $\Delta$ between nonlinear simulations and corresponding quasilinear simulations as a function of $D_\phi/\sqrt{|A|}$ at the end of the interval. A triangular initial energy distribution is used, with a full width of $2\bar{u}$, and $\gamma_d = 0$.]
Comparisons between the nonlinear and the quasilinear model
In order to determine in which parameter regimes the numerical nonlinear and the numerical quasilinear model agree macroscopically, one has to find a quantity that primarily depends on the nonlinear dynamics of the system and compare this quantity for a wide set of nonlinear and quasilinear simulations. One nonlinear process is the saturation of the wave amplitude, which can be characterized by a saturation time scale and a value of the saturated amplitude. The latter is, however, trivial to determine in the presence of phase decorrelation. With the wave damping turned off ($\gamma_d = 0$), the saturated amplitude corresponds to a state where the energy difference between the initial particle distribution and a final state with a symmetric energy distribution around the resonance is absorbed by the wave. In the presence of wave damping, the saturated state is simply zero, since the chosen collision operator lacks sources. The quantity chosen for comparison is therefore the saturation time $\Delta$, here defined as the time between the two states with amplitudes $|A| = 0.1 A_{\rm sat}$ and $0.6 A_{\rm sat}$ (see Fig. 2a).

In Fig. 2b, the relative difference between $\Delta$ using the nonlinear and the quasilinear numerical models is shown. Effects of the width of the distribution function are studied by performing simulations with different values of $\bar{u}/\sqrt{0.6 A_{\rm sat}}$, the ratio between the initial full width of the particle distribution in energy and the width of the region of particles trapped by the wave field at the end of the $\Delta$-interval. The quantity on the x-axis of Fig. 2b, $D_\phi/\sqrt{0.6 A_{\rm sat}}$, compares the bounce time of particles deeply trapped by the wave field ($\omega_B^{-1} = |A|^{-1/2}$, referred to as the nonlinear time scale) with the time scale of phase decorrelation ($D_\phi^{-1}$) at the end of the $\Delta$-interval. As shown in Fig. 2b, the quasilinear model is able to predict the saturation time scale when the decorrelation time is shorter than or similar to the nonlinear time scale ($D_\phi/\sqrt{0.6 A_{\rm sat}} \gtrsim 1$). However, when the decorrelation time is long in comparison, e.g. when $D_\phi/\sqrt{0.6 A_{\rm sat}} \lesssim 0.1$, the measured relative difference of $\Delta$ is larger than 20%. The decrease of the relative error with increasing decorrelation strength ceases for shorter or similar decorrelation times, which might be due to numerical errors.

From Fig. 2b one may also observe a better agreement between the models for distribution functions with a wider initial distribution around the resonance relative to the width of the trapped region, i.e. larger $\bar{u}/(2\sqrt{0.6 A_{\rm sat}})$. One interpretation is that a large fraction of the complete structure of the distribution function becomes nonlinearly displaced by the wave field when the initial particle distribution is narrow, such that $\delta f/f$ becomes large on short time scales. Another interpretation could be that the discontinuity of the triangular distribution at the positive edge (at $u = \bar{u}$) becomes visible to the wave when the width of the trapped region is similar to $\bar{u}$, which might strongly affect the nonlinear behavior of the system.
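As a small worked example of the $\Delta$ metric, here is a Python sketch that extracts it from a simulated amplitude trace; the interpolation and crossing-detection details are our own choices, not specified in the paper.

```python
import numpy as np

def saturation_time_delta(tau, A_abs):
    """Time between the states where |A| first reaches 0.1*A_sat and 0.6*A_sat.
    Assumes |A| grows (roughly monotonically) up to its maximum, A_sat."""
    A_sat = np.max(A_abs)
    i_max = int(np.argmax(A_abs))
    tau_g, A_g = np.asarray(tau)[:i_max + 1], np.asarray(A_abs)[:i_max + 1]
    t10 = np.interp(0.1 * A_sat, A_g, tau_g)  # linear interpolation in the growth phase
    t60 = np.interp(0.6 * A_sat, A_g, tau_g)
    return t60 - t10

# Toy usage with an exponential-then-saturating (logistic) trace, illustrative only:
tau = np.linspace(0.0, 50.0, 2001)
A = 1e-4 * np.exp(0.3 * tau) / (1.0 + 1e-4 * np.exp(0.3 * tau))
print(saturation_time_delta(tau, A))
```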
Conclusions
In this paper, a nonlinear Monte Carlo model and a corresponding quasilinear model for describing the dynamics of discrete global modes interacting with energetic particles in a toroidal plasma in the presence of phase decorrelation have been compared. This study was performed mainly by computing macroscopic quantities in selected parameter regimes using a quasilinear approximation and a fully nonlinear description. There exist parameter regimes where the nonlinear and the quasilinear descriptions approximately coincide macroscopically. These regimes occur mainly when the time scale for the destruction of macroscopic phase space structures (due to the added phase decorrelation) is much shorter than the characteristic time scale of the phase space evolution of particles around the wave-particle resonance. However, due to the reduced dimensionality of phase space relative to the nonlinear model, there are certain phenomena, depending on nonlinear phase space structures, that the quasilinear model cannot describe. Two partly related phenomena common to both the nonlinear and the quasilinear descriptions have been studied for comparison: quasilinear flattening (i.e., local flattening of the energy distribution around the resonance due to quasilinear energy diffusion) and saturation time scales of the wave amplitude in the presence of phase decorrelation. Analytical solutions to a problem similar to the quasilinear description were derived to obtain a theoretical value of the time scale of quasilinear flattening. When compared with numerical simulations using the quasilinear and the nonlinear descriptions, both were found to approximately match the theoretical time scale, both in the deviations from an exponential growth of the wave amplitude and in the flattening of the energy distribution.
The saturation time scale was studied by comparing the time difference (referred to as $\Delta$) between the states where the wave amplitude had reached 10% and 60% of the saturated amplitude in the presence of phase decorrelation, using the nonlinear and the quasilinear numerical model. It was found that the value of $\Delta$ is similar for the two models when the phase decorrelation is faster than the nonlinear bounce motion (due to trapping by the wave field). However, when the decorrelation is slower, the differences between the quasilinear and the nonlinear model become significant. The energetic distribution of the presented numerical models has a finite width in energy, which affects the macroscopic behavior of the system. Wider initial distributions relative to the trapped particle region in the nonlinear model also give better agreement with the quasilinear model in general. This can be explained by the fact that a large fraction of the complete structure of the distribution function becomes nonlinearly displaced by the wave field when the initial particle distribution is narrow relative to the trapped particle region, such that $\delta f/f$ becomes large on short time scales for these cases. | 2019-04-19T13:04:19.177Z | 2014-11-27T00:00:00.000 | {
"year": 2014,
"sha1": "64cdaa3209b4681e0c4b783433642a32831d1b08",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/561/1/012019",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "da5e6c2c30ed728dfcaa3318c93cd5129c9b53e1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
12489899 | pes2o/s2orc | v3-fos-license | Comparative genome analysis of central nitrogen metabolism and its control by GlnR in the class Bacilli
Background
The assimilation of nitrogen in bacteria is achieved through only a few metabolic conversions between alpha-ketoglutarate, glutamate and glutamine. The enzymes that catalyze these conversions are glutamine synthetase, glutaminase, glutamate dehydrogenase and glutamine alpha-ketoglutarate aminotransferase. In low-GC Gram-positive bacteria the transcriptional control over the levels of the related enzymes is mediated by four regulators: GlnR, TnrA, GltC and CodY. We have analyzed the genomes of all species belonging to the taxonomic families Bacillaceae, Listeriaceae, Staphylococcaceae, Lactobacillaceae, Leuconostocaceae and Streptococcaceae to determine the diversity in central nitrogen metabolism and reconstructed the regulation by GlnR.

Results
Although we observed a substantial difference in the extent of central nitrogen metabolism in the various species, the basic GlnR regulon was remarkably constant and appeared not affected by the presence or absence of the other three main regulators. We found a conserved regulatory association of GlnR with glutamine synthetase (glnRA operon), and with the transport of ammonium (amtB-glnK) and glutamine/glutamate (i.e. via glnQHMP, glnPHQ, gltT, alsT). In addition, less-conserved associations were found with, for instance, glutamate dehydrogenase in Streptococcaceae, purine catabolism and the reduction of nitrite in Bacillaceae, and aspartate/asparagine deamination in Lactobacillaceae.

Conclusions
Our analyses imply GlnR-mediated regulation in constraining the import of ammonia/amino-containing compounds and the production of intracellular ammonia under conditions of high nitrogen availability. Such a role fits with the intrinsic need for tight control of ammonia levels to limit futile cycling.
Background
The assimilation and re-distribution of nitrogen within a cell is essentially controlled within the central metabolic conversions between alpha-ketoglutarate, glutamate and glutamine (Figure 1A). The enzymes that catalyze these conversions are glutamine synthetase (GS), glutaminase (G), glutamate dehydrogenase (GDH) and glutamine alpha-ketoglutarate aminotransferase (GOGAT). On a short timescale, the enzyme activity is controlled via activating and inhibitory molecular interactions. For instance, the activity of GS is suppressed via feedback inhibition (FBI-GS) by the product glutamine and by AMP [1]. Under conditions of nitrogen limitation a high GS activity is maintained to ensure a sufficient level of glutamine [2,3]. On a longer timescale, the enzyme levels are controlled via the activity of a limited number of transcription regulators.
Marked differences exist in the transcription control of the genes encoding the enzymes involved in central nitrogen metabolism across the bacterial kingdom. In the Gram-positive model organism Bacillus subtilis, the expression of these genes is mediated by four major transcription factors: CodY, GlnR, TnrA [4] and GltC [5,6]. Of these, GlnR, TnrA and GltC are specific for nitrogen metabolism, whereas the global regulator CodY is linked to both carbon and nitrogen metabolism [7]. GltC is specifically associated with the control of the genes encoding glutamine alpha-ketoglutarate aminotransferase. The transcription factor GlnR is active during growth with excess nitrogen, whereas TnrA is active during nitrogen-limiting growth [4]. The change in activity of these transcription factors is affected directly by GS and by feedback inhibition (FBI) of the enzyme (Figure 1B). In B. subtilis GlnR is activated in the presence of FBI-GS [8], and TnrA is inhibited through a physical interaction with FBI-GS [2,9]. It was also shown that TnrA binds to the PII-like regulatory protein GlnK, which is sensitive to ATP, Mg2+ and alpha-ketoglutarate [9,10]. The two proteins become tightly associated with the ammonium permease AmtB at a low level of ATP. In Streptococcus mutans, cross-linking and pull-down assays demonstrated that GlnR also interacts with GlnK and that the interaction enhances the binding of GlnR to its cognate site upstream of the glnRA operon [11].
In B. subtilis and many other low-GC Gram-positives, the genes encoding GlnR (glnR) and GS (glnA) constitute the operon glnRA [12]. In B. subtilis GlnR was reported to repress the transcription of the glnRA operon (negative autoregulation), of tnrA [13] and of the urease gene cluster (ureABC) [14]. On the other hand, TnrA was reported to affect the transcription of a larger set of genes/operons [15], for instance activating glnQHMP (encoding a glutamine ABC transport system [16]), amtB-glnK (i.e. nrgBA; encoding an ammonium permease and the regulatory protein GlnK [17]), nasA and nasBC/DEF (encoding proteins related to nitrite reduction [18]), gabP (encoding a gamma-aminobutyrate transporter [19]) and pucR (encoding the purine catabolism regulator [20]), while repressing alsT (encoding an H+/Na+ amino acid symporter [15]), gltAD (encoding glutamate synthase [21,22]) and ilvBHC-leuABCD (encoding branched-chain amino acid biosynthesis proteins [23]). Similarly, in the oral Streptococci S. pneumoniae and S. mutans, GlnR was reported to repress the transcription of the glnRA operon and of the glnPQ operon (encoding another glutamine ABC transport system) in both organisms, of gdh (encoding glutamate dehydrogenase) in the former, and of the amtB-glnK and citBZ-idh operons (encoding aconitate hydratase, citrate synthase and isocitrate dehydrogenase [24]) in the latter organism [25,26]. Comparative genome analyses have shown that GlnR, TnrA and CodY are characteristic of the low-GC Gram-positive species, although their distribution is not uniform. For instance, whereas GlnR is found in almost all Bacillus species, TnrA has been identified in only a few. It is an intriguing question whether, in the absence of one of these main regulators, the others take over its role. We therefore decided to extend (i.e. from 16 to 173 genomes) a previous comparative analysis reported by [27] to identify the presence of the regulators and the genes they regulate in the low-GC Gram-positive species of the class Bacilli. This class includes the well-studied families Bacillaceae, Listeriaceae, Staphylococcaceae, Lactobacillaceae, Leuconostocaceae and Streptococcaceae.
We have redefined the binding motifs of GlnR and TnrA on the basis of the available experimental and sequence data and used them to identify their respective regulons anew. For that purpose we applied a footprinting approach formulated earlier by us [28] and a similar motif search procedure [29]. The composition of the GlnR regulon was compared across the various taxonomic families within the class Bacilli, and between species having only GlnR and those with additional regulators. For most families we found a rather stable composition of the GlnR regulon and some species-specific connections, independent of the presence or absence of the other two regulators. The data imply that GlnR-mediated regulation serves predominantly to limit the import of ammonia/amino-containing compounds and, at the same time, to limit the production of intracellular ammonia.
Results and discussion
Presence/absence analysis of the genes encoding the central enzymes and regulators

We identified the orthologs of the genes encoding the enzymes of central nitrogen metabolism (G, GS, GDH and GOGAT), the related transport systems and the regulators CodY, GlnR and TnrA in the sequenced genomes of species related to the class Bacilli, on the basis of BLAST searches with the sequences of experimentally verified proteins (see methods for details). In Tables 1 and 2 the analysis results for representative species of the orders Bacillales and Lactobacillales, respectively, are presented; the results for the complete set of analyzed species are given in Additional file 1. We observed a clear distinction in gene content between the two orders and between the different taxonomic families within the orders.
Remarkably, within the family Lactobacillaceae, Lactobacillus acidophilus and its close relatives lack all three regulators. Only three other species (Bacillus halodurans, Bacillus clausii and Bacillus selenitrireducens) lack a gene encoding GlnR. The global regulator CodY is present in most species, except for those of the families Lactobacillaceae and Leuconostocaceae. TnrA is only present in species of the order Bacillales within the families Bacillaceae, Paenibacillaceae and the genus Exiguobacterium, with the exception of the species and strains of the Bacillus cereus group, Alicyclobacillus acidocaldarius, Brevibacillus brevis and Lysinibacillus sphaericus.
Similarly, we observed a large variation in the presence of the enzymes of central nitrogen metabolism, but much less so in the related transport systems. The set of enzymes is complete within the family of the Bacillaceae and mostly reduced in the other families; in many of the Lactobacillaceae, Leuconostocaceae and Streptococcaceae only glutamine synthetase and one of the other enzymes is present. In the case of transport, at least one ammonium transporter AmtB (Amt-family; 1.A.11 in the TCDB classification [42]), also referred to as NrgA [40], is present in most species, although the transporter is absent in more than half of the analyzed Streptococcaceae, in three Bacillus anthracis strains, in the gut-related Lactobacilli (e.g. L. johnsonii and L. gasseri) and in some meat-related species (e.g. Lactobacillus sakei, Staphylococcus carnosus and Macrococcus caseolyticus). It was recently put forward that transport of ammonia (NH4+) should be active and tightly regulated to limit futile cycling [43]. This control was suggested to be exerted by the small PII-like regulator GlnK, earlier referred to as NrgB (and as GlnB in e.g. L. lactis); the corresponding genes are indeed found genetically associated with amtB in many of the analyzed species. However, at the same time, it is absent in many others, including all analyzed Lactobacillaceae. Moreover, GlnK was shown to interact with TnrA in B. subtilis [9,10] and with GlnR in S. mutans [11].
[Table 1 legend] GlnR-mediated regulation. Genes/operons that have a clear upstream GlnR binding site are marked by dark grey boxes (similarity score >87%), whereas genes/operons that are preceded by a less clear site are marked light grey (similarity score 80-87%). In case more genes are present encoding the same function, the number of genes with a clear binding site is indicated between brackets, and if in addition the other gene(s) are preceded by a less clear site, the cell is marked light grey. a) A GlnR-binding site is present, but downstream of tnrA, because the gene has the opposite direction compared to all other species. b) For one sequence no ORF was called; however, the gene could be identified using tBLASTN. c) One of the sequences is in two fragments. d) Seems part of an operon that includes a gene with a DUF294 domain and the gene dnaQ. e) The glnR and glnA duplicates in B. pseudofirmus OF4 are not part of a single operon and are located at different positions on the genome.
[Table 2 legend] The table summarizes the data retrieved for all species and strains, which can be found in Additional file 1. The number of orthologs/homologs in every genome is indicated. For gene content and how GlnR-mediated regulation is indicated, see the legend of Table 1. Every species is represented by one strain only. Species abbreviations: L., Lactobacillus; Ln., Leuconostoc; Pc., Pediococcus; S., Streptococcus. a) L. lactis has a codY paralog named codZ [41]. b) There is a second glnPH gene located next to the glnPHQ operon. c) The glnPH gene encodes two H domains. d) In O. oeni the gene glnPH is in one operon with a gene encoding an asparaginase, whereas a glnQ homolog is missing. e) In L. salivarius a second glnPH gene is not part of an operon and is located elsewhere on the genome with respect to the glnPHQ operon.
The B. subtilis glnQHMP system has been related to high-affinity glutamine transport [16], whereas the L. lactis system was shown to transport both glutamine and glutamate [44]. Recently, the E. coli-type system present in Streptococcus mutans was proven also to be involved in the transport of glutamate [45]. Remarkably, most of the species of the order Lactobacillales carry a copy of both types (Table 2). These Lactobacilli lack the genes encoding a glutamate dehydrogenase (gdh) or glutamate synthetase (gltAB). Therefore these species are unable to synthesize glutamate, which makes it essential to have a glutamate transport system. Most of the analyzed species encode one or more transporters of the DAACS-family (2.A.13) and the AGCS-family (2.A.25), with the exception of the species within the families Listeriaceae and Leuconostocaceae, some Lactobacillaceae and L. lactis. These transporter-protein families have been related to the cation symport of dicarboxylates and amino acids. The former family is represented by GltP (glutamate/aspartate [46]), GltT (glutamate [47]), DctA (C4-dicarboxylates including aspartate [48]), YhcL (or TcyP; cystine [49]) and Nqt (putative glutamate in B. subtilis), whereas the latter family is represented by GlnT (glutamine [50]), AlsT (amino acid [15]), YrbD (putative amino acid) and YflA (putative amino acid).
Identification of a GlnR and TnrA specific binding motif
The protein sequences of GlnR and TnrA are highly similar, and their reported DNA binding sites show little difference [27]. The palindromic consensus sequence has been defined as TGTNA-N7-TNACA [13,15,51-55]. Gel mobility shift assays indicated that TnrA and GlnR indeed bind to the same sites upstream of the tnrA gene and the glnRA operon, albeit with different specificity [13]. To achieve a separation of the predicted sites, we employed a genomic footprinting strategy that we formulated previously [28,56] to identify the GlnR-specific binding motif anew. The strategy involved the definition of Groups Of Orthologous Functional Equivalents (GOOFEs) on the basis of conserved genomic context. Within these GOOFEs we assumed conservation of the binding motif. In all analyzed species that contain glnR, the genetic association with glnA has been conserved. Moreover, for several species GlnR was shown experimentally to be autoregulatory, and therefore the upstream region of the glnRA operon within all genomes was scanned for a conserved GlnR binding site. In line with earlier published observations, we found a clear and strongly conserved binding site 3-6 nucleotides upstream of a putative -35 region (i.e. TTGAC) of the promoter in all analyzed species, and a second binding site overlapping the promoter in many of the Bacillus species (Figure 2).
It was shown in a cross-regulation study that the binding site upstream of the promoter of the glnRA operon in B. subtilis is only involved in GlnR-mediated regulation [13]. Therefore, to track potential differences between the GlnR and TnrA binding motifs, we used the conserved GlnR-binding sites upstream of the promoter to generate a family-specific position frequency matrix (see methods). It appeared that the frequency representations of the motif varied slightly between the Streptococci and the other Bacilli (Figure 3A and B). Both motifs that were generated for GlnR adhered to the consensus motif [13,15,51-55] and were similar to the motif that was previously defined by [27]. Then, a TnrA-specific motif was created on the basis of the TnrA sites upstream of amtB, ansZ, gabP, glnQ, nasA, nasB, nasD, oppA, pucJ, pucR, ykzB and ywrD. These binding sites were reported to relate to transcription activation in B. subtilis ([15,19,38,52,57-59] and raw data file 1) and are supposed to be TnrA-specific, as GlnR has not been reported to activate transcription. The frequency representation of the TnrA-specific motif is given in Figure 3C. A comparison of the GlnR- and TnrA-specific motifs shows that there is limited difference. Yet the TnrA motif clearly lacks the conserved A and T at the 3′ and 5′ ends, as was noted before. In fact, mutation of the conserved T at the 5′ end to a C or a G (but not A) was reported to abolish GlnR-mediated repression of the glnRA operon in B. subtilis [55], although [13] did not observe such an effect. Our new motifs also suggest that there is a slight preference for a G at position 7 and a C at position 13, which is less pronounced in the GlnR motif in Streptococci.
The predicted GlnR and TnrA regulon of B. subtilis
The GlnR- and TnrA-specific motifs were used to search the B. subtilis genome for similar sites using the Similar Motif Search (SMS) procedure described in the methods. The results of this search can be found in Additional file 2. Although the differences between the GlnR and TnrA motifs did not appear strong at first, the results of the motif search in B. subtilis suggest they are large enough to bring about some separation between GlnR and TnrA binding sites, in line with the observed variable affinities of these transcription factors for the same sites [13].
In principle, the highest-scoring sites are likely to be genuine binding sites, and by using a relatively high cut-off score of 0.89 the majority (>70%) of experimentally validated sites was indeed captured, for GlnR as well as for TnrA. Moreover, most other true binding sites scored just below the cut-off. Only 4 out of 22 reported TnrA binding sites were not recovered in this way. Some of the sites were actually found at a relatively large distance from the translation start (e.g. in the case of ilvB [60]), and many sites were found located in the shared regulatory region of neighboring genes located on opposite strands (so-called divergons). Although many sites were found in both searches, the similarity score was mostly clearly better for one than for the other. Genes/operons predicted to be controlled by both regulators included the known genes/operons glnRA (glutamine synthesis) and tnrA. Additional shared sites were found upstream of alsT, pucH, pucJKLM and the amtB-glnK operon. Although these sites have not been attributed to GlnR earlier and were described to be activated by TnrA [15,57], the relatively high similarity score and the evolutionary conservation, also among organisms that lack TnrA, suggest they are true binding sites. In the case of the amtB-glnK operon (import of ammonia) it was formerly concluded that it is not repressed by GlnR, on the basis of a singular observation: the amtB-glnK operon remained repressed in a GlnR deletion mutant (i.e. glnR57 [12]) in the presence of glutamine, similar to the wild-type [61]. However, this observation does not exclude repression by GlnR in case additional regulators are at play. In fact, in L. lactis it was shown that expression of the amtB-glnK operon is controlled by GlnR but also by CodY [62]. In S. mutans it was shown by electrophoretic mobility shift assay that GlnR binds to the promoter region of both the glnRA and the amtB-glnK operon [11]. The same study identified GlnK as an activator of GlnR DNA-binding. Besides, the data in Tables 1 and 2 indicate that a putative GlnR-binding site upstream of amtB is present across almost all species of the class Bacilli. The conservation of these putative binding sites, including the conservation of the flanking A and T nucleotides (see Additional file 3), suggests that GlnR represses the amtB-glnK operon in all analyzed species, and thus also in B. subtilis.
It was proposed that GlnR lacks the capability to recruit RNA polymerase and therefore acts solely as a repressor [13]. Given this lack of activating/recruiting capacity, it is to be expected that GlnR will only act on the expression of one gene in various divergons, like for instance on tnrA but not ykzB [58] and on pucH but not pucR [59]. Moreover, in various cases where TnrA was shown to activate transcription our analysis suggests the binding site is TnrA-specific, like for gabP, oppABCDF and glnQHMP.
The predicted GlnR regulon in oral Streptococci
GlnR binding-site predictions were performed for the oral Streptococci S. pneumoniae and S. mutans on the basis of the Streptococci-specific motif (results in Additional file 4). For S. pneumoniae D39 and S. mutans UA159, the genes/operons predicted to be controlled by GlnR were compared to the genes/operons whose transcription was most affected in a GlnR mutant [25,26]. We found good agreement between prediction and experiment for both organisms (see Table 3). In the case of S. pneumoniae D39, the most significantly up-regulated genes/operons, glnPHHQ and gdh, were represented by the best hits in our analysis. The analysis also revealed the presence of a clear binding site in front of 2 other genes/operons, in line with the predictions of [27]. These included the second glutamine ABC transporter (glnQHMP) and an operon containing enzymes of the urea cycle (arcAB). The clear regulatory connection between GlnR and the arcAB operon (encoding arginine deiminase and ornithine carbamoyltransferase) was found in all sequenced S. pneumoniae strains, but was absent in the other species. The absence of a change in arcAB and glnQHMP expression upon inactivation of glnR may be explained by the presence of additional regulatory interactions.
In the case of S. mutans UA159, the genes and operons found to be most affected in the knockout mutant [26] were the nrgA-SMU_1657c operon, coding for the ammonium transporter AmtB and its nitrogen regulatory protein GlnK; the citB-citZ-idh operon, coding for aconitate hydratase, citrate synthase and isocitrate dehydrogenase; the glnQHMP and glnPHHQ operons, encoding glutamine ABC transporters; and Smu.807, coding for a putative membrane protein, which is in a divergon with glnPHHQ. The best hits resulting from our analysis are also located upstream of the same operons. Moreover, we found a clear binding site preceding the genes gdh and thrC. The citB-citZ-idh operon has been shown to be essential for glutamate biosynthesis in S. mutans [24].
Conserved genetic associations of GlnR and the effect of the other regulators
GlnR binding-site predictions were performed for selected genomes that represent all sequenced species of the class Bacilli. We then collected the function annotation of all proteins encoded by genes/operons downstream of a putative GlnR-binding site that fitted the selection criteria (see methods), to generate an overview of the regulatory connections that are conserved between more than three species (accumulated in Additional file 4). The results are summarized in Tables 1, 2 and 4. As expected, we found a conserved regulatory connection between GlnR and the glnRA operon in all analyzed species, and with tnrA in all Bacilli. In only a few species of the order Bacillales did the related GlnR-binding sites deviate from the consensus (e.g. in some Geobacillus species). Another connection that was conserved in almost all of the analyzed species was that with amtB (often amtB-glnK).
Various additional conserved connections were found, although these appeared far more family-specific. For instance, in the order Lactobacillales a genomic association with the genes of the two glutamine ABC transporter encoding variants glnP H Q or glnQHMP were identified, whereas this association appeared to be replaced by one with the sodium/proton amino acid symporters encoded by gltT (glutamate, [47]) and alsT [15] in various species within the family of the Bacillaceae. The AlsT protein is very similar to GlnT, a cation-glutamine symporter, i.e. showing a high degree of sequence conservation and having about the same length and the same number of predicted transmembrane helices. Although the protein is sometimes referred to as an alanine transporter, AlsT could well be a cation-glutamine or asparagine symporter.
We also found a clear GlnR-binding site upstream of several genes involved in regulation, for instance of mcp (chemotaxis, found in several Geobacillus species) and of the ycsFGI-kipIAR-ycsK operon (cellular development, found in several Bacillus species). In the initial description of the ycsFGI-kipIAR-ycsK operon [65], ycsF was related to the lactam (e.g. 2-pyrrolidinone) utilization gene lamB of Aspergillus nidulans [66], and kipA (orf12) was related to a urea amidolyase of yeast. Later, KipI was identified as a protein inhibitor of auto-phosphorylation of kinase A, the sensor histidine kinase responsible for processing post-exponential phase information and for providing phosphate input to the phosphorelay that activates developmental transcription via phosphorylated Spo0A, and KipA as a protein that counteracts the inhibition [63]. YcsG showed similarity to BraB (branched chain amino acid transport system II) of Pseudomonas aeruginosa [67]. The operon was found repressed upon growth on good nitrogen sources like ammonia and glutamine and derepressed on poor nitrogen sources [68], in line with repression mediated by GlnR. The association with the ycsFGI-kipIAR-ycsK operon connects GlnR-mediated regulation to the regulation of sporulation in some Bacilli.
[Table 3 legend] The GlnR-binding site identifications were made as described in the methods for all strains with a published genome (data in Additional file 4). The composition of the regulon appeared identical between strains, although the similarity scores of particular binding sites varied slightly. The table lists the numbers obtained with strains TIGR4 and UA159, respectively. Column 4: The ranking is based on the scores obtained with the similar motif search procedure, which gives various sites identical scores and thus identical ranking. The absence of certain high-scoring sites (e.g. ranked 4) was caused by the conservative criteria we applied for a site to qualify as a putative binding site. Column 5 gives the observed ranking on the basis of the transcriptional response to a glnR knockout mutation (k.o.), as derived from [25], and the ranking of operons that are downregulated in a S. mutans glnR knockout after exposure to acid stress for 30 minutes [26]. The ranking score was calculated by dividing the fold change in the glnR mutant by the fold change in the wild-type. Column 1 (a): In some strains the arcA and glnQ ORFs have not been called. The published sequence of strain R6 suggests arcA and glnQ are truncated in this strain. Column 4 (b): The composition of the binding site upstream of arcAB and glnQHMP varies between strains. * Although [25] studied the effects in strain D39, the microarrays used were based on strains TIGR4 and R6. For reasons of comparison we have therefore listed the locus tags of strain TIGR4.
Another important finding was that in B. subtilis many operons related to the purine degradation pathway are controlled by GlnR and/or TnrA, such as pucABCDE, pucH, pucI, pucJKLM and ureABC. The relation between purine catabolism and control by TnrA was established experimentally before: a tnrA mutant strain could not use purines or their metabolic intermediates as a nitrogen source under nitrogen-limited conditions [20]. Nevertheless, the extent to which both GlnR and TnrA are connected to the related operons is surprising.
We observed no clear dependency between the composition of the predicted GlnR regulon and the presence or absence of the other nitrogen-related regulators CodY and TnrA. For instance, there are only a few differences between the predicted GlnR regulons of B. subtilis and B. cereus, suggesting that GlnR does not take over regulatory roles of TnrA. Similarly, the presence or absence of CodY does not seem to affect the size of the GlnR regulon in the Lactobacillaceae. In L. lactis, a species that has CodY, it was shown experimentally that at least three genes/operons (amtB, glnRA and glnPHHQ) are repressed by GlnR [62]. We indeed identified clear GlnR-binding sites in the upstream regions of these three genes/operons in L. lactis. In L. plantarum and L. monocytogenes, two species that lack CodY, the same genes/operons appear to be preceded by a GlnR-binding site, and only a few additional genes were found connected to GlnR, indicating that GlnR does not take over the role of CodY in these species. The predicted GlnR regulon was smallest, consisting of only glnRA, in the meat isolates Macrococcus caseolyticus (a CodY- and GlnR-containing Staphylococcus) and L. sakei (a GlnR-containing Lactobacillus).
Conclusions
We have analyzed all sequenced Bacilli for the presence of genes encoding central nitrogen metabolism and transport of the related metabolites, and identified their connection to the nitrogen metabolism regulator GlnR. Although there is a considerable variety in the presence of the central enzymes GS, G, GDH and GOGAT, and in the number of available transport systems for the central nitrogen-related metabolites, the composition of the GlnR regulon is relatively invariable between species. Moreover, we hardly found an effect of the absence or presence of the other regulators CodY, TnrA and GltC on the size of the predicted GlnR regulon. We made an initial conservative regulon prediction by restricting the regulatory association to those connections that are conserved between at least three species. In general, our findings are also in line with previous comparative in silico analysis performed on a limited number of species [27]. Careful redefinition of a specific GlnR-binding and a specific TnrA-binding motif caused a slight but clear separation in the predicted regulons. It is likely that the conserved A/Ts at the 3′ and 5′ end of the GlnR motif, which are absent in the TnrA motif, contribute significantly to the separation. For B. subtilis, S. pneumoniae, S. mutans and L. lactis our predictions complied with the available experimental data. Moreover, within the Bacilli we identified several new potential members of the GlnR regulon, including the ywoCD operon and the ycsFGI-kipIAR-ycsK operon.
Our analysis confirmed that for most species the size of the GlnR regulon is relatively small. The main regulatory associations in the species of the class Bacilli are with the incorporation of ammonium into central metabolism (or with the production of ammonium at high glutamine concentrations!) via glutamine synthetase (glnRA operon), and with ammonium (amtB-glnK) and glutamine/glutamate transport (i.e. via glnQHMP, glnPHQ, gltT, alsT). At the same time, the less conserved associations point to a somewhat broader role. Many of the conserved associations include genes that relate either directly (e.g. ansA, arcA, aspA, gdh, nasDEF, ureABC) or more indirectly (by controlling intermediate steps, e.g. citBZ-idh, pucH, thrBC) to the intracellular production of ammonia, or are related to the import of aminated compounds (e.g. gabP, opp-dpp, pucI, ywoCD). Thus, it appears that the main conserved role of GlnR is to prevent the influx and intracellular production of glutamine and ammonium under conditions of high nitrogen availability. The connection of GlnR-mediated repression with the control of the intracellular ammonia concentration is interesting. Such a role fits with the intrinsic need for tight control of ammonia levels as put forward by [43], who argue that transport of ammonia (NH4+) should be tightly regulated to limit futile cycling by diffusion of ammonia out of the cell.
Data and tools
Complete genomic sequences and initial annotations were obtained from NCBI ([68]; June 2011). Multiple sequence alignments were made with ClustalX [69], and BioEdit [70] was used to analyze sequences and alignments. Specific bootstrapped neighbor-joining trees, with 'correction for multiple substitutions', were created using ClustalX, and the trees were analyzed using LOFT [71] and Dendroscope [72]. The Microbial Genome Viewer 2.0 (http://mgv2.cmbi.ru.nl) was used to examine functional information within the genomic context. Frequency representations of aligned sequences were created with WebLogo [73]. Microarray data from glnR gene knockouts in Streptococcus pneumoniae and Streptococcus mutans used in this research were extracted from the Gene Expression Omnibus at NCBI [74].
The raw data resulting from the various analyses can be found at http://www.cmbi.ru.nl/bamics/supplementary/GrootKormelinketal_2012_GlnRregulon/. Data file 1: GlnR and TnrA motifs used for SMS; data file 2: GlnR motif search in Bacilli (w/o Streptococci); data file 3: GlnR motif search in Streptococci; data file 4: GlnR and TnrA motif search in Bacillus subtilis.
Classification and annotation of protein sequences
To obtain all proteins of a certain family, a prominent representative was chosen (listed in the legend of Table 1) and a BLAST search [75] was performed (cut-off < e−5) on all publicly available sequenced Bacilli genomes. The list of collected sequences (given as Additional file 1) was then inspected. For all enzymes, the sequences could be grouped into specific clusters based on BLAST e-value only. In practice, we found a group of sequences with comparably (very) low e-values (< e−30) separated from the rest of the sequences by considerably higher e-values (separation < e−15). In the case of the transcription regulators the separation remained clear, although with higher e-values due to the short length of the regulator protein sequences. In the case of the transporters, an e-value cut-off also sufficed to collect all family members for the Amt-family (1.A.11), the DAACS-family (2.A.13) and the AGCS-family (2.A.25), whereas for the PAAT-family (3.A.1.3; ABC transport) the coding sequences of the putative glutamine/glutamate substrate-binding domains were aligned, the alignment was inspected by eye, deviant sequences were removed, and a bootstrapped neighbour-joining tree was generated (see [76]). The tree was divided into clusters on the basis of the branching. For each cluster, single-species representatives were considered orthologous.
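As a rough illustration of this e-value-based grouping, the following sketch partitions a hit list and checks for the wide gap that separates putative family members from the remaining hits. The thresholds mirror those quoted above, but the (id, e-value) input format and the example values are assumptions for illustration, not the authors' code.

```python
# Split BLAST hits into likely family members vs. the rest, and verify that
# a large e-value gap separates the two groups, as described in the text.

def split_family(hits, member_cutoff=1e-30, separation=1e-15):
    """hits: list of (sequence_id, e_value) tuples from a BLAST search (< 1e-5)."""
    ranked = sorted(hits, key=lambda h: h[1])             # best (lowest) e-value first
    members = [h for h in ranked if h[1] <= member_cutoff]
    rest = [h for h in ranked if h[1] > member_cutoff]
    # Trust the split only when the worst member and the best non-member are
    # separated by a factor of at least 1/separation.
    if members and rest:
        gap_ok = members[-1][1] / rest[0][1] <= separation
    else:
        gap_ok = True
    return members, rest, gap_ok

# Toy usage with invented identifiers and e-values:
members, rest, ok = split_family([("glnA_Bsu", 3e-80), ("glnA_Lla", 5e-62), ("ywoD", 2e-8)])
```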
Motif definition and Similar Motif Scoring (SMS)
The upstream region of the conserved glnRA operon was retrieved for all species and the promoter region was aligned (see Figure 2). Then the conserved sequence upstream of the promoter was collected. For B. subtilis it was shown that this site is only involved in GlnR-mediated regulation [13]. The collection was used to generate a position frequency scoring matrix for each taxonomic family (raw data file 1). It appeared that the frequency matrix was very similar for all species except for the Streptococci, where it was slightly different (illustrated in Figure 3). For the definition of a TnrA-specific binding site, the upstream regions of genes whose transcription was shown to be activated by TnrA in B. subtilis (raw data file 1) were retrieved, and the binding site was identified on the basis of the published characteristic GlnR/TnrA motif and the short distance upstream of the promoter. The collection of sites was then used to generate a position frequency scoring matrix.
The position frequency matrices were used to identify potential binding sites in the analyzed genomes using a similarity search method we formulated before [29]. The method relies on the fact that one of the most common practices observed in the literature to reconcile prediction with experiment is to minimize the number of differences between the target and the query (or the 'consensus'). In fact, this criterion can be captured in a straightforward scoring using only the position frequency matrix: given any number of input sequences of size i, the nucleotide frequency f_N(j) (where N ∈ {A, C, T, G}, and frequency is expressed as a fraction) at every position j = 1 to i can be used directly to provide any target sequence of size i with a score, by simply adding up the input-based frequencies that correspond to the nucleotide composition of the target. Division of the score by the length of the sequence i results in a 'similarity' score that can range from 0 to 1. Dividing this number by the highest attainable score given the input matrix then yields a relative 'similarity' score. In case the input sequences are representative of high-affinity sites, the ranking of target sequences according to score should approximately correspond to a ranking based on affinity. The method was tested and appeared to be at least as good at identifying putative regulatory elements on the basis of known input motifs as the commonly used tool MAST [77], while providing a similarity score that is far easier to interpret and use. We identified putative GlnR regulon members for all species on the basis of two simple criteria: i) a relative similarity score > 87%; and ii) a position between 250 and 0 bases upstream of the predicted translation start. In some cases, experimentally verified more distant sites, as well as known intergenic sites, were also included. The results are given in Additional file 4.
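The scoring itself reduces to a few lines. Below is a minimal sketch of the relative similarity score and the upstream scan, assuming the position frequency matrix is given as a list of nucleotide-to-fraction dictionaries; the toy motif and the placement of the 87% cut-off are illustrative, not taken from [29].

```python
# Similar Motif Scoring (SMS) sketch: per-position frequency sum, normalized by
# motif length and by the highest attainable score, as described in the text.

def sms_relative_score(target, pfm):
    """Relative similarity of `target` (len == len(pfm)) to the input motif."""
    score = sum(pfm[j].get(nt, 0.0) for j, nt in enumerate(target)) / len(pfm)
    best = sum(max(col.values()) for col in pfm) / len(pfm)   # highest attainable
    return score / best                                       # ranges from 0 to 1

def scan_upstream(region, pfm, cutoff=0.87):
    """Slide the motif over an upstream region (e.g. the 250 bp before the
    translation start) and keep windows scoring above the cutoff."""
    w = len(pfm)
    return [(pos, sms_relative_score(region[pos:pos + w], pfm))
            for pos in range(len(region) - w + 1)
            if sms_relative_score(region[pos:pos + w], pfm) > cutoff]

# Toy 3-bp motif and sequence, purely for illustration:
pfm = [{"T": 0.9, "A": 0.1}, {"G": 0.8, "T": 0.2}, {"T": 0.7, "C": 0.3}]
hits = scan_upstream("ATGTTTGT", pfm)   # -> perfect "TGT" windows score 1.0
```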
Molecular composition of the submicrosomal membrane lipid of rat brain.
Rough-surfaced and light and heavy smooth-surfaced microsomes were isolated from rat brain by means of discontinuous sucrose gradient centrifugation. Electron microscopically, the rough-surfaced microsomes were characterized by vesicles with ribosomes and the light and heavy smooth-surfaced microsomes by fairly homogeneous membrane features without ribosomes. The rough-surfaced microsomal membranes were distinguished by the absence of glycolipids, such as ganglioside, cerebroside, and sulfatide. Cerebroside was exclusively recovered in the light smooth-surfaced microsomal membranes. Ganglioside and Na,K-ATPase were contained in larger amounts in the heavy smooth-surfaced microsomal membranes than in the light smooth-surfaced microsomal membranes in terms of protein content. Among the three submicrosomal membranes, cholesterol and phospholipid were found in the largest amounts in the light smooth-surfaced microsomal membranes, where the molar ratio of cerebroside-cholesterol-phospholipid was about 1:10:10. The membranes of rough- and smooth-surfaced microsomes were very similar in regards to the composition of phospholipid classes, although the fatty acid composition of the former contained a greater proportion of unsaturated fatty acids than that of the latter. When the membrane proteins were analyzed by sodium dodecyl sulfate gel electrophoresis, some differences were observed between the light and heavy smooth-surfaced microsomal membranes.
Although differences in the properties of biological membranes can be linked to differences in lipid composition, our knowledge of the biological significance of lipid constituents of membranes is very limited. In particular, no convincing studies on the distribution of glycolipids in neuronal membranes have been made, due to the inherent anatomical complexity of the brain. Postmitochondrial particles, though heterogeneous (3), have so far been referred to as "microsomes" of the brain, and most studies of their neurobiological significance have utilized such heterogeneous fractions (5, 7, 12, 21, 35). Thus, to clarify unequivocally the distribution of glycolipids in neuronal membranes and their significance, it is a prerequisite to prepare pure neuronal and membrane fractions. From this point of view, we have studied the chemical composition of the isolated neuronal perikaryon, which clarified some chemical characteristics of the neuron (28, 29). In contrast to the presence of abundant endoplasmic reticulum and well-preserved plasma membrane, neither cerebroside nor sulfatide was detected in the cell body; an unexpectedly small amount of ganglioside was present, though these lipids had previously been shown to be constituents of microsomes in the brain. These observations imply that cerebroside and sulfatide are not neuronal constituents and, moreover, that they may be nonmicrosomal lipids in many kinds of cells, and that ganglioside may be diffusely localized over the whole neuronal plasma membrane. On the other hand, significant amounts of cerebroside and sulfatide have been observed in isolated neuronal perikarya by Norton and Poduslo (18) and Hamberger and Svennerholm (11).
These experimental findings led us to investigate the distribution of lipids, including ganglioside, cerebroside, and sulfatide, as well as Na,K-ATPase, in submicrosomal membranes, using electron microscopically well-defined materials. Some unusual chemical features of submicrosomal membranes of the rat brain are reported.
Isolation of Submicrosomal Membranes
The basic procedure followed the slightly modified method of Rothschild (20) and Peters (19), originally developed for the isolation of smooth- and rough-surfaced microsomes of the liver. Five to ten Wistar male rats, weighing 200 g on average, were used in each experiment. Under ether anesthesia, the brain was perfused through the left ventricle with 50 ml of saline until the red color of the eyes faded. The rat was then decapitated, and the whole brain was rapidly removed and placed in ice-cold saline. The cerebrum, after being freed from the cerebellum and the brain stem, was minced with tweezers and then homogenized in 4 vol of 0.88 M sucrose solution with a Teflon-glass homogenizer having a clearance of about 0.25 mm. This homogenization procedure was carried out carefully at a constant rate of one stroke per minute to ensure consistent yield and quality of the membrane fractions. The homogenate was centrifuged at 25,000 g for 20 min. The supernate was mixed with an equal volume of 1.76 M sucrose, and 2 ml of the mixture was carefully overlaid with 7 ml of 1.23 M sucrose and 1.5 ml of 0.15 M sucrose successively, then centrifuged at 105,000 g for 16 h in Beckman fixed-angle rotors. After 16 h, a cloudy upper phase at the gradient boundary between 0.15 and 1.23 M sucrose, a slightly opalescent intermediate phase between the upper phase and the pellet, and a clear yellow pellet were observed. These two phases and the pellet were designated as light smooth-surfaced, heavy smooth-surfaced, and rough-surfaced microsomal membranes, respectively. Light and heavy smooth-surfaced microsomal membranes were removed separately with a J pipette, diluted with 3 vol of cold water, and sedimented as pellets by centrifugation at 105,000 g for 2 h. The inner walls of the test tubes which contained the three different microsomal membranes as pellets were rinsed with cold water and wiped with soft paper. Then, the membrane fractions were homogenized in cold water and centrifuged at 105,000 g for 90 min. This washing procedure was repeated two times. The pellets thus obtained were suspended in a given volume (5 ml) of water by thorough homogenization and subjected to chemical analyses.
Electron Microscopy
Submicrosomal membranes were obtained as pellets, as described above, except that water was replaced by 0.32 M sucrose for washing. The microsomal pellets were fixed for 8 h in 1% OsO4 in Millonig's phosphate buffer at pH 7.3 in the cold, then dehydrated in increasing concentrations of ethanol. After immersion in propylene oxide, the pellets were embedded in Epon. Ultrathin sections were obtained from the top, middle, and bottom parts of the pellets and stained with uranyl acetate and lead citrate. After carbon impregnation in vacuo, specimens were examined under a Hitachi HU-11B electron microscope.
Chemical Analyses
The analytical data presented in this paper are averages of more than ten different preparations of submicrosomal membranes, unless otherwise indicated. Protein content was determined by the method of Lowry et al. (16). RNA was extracted by the procedure of Fleck and Munro (9) as modified by Steele et al. (26) and was determined by the orcinol reaction (6). Na,K-ATPase activity was assayed in a medium containing 5 mM Tris-ATP, 100 mM NaCl, 20 mM KCl, 6 mM MgCl2, and 30 mM Tris-HCl, pH 7.4, in the presence or absence of 1.5 mM ouabain. The reaction was terminated by the addition of trichloroacetic acid at a final concentration of 6%. Released inorganic phosphate was determined by the method of Fiske and Subbarow (8). The difference between the values in the absence and in the presence of ouabain was designated as the Na,K-ATPase activity and expressed in micromoles Pi released per hour per gram wet weight of tissue.
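The activity calculation itself is simple arithmetic: the ouabain-sensitive fraction of released phosphate, normalized by time and tissue mass. A sketch is shown below; the incubation time, tissue mass and phosphate values are invented placeholders, not measurements from this study.

```python
# Na,K-ATPase activity as the ouabain-sensitive difference in released Pi,
# expressed in micromoles Pi per hour per gram wet weight, as described above.

def nak_atpase_activity(pi_total_umol, pi_ouabain_umol, hours, tissue_g):
    """Ouabain-sensitive ATPase activity (umol Pi / h / g wet weight)."""
    return (pi_total_umol - pi_ouabain_umol) / (hours * tissue_g)

activity = nak_atpase_activity(pi_total_umol=1.8, pi_ouabain_umol=0.6,
                               hours=0.5, tissue_g=0.02)   # -> 120.0
```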
Lipid was extracted with 20 vol of chloroform-methanol (2:1, by volume) in a Teflon-glass homogenizer. After filtration, the extract was evaporated to dryness under an N2 stream in a rotary evaporator. The dried material was dissolved in a given volume of chloroform-methanol (2:1) and substances soluble in this solvent were partitioned against water as described by Folch et al. (10). The upper and lower phases thus obtained were adjusted to 2 and 4 ml, respectively, and subjected to chemical analysis. Silica Gel-G (Merck, Darmstadt, W. Germany) plates (0.25 or 0.4 mm in thickness) were used for thin-layer chromatography (TLC) after activation for 90-120 min at 120°C. Two-dimensional TLC of total lipid classes in the lower organic phase was carried out in a mixture of chloroform-methanol-concentrated ammonia (13:7:1) in the first dimension, followed by chloroform-acetone-methanol-acetic acid-water (10:4:2:2:1) in the second dimension. Cholesterol was measured by the method of Searcy and Bergquist (22), and the phosphorus of the total lipid by Bartlett's method (2). The individual phospholipids were separated on TLC plates as described by Skipski et al. (24), and determined as described by Keenan et al. (13), with materials scraped from the TLC plates. N-Acetylneuraminic acid (NANA) in the upper aqueous phase, taken as an indicator of ganglioside, was measured as described by Warren (32). Cerebroside was determined by a photodensitometric method as follows: three different concentrations of total lipid in the lower organic phase and known concentrations of kerasine (cerebroside with nonhydroxy fatty acid) purified from bovine brain were carefully spotted in 5-mm bands on a Silica Gel-G plate and developed in a mixture of chloroform-methanol-water (65:25:4). After development, the plate was sprayed with 3 ml of 50% H2SO4 and charred on a 2-kW hot plate at maximum temperature for 40 min. The densities of the charred spots were scanned with a Schoeffel spectrodensitometer model SD 3000 (Schoeffel Instrument Corp., Westwood, N.J.) at 565 nm with a slit width of 0.5 mm.
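The densitometric quantification amounts to reading the sample spot densities off a standard curve built from the kerasine spots. A sketch of such a calibration is given below; the assumption of a linear density response over this range, and all numerical values, are illustrative.

```python
# Standard-curve calibration for the photodensitometric cerebroside assay:
# fit charred-spot density vs. known kerasine amount, then invert for samples.

import numpy as np

std_ug = np.array([2.0, 5.0, 10.0, 20.0])        # kerasine spotted (ug)
std_od = np.array([0.11, 0.26, 0.50, 0.98])      # spot density at 565 nm

slope, intercept = np.polyfit(std_ug, std_od, 1)  # linear standard curve

def cerebroside_ug(sample_od):
    """Amount of cerebroside in a sample spot from its scanned density."""
    return (sample_od - intercept) / slope

print(cerebroside_ug(0.40))                       # ~8 ug in the sample spot
```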
For gas-liquid chromatographic analyses of the fatty acid composition of phosphoglyceride, lipid was freshly extracted from submicrosomal membranes as described above, and fatty acids were methylated with sodium methoxide in dry methanol as described by Svennerholm (27). Chromatography was carried out at 160°C with a Shimadzu model GC-4BM gas chromatograph using a glass column 1.5 m in length packed with 15% ethylene glycol succinate on Celite 545. The individual esters were identified by comparison with authentic samples or with the aid of a plot of log retention times.
Sodium Dodecyl Sulfate (SDS)-Polyacrylamide Gel Electrophoresis of the Membrane Proteins
The membrane fractions were dissolved in 10 mM sodium phosphate buffer, pH 7.0, which contained 1% each of SDS and β-mercaptoethanol. After dialysis against 10 mM sodium phosphate buffer, pH 7.0, which contained 0.1% each of SDS and β-mercaptoethanol, electrophoresis was carried out in 7.5% acrylamide gel containing 1% SDS by the method of Weber and Osborn (33). Gels were stained with Coomassie brilliant blue (33). The density of the destained gel was traced with a Joyce-Loebl microdensitometer equipped with a 620-nm filter (Joyce, Loebl & Co., Ltd., Gateshead, England).
Identification of Submicrosomal Membrane Fractions
Representative photographs of the three submicrosomal membrane fractions are shown in Fig. 1. The rough-surfaced microsomal membrane fraction (Fig. 1 c) was characterized by vesicles with ribosomes and free ribosomes. Larger, more electron-dense particles were also seen. In the light and heavy smooth-surfaced microsomal membrane fractions (Fig. 1 a, b), ribosomal particles and the electron-dense large particles were hardly present, but vesicular elements of various sizes were seen. The heavy smooth-surfaced microsomal membrane fraction appeared more homogeneous in regards to membrane structure than the light smooth-surfaced microsomal membrane fraction. In the latter fraction, mitochondria, synaptosomal debris, and myelin were seen as minor contaminants.
Components Analysis and Enzyme Activity of Submicrosomal Membranes
The amounts of protein, RNA, Na,K-ATPase, cholesterol, phospholipid, and lipid-bound NANA are summarized in Table I. The amount of protein recovered in the smooth-surfaced microsomal membranes was about three times as high as that in the rough-surfaced microsomal membranes. RNA was found in the highest amount in the rough-surfaced microsomal membranes, whereas a small amount was found in both types of smooth-surfaced microsomal membranes. This result was compatible with the electron microscope observations of the distribution of ribosomal particles in the submicrosomal membrane fractions.
Na,K-ATPase activity was observed preponderantly in the smooth-surfaced microsomal membranes, whereas only a low activity was observed in the rough-surfaced microsomal membranes. Lipid-bound NANA was distributed in the same way as Na,K-ATPase activity, and was practically absent in the rough-surfaced microsomal membranes.
[Figure 1 a: Electron micrograph of light smooth-surfaced microsomes. These microsomes consist of membranous vesicles of a more heterogeneous appearance than those of heavy smooth-surfaced microsomes in regards to shape, size, density, and their content in the vesicles. × 50,000.]

Both cholesterol and phospholipid were recovered in large amounts in the smooth-surfaced microsomal membranes, and in the light smooth-surfaced microsomal membranes, especially, a considerably higher amount of cholesterol was recovered.
Content of Lipid Classes in the Submicrosomal Membranes
Lipid classes in each submicrosomal membrane are demonstrated by thin-layer chromatography in Fig. 2. Cerebroside and sulfatide were barely detectable in the rough-surfaced microsomal membranes. In striking contrast, these glycolipids were major constituents of the light smooth-surfaced microsomal membranes. In the heavy smooth-surfaced microsomal membranes, these glycolipids were occasionally detected in small amounts. Cholesterol, phosphatidylethanolamine, phosphatidylcholine, and phosphatidylserine were detected in all three submicrosomal membranes. In addition, sphingomyelin was observed in the smooth-surfaced microsomal membranes.
The content of lipid-bound NANA, an indicator of ganglioside, is shown in Table II. The average value per nanomole of phospholipid phosphorus was 4 × 10⁻² nmol in the heavy smooth-surfaced microsomal membranes and 2 × 10⁻² nmol in the light smooth-surfaced microsomal membranes. Lipid-bound NANA was negligible in the rough-surfaced microsomal membranes. The value of 0.1 × 10⁻² nmol in preparations 3 and 4 of the rough-surfaced microsomal membranes was nearly at the lowest limit of spectrophotometric measurement with the amount of the material used. Thus, it is clear that the ganglioside content of the rough-surfaced microsomal membranes is less than 1/20 and 1/40 of those in the light and heavy smooth-surfaced microsomal membranes, respectively. It should be noted that the content of lipid-bound NANA per phospholipid phosphorus was approximately two times higher in the heavy smooth-surfaced microsomal membranes than in the light smooth-surfaced microsomal membranes. Table III shows that a large amount of cerebroside was found in the light smooth-surfaced microsomal membranes. Despite the large amount of lipid subjected to analysis, as indicated in the footnote of Table III, no cerebroside was detected in the rough-surfaced microsomal membranes. Trace amounts of cerebroside were detected in a few preparations of the heavy smooth-surfaced microsomal membranes.

[Figure 2: Two-dimensional thin-layer chromatograms of submicrosomal membrane lipid of rat brain. (a) Light smooth-surfaced microsomal membranes. (b) Heavy smooth-surfaced microsomal membranes. (c) Rough-surfaced microsomal membranes. The values represent mean values of three separate submicrosomal membrane preparations (about 30 brains); three samples were analyzed in each case.]

Phosphatidylcholine and phosphatidylethanolamine accounted for one-half and one-fourth, respectively, of the total phospholipid in all submicrosomal membranes. The fatty acid composition of glycerophospholipid is shown in Table V. Palmitic, stearic, and oleic acids were major components in all three submicrosomal membranes. The content of arachidonic acid was higher in the rough-surfaced microsomal membranes than in the smooth-surfaced microsomal membranes. On the whole, the fatty acids of the former membranes appeared to be more unsaturated than those of the latter membranes.
Smooth-Surfaced Microsomal Membrane Proteins
As shown in Fig. 3, two minor bands (indicated by arrows) observed with the light smooth-surfaced microsomal membranes were faint or absent in the heavy smooth-surfaced microsomal membranes. Microdensitometric tracing of the gels revealed further dissimilarities. The proportions of each protein band in the two membranes differed significantly (Fig. 4 a, b).
DISCUSSION
To isolate the submicrosomal membranes from rat brain, the procedure of Rothschild (20) and Peters (19), who worked with rat liver, was applied in the present study. Three membrane fractions, light and heavy smooth-surfaced and rough-surfaced microsomal membranes, were obtained, and each of them was found to be fairly homogeneous by electron microscope observation. Table VI and Fig. 5 show the distribution of individual lipids, the content of RNA, and the Na,K-ATPase activity in terms of micromoles per milligram of protein. It is clear that cerebroside is associated with the light smooth-surfaced microsomal membranes, but not with ribosome-bound membranes or the heavy smooth-surfaced microsomal membranes. Ganglioside is distributed only in ribosome-free membranes; the heavy smooth-surfaced microsomal membranes contained 1.5 times more of this lipid than the light smooth-surfaced microsomal membranes (Table VI). The present results provide the first evidence that glycolipids do not exist in ribosome-bound membranes. The view that membranes of rough- and smooth-surfaced endoplasmic reticulum are continuous has been proposed by some investigators (17, 34). If this concept is accepted, the present results on lipid compositions in submicrosomal membranes support our earlier suggestions (14, 28, 29) that glycolipids including ganglioside are not present inside the nerve cell perikaryon and that ganglioside is distributed on the neuronal plasma membrane. The amounts of lipid-bound NANA found in both smooth-surfaced microsomal membranes are within the range of values reported for unfractionated microsomes (1, 4, 15, 23, 25, 36, 37). Na,K-ATPase activity was also found exclusively in ribosome-free membranes. The activity in the heavy smooth-surfaced microsomal membranes is 1.5 times greater than that in the light smooth-surfaced microsomal membranes, in parallel with the distribution of ganglioside (Table VI). In our preparations the molar ratio of cholesterol to phospholipid was about 0.2, 0.6, and 1 in the rough-surfaced and in the heavy and light smooth-surfaced microsomal membranes, respectively (Table VI). Considering that cholesterol may contribute to the stabilization of molecular architecture by strong van der Waals forces (30, 31), and that the fatty acids were less unsaturated in the smooth-surfaced microsomal membranes than in the rough-surfaced microsomal membranes (Table V), the membranes of the smooth-surfaced microsomes might be less fluid than those of the rough-surfaced microsomes. The phospholipid composition was not significantly different among the three submicrosomal membranes (Table IV). This kind of proportion of phospholipid may be a basic requirement for the biological functions of the membranes. In our previous work (28) the content of phosphatidylcholine was observed to decrease in the order of nerve cell perikarya, gray matter, and white matter. The present study has shown that more phosphatidylcholine is contained in microsomal membranes than in nerve cell perikarya, suggesting that this lipid exists in greater quantities in the microsomal membranes than in the membranes of other cellular elements.
The results of SDS gel electrophoresis of the light and heavy smooth-surfaced microsomal membrane proteins revealed further dissimilarities in the chemical compositions of these microsomal membranes of the brain. These dissimilarities in protein components may be related to the differences in the lipid components discussed above.
Thus, in the present work we have determined the characteristic biochemical compositions of the individual submicrosomal membranes of the brain. In particular, the specific distribution of glycolipids has been determined. To clarify the biological significance of the lipids in submicrosomal membranes, further experiments are in progress.
The valuable advice and assistance of Mr. J. Egawa in the electron microscope studies and the excellent technical assistance of Miss H. Kojima in the spectrodensitometric measurement of glycolipids are greatly appreciated.
A part of this work was presented at the Fourth International Meeting of the International Society for Neurochemistry, Tokyo, 1973.

Received for publication 27 December 1973, and in revised form 18 July 1974.
Laminarin from Irish Brown Seaweeds Ascophyllum nodosum and Laminaria hyperborea: Ultrasound Assisted Extraction, Characterization and Bioactivity
Ultrasound assisted extraction (UAE), purification, characterization and antioxidant activity of laminarin from the Irish brown seaweeds Ascophyllum nodosum and Laminaria hyperborea were investigated. UAE was carried out using 60% ultrasonic power amplitude and 0.1 M hydrochloric acid for 15 min. Separately, solid-liquid extraction was carried out in an orbital shaker using 0.1 M hydrochloric acid at 70 °C for 2.5 h. UAE with hydrochloric acid resulted in the highest concentration of laminarin, 5.82% and 6.24% on a dry weight basis from A. nodosum and L. hyperborea, respectively. Purification of all extracts was carried out using molecular weight cut-off dialysis at 10 kDa. Characterization of the laminarin fraction was carried out using matrix assisted laser desorption/ionization time-of-flight mass spectrometry. Antioxidant activity of A. nodosum and L. hyperborea extracts showed 2,2-diphenyl-1-picrylhydrazyl (DPPH) inhibition levels of 93.23% and 87.57%, respectively. Moreover, these extracts inhibited the bacterial growth of Staphylococcus aureus, Listeria monocytogenes, Escherichia coli and Salmonella typhimurium.
Laminarin and Phenolics Content
In this study two seaweed species harvested from the west coast of Ireland were selected for the extraction of laminarin. Laminarin was extracted using ultrasound assisted extraction and conventional solid-liquid extraction using water and 0.1 M HCl (Table 1). The highest laminarin content was measured in the extracts of L. hyperborea and A. nodosum obtained using ultrasound and 0.1 M HCl. In previous studies, 0.1 M HCl was also found to give higher extraction yields compared to water [14]. Moreover, HCl at higher temperatures is more effective than at room temperature for laminarin extraction [7]. Laminarin content varies with species, with L. hyperborea having a higher level of laminarin compared to A. nodosum. Laminarin content also varies with factors such as harvesting season and geographical location. Laminarin is absent during the period of fast growth in spring, but in autumn and winter it may represent up to 35% of the dried weight of the fronds [15]. Ultrasound was found to give higher extraction yields of laminarin. High power ultrasound treatment for 15 min achieved a higher extraction yield than conventional solid-liquid extraction for both seaweeds investigated. The laminarin yield for L. hyperborea extracted using ultrasound was 36.97% and 91.76% higher using water and 0.1 M HCl solvents, respectively, whereas for A. nodosum the laminarin yield was 15.02% and 35.62% higher using water and 0.1 M HCl solvents, respectively. This can be attributed to the bubble cavitation phenomena generated by ultrasound waves. The implosion of cavitation bubbles generates macroturbulence, high-velocity interparticle collisions, and perturbations in microporous particles of the biomass. Cavitation near liquid-solid interfaces directs a fast-moving stream of liquid through the cavity at the surface. Impingement by these microjets results in surface peeling, erosion, and particle breakdown, facilitating the release of bioactive compounds and other components from the biological matrix. These effects increase the efficiency of extraction by increasing mass transfer by eddy and internal diffusion mechanisms [16]. Ultrasound assisted extraction (UAE) has also been successfully used for extraction of A. nodosum bioactive compounds including phenolic compounds, fucose and uronic acids [17,18]. The phenolic content was higher in L. hyperborea extracts. Water was demonstrated to be a better solvent than HCl for extraction of phenolics. This may be attributed to the fact that acid solvents at a temperature of 70 °C may be detrimental to phenolic compounds, leading to a lower content in acid extracts. The highest contents of phenolics observed in L. hyperborea and A. nodosum were 0.365 mg PGE/gdb and 0.166 mg PGE/gdb, respectively.
Characterization of Extracts
Laminarin rich extracts were analyzed for their molecular weight distribution using Matrix Assisted Laser Desorption Ionization Quadrupole Time-of-Flight Mass Spectrometry (MALDI-Q-TOF-MS). Figure 1A,B show the mass spectra obtained in negative ion mode for a laminarin standard and an ultrasound assisted extract of L. hyperborea, respectively. The degree of polymerization (DP) for each laminarin peak is shown in bracketed bold numbers above the corresponding m/z peaks. Four extracts were evaluated to investigate the effect of ultrasound and seaweed species on extraction of laminarin. Since M-chains and G-chains in the native laminarins differ by only 2 mass units, they cannot be distinguished from high molecular weight profiles [11]. Laminarin extracted using UAE from L. hyperborea had higher molecular weight laminarins ranging from 3242 to 5052 Da (corresponding to DP20 to DP31) compared to the other extracts measured. Meanwhile, L. hyperborea obtained with conventional extraction yielded laminarins from DP20 to DP24 only. This demonstrates that ultrasound assisted extraction is more efficient than solid liquid extraction for extracting higher molecular weight laminarins. This can be attributed to the bubble cavitation phenomena releasing high molecular weight laminarins from L. hyperborea. Similar results were reported for extraction of high molecular weight phlorotannins from A. nodosum using ultrasound assisted extraction [18].
Ultrasound treated A. nodosum extracts yielded laminarins from DP25 to DP30 (molecular weights in the range of 4075 to 4884 Da). A. nodosum extracts did not yield lower molecular weight laminarins, in contrast to L. hyperborea extracts. Moreover, L. hyperborea extracts had a wider range of degrees of polymerization compared to A. nodosum extracts.
Bioactivities of Laminarin Rich Extract
Laminarin has been found to possess various biological activities. The antioxidant and antimicrobial activities of the crude laminarin extract are shown in Table 2. The % DPPH (2,2-diphenyl-1-picrylhydrazyl) inhibition of the seaweed extracts was found to be highest in ultrasound treated extracts using acid solvent for L. hyperborea (87.58%) and A. nodosum (93.24%). O'Sullivan et al. [19] also found that A. nodosum was one of the most effective extracts for scavenging DPPH radicals. The extracts obtained with acid were found to have higher antioxidant activity than those obtained using water solvent. Furthermore, the authors reported that A. nodosum with 0.45 g/100 g (gallic acid equivalent) of phenolic content exhibited antioxidant activity of 25.6% DPPH inhibition [19]. Balboa et al. [20] have extensively reviewed antioxidant activities in brown seaweeds and reported that phenolic compounds possess antioxidant properties. The extracts were tested for inhibition against two Gram positive (Staphylococcus aureus and Listeria monocytogenes) and two Gram negative (Escherichia coli and Salmonella typhimurium) bacterial strains. All extracts of L. hyperborea were found to inhibit the growth of all the micro-organisms tested. The acid extracts had better inhibition of bacterial growth compared to water extracts. However, A. nodosum extracts only inhibited the growth of S. typhimurium completely. Only the acid extract of A. nodosum proved effective in inhibiting the bacterial growth of all species. The higher phenolic content and antioxidant activity of L. hyperborea extracts may enhance the antimicrobial efficacy of these extracts, as many phenolic compounds have been shown to possess antimicrobial properties [21]. Laminarin rich extracts prepared using ultrasound and acid solvents had minimum inhibitory concentrations (MIC) of 13.1 mg/mL for E. coli and S. typhimurium, and 6.6 mg/mL and 3.3 mg/mL for S. aureus and L. monocytogenes, respectively. This is the first reported study demonstrating that laminarin rich extracts possess microbial inhibitory activity. There are a number of reports of seaweed extracts such as Sargassum polyophyllum, Sargassum flavellum, Padina australis and Sargassum binderi possessing antimicrobial activity [22]. However, ultrasound can be a novel alternative to high energy consuming traditional solid-liquid extraction methods. These seaweed extracts have potential application in the preparation of antimicrobial products, for example a hydrogel wound dressing incorporating an extract of the seaweed Polysiphonia lanosa [23].
Seaweed Samples
The brown seaweeds A. nodosum and L. hyperborea were harvested from Finavarra, Co. Clare, Ireland in May 2014. Seaweed samples were washed thoroughly with fresh water to remove epiphytes and salt. Fresh seaweed samples were freeze dried. Dried seaweed was powdered using a hammer mill. Samples were stored at 4 °C prior to extraction studies.
Ultrasound Assisted Extraction
Ten grams of A. nodosum and L. hyperborea powders were extracted using 200 mL of solvent (distilled water or 0.1 M HCl). HCl was used as the solvent for extraction based on preliminary studies and previously reported studies for the extraction of laminarin [24][25][26]. A 750 W ultrasonic processor (VC 750, Sonics and Materials Inc., Newtown, CT, USA) with a 13 mm diameter probe and a constant frequency of 20 kHz was used. Ultrasonic energy was controlled by setting the amplitude of the sonicator probe. Ultrasound treatment was applied for 15 min at an amplitude level of 60%, which corresponds to an ultrasonic intensity of 35.61 W cm⁻². The ultrasonic power dissipated was calculated at each amplitude level, with temperature (T) recorded as a function of time (t) under adiabatic conditions using a T-type thermocouple. From the temperature versus time data, the initial temperature rise dT/dt was determined by polynomial curve fitting. The ultrasonic power (P) was determined using Equation (1), where dT/dt is the change in temperature over time (°C s⁻¹), Cp is the specific heat of water (4.18 kJ kg⁻¹ °C⁻¹), and m is the mass (kg):

P = m Cp (dT/dt)    (1)

The ultrasonic intensity (UI, W cm⁻²) dissipated from an ultrasonic probe tip with diameter D (cm) is given by Equation (2):

UI = 4P / (πD²)    (2)

The traditional solid-liquid method of extraction involved stirring at 70 °C for 2.5 h using distilled water and 0.1 M HCl as solvents, and no ultrasound pretreatment was employed. The extracted samples were then centrifuged at 9000 rpm for 30 min. The supernatant was separated and precipitated with ethanol overnight at 4 °C. The precipitated extract was freeze dried and stored at −20 °C for further analysis. The different methods of extraction carried out in this experiment are listed in Table 3. The extraction yield (%) was calculated by measuring the mass of freeze dried extract over the initial mass of the sample.
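A short sketch of this calorimetric calculation, following Equations (1) and (2) as reconstructed above, is given below; the temperature readings and the 0.2 kg (200 mL) solvent mass are assumed example values, not the authors' raw data.

```python
# Ultrasonic power from the initial temperature rise of the solvent (Eq. 1),
# then intensity from the probe-tip area (Eq. 2).

import numpy as np

t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])          # time (s)
T = np.array([20.00, 20.26, 20.50, 20.72, 20.92])   # temperature (degC)

coeffs = np.polyfit(t, T, 2)                 # polynomial fit of T(t)
dTdt0 = np.polyval(np.polyder(coeffs), 0.0)  # initial slope dT/dt (degC/s)

cp = 4180.0                                  # specific heat of water (J kg-1 degC-1)
m = 0.2                                      # mass of solvent (kg)
P = m * cp * dTdt0                           # Equation (1): power (W)

D = 1.3                                      # probe tip diameter (cm)
UI = 4.0 * P / (np.pi * D**2)                # Equation (2): intensity (W cm-2)
print(round(P, 1), round(UI, 1))             # ~45 W, ~34 W cm-2 (cf. 35.61 above)
```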
Laminarin Assay
Laminarin in the extract was quantified by measuring the glucose concentration released by the enzymatic hydrolysis of laminarin [27]. A 100 μL sample volume was incubated with 100 μL of β-glucosidase enzyme at 40 °C for 15 min. After incubation, 3 mL of GOPOD (glucose oxidase/peroxidase) reagent was added. This mixture was incubated at 40 °C for 20 min. Finally, the absorbance of the sample was measured at 510 nm with a UV-VIS spectrophotometer (UV3100PC, VWR International). Laminarin produced from Laminaria digitata was used as a standard.
Total Phenolic Content
Total phenolic content was determined using the method of Wang et al. [28]. Folin-Ciocalteau reagent was diluted with distilled water at a ratio of 1:10. A 100 μL aliquot of extract was mixed with 100 μL of the diluted Folin-Ciocalteau reagent, 100 μL of sodium bicarbonate (20%, w/v) was added to the mixture, and the solution was diluted to 1000 μL with distilled water. This solution was maintained at room temperature for 30 min and the absorbance was measured at 735 nm with a UV-VIS spectrophotometer (UV3100PC, VWR International). Results were expressed as mg phloroglucinol equivalents (PGE)/gdb.
Matrix Assisted Laser Desorption Ionization Quadrupole Time-of-Flight Mass Spectrometry (MALDI-Q-TOF-MS)
Mass spectrometry of samples was performed using a MALDI-Q-TOF mass spectrometer (Waters Corporation, Milford, MA, USA). Prior to analysis, samples were dialyzed (molecular weight cut-off of 10 kDa) in distilled water overnight. Aliquots of 5 μL of sample were mixed with 5 μL of the matrix sinapinic acid. Finally, 1-2 μL of sample was plated on a 96 well stainless steel MALDI plate. Samples were allowed to dry and co-crystallize with the matrix at room temperature and the plate was loaded into the MALDI-Q-TOF mass spectrometer. Mass spectral data were obtained in negative-ion mode for a mass range of m/z 1000 to m/z 10,000.
Antioxidant Activity-DPPH Method
The DPPH free radical scavenging inhibition assay was used to determine the antioxidant capacity of the extracted samples [29]. An extract sample of 200 μL was added to 800 μL of 60 μM DPPH in ethanol, and the decrease in absorbance was monitored at 517 nm with a UV-VIS spectrophotometer (UV3100PC, VWR International) after 30 min of incubation in the dark. The readings were compared with the controls, which contained 200 μL of water instead of the seaweed extract. The percent inhibition was calculated as the decrease in absorbance relative to the control: % inhibition = [(A_control − A_sample)/A_control] × 100.
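As a one-line check of this arithmetic (the absorbance readings below are invented, not measured values from this study):

```python
# Percent DPPH inhibition from control and sample absorbances at 517 nm.

def dpph_inhibition(a_control, a_sample):
    """% inhibition after 30 min incubation in the dark."""
    return (a_control - a_sample) / a_control * 100.0

print(dpph_inhibition(0.82, 0.055))   # ~93%, of the order reported for A. nodosum
```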
Bacterial Strains and Culture Conditions
Seaweed extracts were tested for antimicrobial activity against the following strains of bacteria: Staphylococcus aureus NCTC 8178, Escherichia coli DSM 1103, Listeria monocytogenes NCTC 11994 and Salmonella typhimurium SARB 65. The strains were stored on ceramic beads in glycerol at −80 °C prior to use. A bead of each strain was streaked on a nutrient agar plate and incubated for 18 h at 37 °C. A single colony was removed from each plate and inoculated into tubes containing 25 mL of sterile Mueller-Hinton Broth (MHB) and incubated for 22 h at 37 °C. Overnight cultures were vortexed and aliquots diluted appropriately in sterile MHB to produce solutions containing log10 6.0 ± 0.5 cells/mL. Cell numbers were confirmed by plate counting. The antibiotic gentamicin (0.2 mg/mL) was used as a standard for negative growth.
Minimum Inhibitory Concentration (MIC) Assay
The MIC of each seaweed extract was determined using the previously described microtitre method of Kenny, Smyth, Walsh, Kelleher, Hewage and Brunton [21]. Each extract (2 mg/mL) was prepared by dissolving the material in distilled water; this was added to the first well of the plate, followed by a serial dilution across the plate. Iodonitrotetrazolium chloride (INT) dye was used to identify microbial growth [30]. The MIC of each extract against a bacterial strain was determined as the lowest sample concentration at which no pink color appeared. This process was repeated in triplicate for each bacterial strain to ensure reproducibility.
Conclusions
In this study, water and acid extracts from L. hyperborea and A. nodosum were obtained using ultrasound assisted extraction and solid-liquid extraction. The extracts were purified to obtain laminarin rich extracts. The L. hyperborea extracts contained higher contents of laminarin. The overall laminarin content in all the extracts was low, which may be attributed to seasonal and geographical factors. Ultrasound was demonstrated to be a more efficient method of extraction than solid-liquid extraction based on the laminarin content and molecular weight distribution observed in the extracts. The laminarin rich extracts were also studied for biological activities including antioxidant and antimicrobial activity. This study is the first report of laminarin rich extracts possessing antimicrobial activity. The use of laminarin as a nutraceutical ingredient should be further investigated due to its dietary fiber properties, in addition to the antioxidant and antimicrobial activities reported in this paper.
Rain research with disdrometers: a bibliometric review
Abstract. This study analyses the research on disdrometers based on published studies. To do so, a wide database of bibliographic references has been used: the Web of Science (published by Thomson Reuters). The search was carried out for all of the articles whose "TOPIC" was disdrometer. The more than 300 articles found were analysed according to various criteria: countries with research using disdrometers; publication dates; evolution of the number of articles; concepts studied and research lines followed in each article; and finally, a bibliometric analysis of the more than 60 journals where these articles have been published. Since 1963, there has been an increase in the number of articles published on disdrometers, which in the last 20 yr has been more than ten times higher than the increase in the number of articles on meteorology.
Introduction
Rain is a natural phenomenon that has always interested humanity and which has been the subject of numerous research studies (Ciach and Krajewski, 2006; Krajewski et al., 2006; Stephens and Kummerow, 2007). However, the detailed study of its most representative physical parameters and the modelling of its behaviour are still difficult today due to the localised nature of precipitation, and the impossibility of comparing two equal events (Ciach et al., 2007).
At present, global climate studies are focusing their attention on rainfall distributions, seeking to identify how they can affect the current and future weather in each specific location (Bartley et al., 2006). The consequences of convective episodes have also been studied, both in terms of the soil erosion they cause (Friedel et al., 2006; Agnese et al., 2006) and interference in communications (Hitschfeld and Bordan, 1954), and in hydrological studies of reservoirs and dams, through the return periods (Hennessy et al., 1997).

By studying the characteristic physical parameters of rainfall, it is possible to obtain a physical and dynamic description of rain, and therefore achieve a greater understanding of the impact of its intensity and energy (Le Bissonnais, 1996). Efforts to increase the precision of data on raindrop size have progressed in recent decades: Nueberger (1942) studied European research in terms of the comparison of raindrop sizes and their corresponding instrumentation.
The first attempts to measure raindrop sizes date from 1895, when Wiesner (1895) published the description of a method consisting of using a sheet of absorbent paper covered with a water-soluble dye, which was exposed to rainfall for a few seconds. After patting the paper, the drops left permanent marks on it because of the dye. He considered that the diameter of the marks would only depend on the size of the raindrop, although in reality they depend on the thickness of the paper and the velocity at which the drop falls on the absorbent layer. Also, the humidity of the paper affects the measurement of the diameters. One major problem affecting this method is that when large sized drops hit the paper, they can break up and spatter over it, making it impossible to determine their size. With some modifications, such as automating the analysis using software, its objectivity could be improved, adding a further benefit to its low cost and ease of use (Cerdà, 1997; Cruvinel et al., 1999; Salles and Poesen, 1999).
Another traditional way of measuring raindrop size is the flour method, which today has been replaced by using plaster (Ries et al., 2009). This was originally presented by Bentley (1904) and subsequently modified by Laws and Parsons (1943). The process consists of allowing the raindrop to fall on a layer of uncompacted flour between two or three centimetres deep. The layer is held in a surface container with a diameter of 10 centimetres, which is generally exposed to the rain for a few seconds. The raindrops are retained in this two or three centimetre deep layer, and are not collected until a hard, dry paste is formed as a result of the raindrop interacting with the flour. These dough pellets are then removed, classified according to their sizes and photographed for later analysis. The process of classifying the raindrops by size consists of using a series of standard sieves to group the dough pellets into different sizes.
The instrument has to be calibrated, as the real size of the raindrop does not correspond exactly to the size of the dough pellet formed after the impact. This process consists of allowing drops of a known size to fall onto the instrument in order to calculate their correspondence with the dough pellet. The disadvantages of this method are that it is very laborious, overlapping can occur (several drops can fall onto the same point), and it is difficult to create the smallest drops for calibration purposes.

Jones (1959) developed another method known as the raindrop camera, which consists of two cameras placed close together that are synchronised to take two photos simultaneously of the raindrop from two perpendicular angles (east-west or north-south).
This method makes it possible to obtain a three-dimensional image of the shape of the raindrops, and from there to calculate the measurement of the raindrops, although with some complications in the calculations which make them difficult to compare with other studies.
Until the development of the Illinois camera (Mueller, 1966) it was not possible to carry out an extensive study of raindrop sizes in the field. The data from the camera, gathered in a series of locations in the USA and Indonesia, have been used in a considerable amount of erosion research (Kinnell et al., 1996; McIsaac et al., 1990; Fernandez-Raga et al., 2010). The camera is capable of capturing raindrops in a volume of 1 m³ of air every 10 s. However, apart from being very expensive, it suffers from superimposition problems. Almost simultaneously, Clardy and Tolbert (1961) referred to the disdrometer for the first time as an instrument that could possibly be used for measuring the sizes and velocities of raindrops. The disdrometer they described consisted of a phototube that captured the number of raindrops passing through the sampling region, in each range of sizes. It was found that the distribution of sizes quite closely fit the distribution hypothesised by Laws and Parsons (1943).

One of the most widely known instruments is the disdrometer based on microphonic measurements. It was developed by Joss and Waldvogel (1967). A microphone sensor covering an area of 50 cm² converts the vertical momentum of the impact of the raindrop into electrical impulses, which are processed and shown on a screen or printed in tables that show the measurements obtained. This system assumes that the raindrop is spherical, and that the descent speed is the terminal velocity. The instrument detects raindrops with diameters of between 0.3 and 5 mm, and can record a maximum of 190 impacts per second in the sampling area.
Some years later, the first electromagnetic-type systems appeared, also known as optical systems, in which the electrical pulse is not produced by a vertical impact, but instead by the raindrop passing through a laser beam. Optical disdrometers have evolved constantly since the 1980s and 1990s, becoming more sensitive, more precise, and more capable of distinguishing between raindrop sizes by using different wavelengths and improved techniques (Ries et al., 2009). A wide range of optical measurement strategies have been developed (Knollenberg, 1976; Donnadieu, 1980; Illingworth and Stevens, 1987; Illingworth et al., 1990). For example, the spectrometer referred to as the Particle Measuring System (PMS) (Knollenberg, 1970; Joe and List, 1987), or the Ground Based Precipitation Probe (PMI Model GBPP-100), determines the size of the particles that cross through a laser beam based on the shadow cast on a series of photodiodes arranged in a line. The optical spectro-pluviometer (OSP) takes measurements from raindrops falling through a parallel beam of ultraviolet light (Hauser et al., 1984). Also, the Laser Precipitation Monitor (Thies Clima), which emits a laser beam at 785 nm, determines the fraction of the electromagnetic energy intercepted by the raindrop. These models also offer the possibility of measuring the descent velocity of the raindrop, as with the Parsivel Laser Optical Disdrometer.
The need to have samples of the distribution of raindrop sizes in a large volume has been resolved by using radar measurements of the velocity spectra generated by the raindrops scattered in a volume above the sensor (Rogers, 1967). One method of this type is the Precipitation Occurrence Sensor System (POSS), which is a static X-band radar that points upwards and determines the reflectivity of the raindrops when they are falling a few centimetres above the device (Sheppard, 1990). Another contribution to measuring raindrops is the Micro Rain Radar, an instrument that is halfway between a disdrometer and a radar system. It combines the reflectivity measurement at different heights with the descent velocity of the raindrops by the Doppler effect (Leijnse and Uijlenhoet, 2010). Some examples of disdrometers are shown in Fig. 1.

The increasing proliferation of research carried out with the help of disdrometers in recent decades invites an evaluation of the literature in order to obtain a perspective that serves to guide future studies. Specifically, the aim of this article is to examine the evolution of studies carried out with disdrometers based on the documents that have been published, and to analyse the characteristics of these publications.
Material and methods
The Web of Science (WOS) has been used for this study. This is one of the most prestigious bibliographic databases for consultation purposes. The main appeal of this database is the versatility obtained from the combination of data offered. The general search for a topic can be refined by selecting publications by their area of scientific interest, i.e. the branch of science with which the study is concerned, or by the type of document (articles, abstracts, proceedings, letters, etc.). However, the search can be refined much more by entering a specific author, journal or even a conference where it was published. Searches can also be made by selecting the country and language in which the article was written. Naturally, the documents can be selected chronologically, limiting the search period.
All of these possibilities have made it possible to create a database on the use of disdrometers, and to carry out a detailed analysis of the evolution of authors, ideas, journals and countries over time. Data were consulted up to 7 January 2011. This database contains more than 3.8 × 10⁷ scientific documents included in the Science Citation Index Expanded (SCI-EXPANDED) from 1945 to the present, of which 381 documents include the word disdrometer*, where the asterisk is a wildcard for any letter or group of letters (this was done mainly to include the plural). Out of all of these publications, according to the general category which includes the theme of the publication (the "General Categories"), this group of 381 publications can be divided into those that cover themes of Science and Technology, which represent the vast majority, with 365 publications, and those of wider social interest, called Social Sciences, with a total of 15. This last group of publications uses the disdrometer in order to contextualise their research, without so much scientific precision.
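To make the wildcard query concrete, the sketch below applies an equivalent pattern to a local list of records. The record structure is an assumption for illustration; WOS itself is queried through its own web interface, not with this code.

```python
# Wildcard topic match equivalent to "disdrometer*": the stem followed by any
# run of word characters, so both singular and plural forms are captured.

import re

PATTERN = re.compile(r"\bdisdrometer\w*", re.IGNORECASE)

def topic_matches(record):
    """True if the topic fields of a record mention disdrometer(s)."""
    text = " ".join([record.get("title", ""), record.get("abstract", "")])
    return bool(PATTERN.search(text))

records = [{"title": "Calibration of optical disdrometers", "abstract": "..."},
           {"title": "Rain gauge networks", "abstract": "no match here"}]
hits = [r for r in records if topic_matches(r)]   # -> first record only
```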
Results
Until 7 January 2011, the database contains a total of 381 publications that contain the word disdrometer*, of which the vast majority are articles (294), although there are also presentations at proceedings (79), abstracts in meetings (6), letters (4) and notes (2). These publications were written by a total of 214 authors, from 26 countries. A total of 93 of these documents contain the word disdrometer in their title, which indicates the relevance of the device in their research.
The subject matter covered by these articles belongs to the Subject Category of Meteorology and Atmospheric Sciences, which contains 280 of the previously mentioned publications. This is a much higher number than in the rest of the areas in which studies using disdrometers have been published. For example, it is more than twice the 127 in Geology, which is the closest Subject Category to it; it is 2.5 times the number of articles published in the area of Engineering, with 112 publications, and 7 times more than the publications corresponding to Geochemistry and Geophysics, Remote Sensing or Telecommunications, with 40, 39 and 35 documents respectively.
Evolution
After its appearance in the 1960s, the disdrometer was first described by Clardy and Tolbert (1961) (curiously, this article is not included in the WOS database). The evolution of the publications can be seen in Fig. 2. The first article from the WOS, the only one in the 1960s, is by Joss and Waldvogel (1967). In the 1970s, 13 articles were published (Fig. 3), which mainly offer a more detailed description of disdrometers, how they work and improvements in their design. The 1980s were characterised by a general downturn with respect to the number of publications in previous years, with only 6 articles. However, there is an emphasis on a more detailed search for applications of the disdrometer, as comparisons had begun with radars, and studies applied to zones with interesting characteristics, such as tropical storms. Also, the articles largely analysed the evolution of reflectivity as a characteristic parameter of different types of rain.
Yet it would not be until the 1990s that the field of applications of the disdrometer was truly extended, with a much larger number of publications: 64. Topics as varied as the oscillations and polarisation of raindrops, modelling and the distribution of raindrop sizes were studied. This decade also marked the appearance of less theoretical applications, such as erosion studies, studies applied to evaluating risks caused by extreme weather types, and even the economic implications, as demonstrated by studies on the attenuation and dispersion of telecommunications waves.
From 2001 onwards, the number of publications on disdrometers grew exponentially, to the point of 296 documents being published as of December 2010. In reality, all of these articles aim to complete and develop the areas that had been opened previously, although the use of the disdrometer is clear and well established for its scientific application in numerous fields. In terms of the number of publications, the most frequent are articles based on reflectivity, comparisons with radar and stations, modelling, erosion, wave attenuation, and distribution of drop sizes.
Source of the publications
A geographic analysis shows that research with disdrometers has been carried out in 25 countries, with the majority – 58.7 % of the total – published in the USA. In general, it can be seen that the geographic distribution of the use of the disdrometer is closely associated with the problems caused by an extreme abundance or scarcity of rainfall. Outside of the USA, the Mediterranean countries (France, Italy, Spain, Greece and Israel) represent nearly 21 % of the total scientific production. Central Europe (Germany, the Netherlands, Switzerland, Austria and Poland) have also dedicated efforts to research in this area, with publications from these countries representing more than 14 % of the total.
Other countries with an important volume of publications are Japan (with 7.4 %), Canada (with 6.6 %) and England (with 4.1 %). The case of India is of special interest due to the frequent torrential episodes in its climate: its publications on disdrometers represent 4.4 % of the world total. Australia and New Zealand have published close to 4 % of the total, and finally other countries in Asia, Africa and the Americas offer minimal percentages. Figure 4 shows the location of all of the countries which have published scientific articles on disdrometers, with the percentages in comparison to the world total. Before ending this section, it should be noted that the total of the percentages shown in the previous paragraphs is not 100, as there are collaborations between different countries. An article with contributors from two or more countries is assigned to each of the countries represented.
The pioneering countries – those that published the first documents on disdrometers – are shown in Fig. 5, which also shows the year in which they were first published and the number of articles prior to 1980.
Main authors and citations
Out of the 612 authors who have written the publications included in this database, the author who has written the most on the disdrometer is Bringi, with a total of 29 articles. His career was based on comparing data from the disdrometer with different types of radar, and even with other instruments and satellite data (Fig. 6). His articles consist of research contrasting data with the disdrometer, the analysis of reflectivities, studies of drop size distributions, developing rainfall models and even measurements of drop sizes in wind tunnels. However, and without detracting any merit from Bringi's enormous production in terms of articles, a study of the relevance of the articles published to date on disdrometers indicates that Tokay is a leading researcher, found in 5 of the 10 most cited articles, with his publications occupying first, second, fourth, seventh and tenth place. The article that has been cited the most once again corresponds to Tokay and Short (1996), an article cited 177 times on the drop spectrum in rainfall of a stratiform origin versus that of a convective origin (classification of a tropical storm as convective or stratiform according to its drop size distribution). This is surprising, as the most frequently cited articles, each with 60 or 70 citations, are in the majority of cases on the distribution of drop sizes and their comparison with radar; the most widely cited article thus introduces a new subject among the most cited publications. It also has the highest citation index of all of the documents, with more than 12.5 citations per year. Other innovative subjects in this list of the most cited articles are in eleventh place, the calculation of kinetic energy by Van Dijk et al. (2002), and in sixteenth place, corresponding to studies on irregularities in drop shapes.
The number of publications has clearly increased since the 1990s, revealed in the growth in the number of experiments. As would be expected, an increase has also been seen in the number of citations, as shown in Fig. 7, although in this case with a delay of 5 yr in the start of this growth with regard to the increase in the number of publications.
Journals
With regard to specific journals, these articles have mainly been published in two: The Journal of Atmospheric and Oceanic Technology and The Journal of Applied Meteorology, with 17.8 and 19.1 % of the publications respectively. In these journals, the number of articles published between the 1990s and the first decade of the twenty-first century increased by 250 % (Fig. 8).
On examining the citations, we find that these two journals contain the majority: The Journal of Applied Meteorology or its successor, The Journal of Applied Meteorology and Climatology, representing 25.1 % of all citations, followed by The Journal of Atmospheric and Oceanic Technology with 21.4 %. Also, the 3 most widely cited articles are publications from The Journal of Applied Meteorology, and of the 10 most cited articles, 5 are from this same journal. Finally, it must be noted that even though the WOS database is a high-quality source and includes the most important journals in the field, there are many other publications that are not included in this database: theses, books, articles with a non-international character, non-periodical publications, etc. This paper does not analyse all of the existing articles on disdrometers, but it does include a sufficiently representative sample of the articles with the highest quality standards, enabling us to draw general conclusions on the trend in this particular field.
Evolution of the use of disdrometers
During the first decade, the applications of the disdrometer were restricted to the design and improvement of the instrument (Joss and Waldvogel, 1967), which were gradually developed during the 1970s. The majority of the articles from this period dealt with comparisons between two disdrometers, or between disdrometers and rain gauges. But it was not until the 1980s that work began on comparing radar data with surface data obtained by disdrometers, with applications in the study of tropical storms, and the evolution of reflectivity. The 1990s saw new progress in studies based on oscillations of small drops, and the development of the first theoretical models of the distribution of drop sizes, with the help of polarised radars. However, the real development of applications other than those strictly applied to meteorology appeared at the end of the 1990s and in the early twenty-first century. Different fields of research have gradually taken shape in which the information detected by this instrument may be crucial. An example of this is the field of erosion, in which the disdrometer can be highly useful, detecting drop sizes and intensities with high erosive power; it has also been used in hydrological models, and even in hazard models for different types of storms or in the attenuation of telecommunication signals.
Keywords and topics
A review has been carried out of the keywords that appear the most in the documents that have been found (Table 2). Logically, the vast majority of the articles – 308 in total – are defined by the keyword "Drop size distribution", which is what the disdrometer measures. This keyword is followed, to a significantly lesser extent, by the keyword "radar", for two main reasons: firstly, that radar is the most widely used instrument in precipitation studies and is the instrument of reference for cloud behaviour studies, as well as their subsequent evolution; and secondly, because the disdrometer is a measurement instrument located on the surface of the terrain, it provides complementary data to those provided by radar. Other keywords that are logically found in large numbers are "precipitation" or "rainfall and disdrometer". From here, the keywords indicate other areas in which work has been carried out with disdrometers, such as studying drop shape, studying cloud classification, climatic models and precipitation parameters in general, and their classification. In some articles we find words that refer to specific instruments, such as the Waldvogel or bidimensional disdrometer, and words that refer to the study of propagation and soil erosion. It is interesting to note that there are 63 articles that refer to the use of Joss-Waldvogel disdrometers, compared to 54 on optical disdrometers.
By carrying out a new search for articles that used two words simultaneously in the TOPIC field (one of which was disdrometer*), it was possible to study the evolution of the contents, by analysing from which year the disdrometer was used in this field of research. The most frequently used combination of words was "radar + disdrometer*", as would be expected. These two words have been used jointly since 1972, and since then they have continued to be used in 271 of the 366 articles that contain "disdrometer" as their topic. The scientific interest they generate is only comparable to the combination of "rainfall + disdrometer*", found in 241 publications. The success of these combinations lies in the fact that the purpose of the disdrometer is similar to a degree to that of radar: to be able to quantify and study rainfall phenomena.
By 1976 two new words are combined that precisely indicate the objective that was sought with the invention of the disdrometer: "raindrop + disdrometer*", which has been repeated 192 times since it first appeared, together with the combination "reflectivity + disdrometer*", which appeared slightly later (in 1978), and is found in 187 articles.
In the 1980s, there was little scientific innovation in this field. The most original aspect lay in defining a geographic or climatic area in which to study the combinations already discussed. Therefore, the combination "tropical + disdrometer*" is the most widely repeated (on 77 occasions after 1987).
From the 1990s onwards (more specifically, since 1991), new research concepts have been dealt with: "drop size distribution* + disdrometer*" or "break up + disdrometer*" have respectively appeared on 194 and 20 occasions since then. Also, new horizons were opened in the uses of the disdrometer, such as measurements for calculating vertical velocity, modelling and measuring wind speeds, with 30, 121 and 59 articles respectively. A total of 78 articles have been published on the attenuation of electromagnetic waves in the field of telecommunications. Subsequently, 44 articles have been published since 1999 with the topics "propagation + disdrometer*". In the case of erosion, a total of 9 articles have been published for "erosion + disdrometer*", the first in 1993. In all likelihood this group could also contain the articles on "kinetic energy + disdrometer", consisting of a further 19 articles since 1996. A final point of interest is the classification of rainfall, as demonstrated by the 87 articles whose topic contains "convective + disdrometer" and the 83 for "stratiform + disdrometer*", which began to be points of interest in 1993 and 1996 respectively.
The main areas researched using disdrometers are shown in Table 3. In summary, the largest number refer to meteorological research, with 226 articles, a figure that falls to one third in the next area of interest (engineering and geology). More specific areas appear only later: the use of models does not appear until 1991, erosion from 1998, hydrology from 1999 and wave attenuation from 2003.
Conclusions
Over the years the technique for measuring raindrop size and velocity has improved greatly, from manual and quite unreliable methods to automatic methods – optical disdrometers or devices employing the moments method. On studying the publications included in the WOS, we have found that 47 % of the publications on disdrometers appear in only 2 journals: The Journal of Atmospheric and Oceanic Technology and The Journal of Applied Meteorology. These 2 journals also include most of the citations. Proportionally, the most cited articles have also been published in The Journal of Atmospheric and Oceanic Technology and The Journal of Applied Meteorology. The author with the highest number of articles using disdrometers is Bringi, with 28 articles.
However, Tokay is the author of the articles with the highest impact.
The main fields that make use of disdrometers are, above all, meteorology, hydrology, rain parameter modelling and soil erosion. The disdrometer is a device which is being used increasingly in many different countries, with up to 40 articles published in 2009. With these precedents, it is expected that research with disdrometers will continue to increase, as the applications for the data provided by these measurement instruments have also multiplied.
Fig. 3. Ten-yearly evolution of the number of documents on disdrometers and the number of documents on Meteorology and Atmospheric Sciences. The numbers in the upper border indicate the ratio between both in each decade.
Table 1. Articles cited more than 30 times, and the number of times they are cited in the main journals.
Table 2. Keywords that have been used more than 10 times, and the number of articles in which they appear. | 2018-12-12T06:21:10.375Z | 2011-09-23T00:00:00.000 | {
"year": 2011,
"sha1": "38ea0a8b1de741fe8d278a0c5c4bdc6780d76c99",
"oa_license": "CCBY",
"oa_url": "https://amt.copernicus.org/preprints/4/6041/2011/amtd-4-6041-2011.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "38ea0a8b1de741fe8d278a0c5c4bdc6780d76c99",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"History"
]
} |
258495717 | pes2o/s2orc | v3-fos-license | Image-Guided Proton Therapy: A Comprehensive Review
Simple Summary In proton therapy, there is a sharp peak in the delivered dose followed by a rapid falloff, known as the Bragg peak, which is not present with photons. This allows for treatment plans that deliver lower doses to normal tissue than can be achieved with photons. It also requires a high degree of accuracy and precision of delivery due to the short distance between areas of high and low dose. Image guidance allows for better visualization of the target and more accurate delivery of proton (and photon) radiation. The equipment used to deliver proton therapy differs in several ways from that of photon radiation, which impacts the methods used for image guidance in proton therapy. This paper aims to summarize the various methods of image guidance in current proton therapy and their relative advantages and disadvantages, as well as areas for future improvements. Abstract Image guidance for radiation therapy can improve the accuracy of the delivery of radiation, leading to an improved therapeutic ratio. Proton radiation is able to deliver a highly conformal dose to a target due to its advantageous dosimetric properties, including the Bragg peak. Proton therapy established the standard for daily image guidance as a means of minimizing uncertainties associated with proton treatment. With the increasing adoption of proton therapy over time, image guidance systems for this modality have been changing. The unique properties of proton radiation present a number of differences in image guidance from photon therapy. This paper describes CT and MRI-based simulation and methods of daily image guidance. Developments in dose-guided radiation, upright treatment, and FLASH RT are discussed as well.
Introduction
Image guidance for external beam radiation therapy has revolutionized the field of radiation oncology. Image guidance is the use of imaging at the pretreatment or treatment stage that leads to an action that improves or verifies the accuracy of radiotherapy. Due to the Bragg peak, proton therapy (PT) is able to deliver a highly conformal dose to a target; however, the Bragg peak must be correctly placed in the target in order to realize the potential of proton therapy. Compared to photons, protons have additional uncertainties in the range, or penetration, of the beam in tissue; these uncertainties are predominately influenced by the densities of tissue through which the beam passes. Image guidance (along with proper immobilization) minimizes these uncertainties, which illustrates the greater importance image guidance has for proton therapy than for photon therapy. Further illustrating this point, when the first hospital-based proton therapy center began operation in 1990, image guidance was used for every fraction of treatment that was delivered [1].
The patient must be positioned properly prior to treatment. This can be achieved by planar X-rays, CT scanners, and visual-based methods. There is also increased interest in using MRIs for positioning, although this has some challenges. The target may also be monitored during the treatment for motion. There are differences in the ways that image guidance is used for protons vs. photons due to the intrinsic properties of protons as well as the differences in hardware between linear accelerators and proton gantries. Dose-guided radiation therapy seeks to spatially measure doses from the byproducts of proton radiation. Finally, FLASH radiotherapy, an ultrahigh dose rate method of external beam radiation, which will involve predominantly protons, has its own set of unique issues for image guidance that will be mentioned.
Imaging for Simulation
Different methods of imaging have differing benefits and drawbacks when used for simulation (See Table 1).
CT-Based Simulation
One key difference between protons and photons when using CT scans for planning is the additional step of deriving proton-stopping power from Hounsfield Units (HU). This process of deriving the actions of the charged proton from the actions of the chargeless, massless X-ray photon leads to greater range uncertainty, roughly 3.5% of the absolute range [2,3]. Additional margins have to be added to target volumes to compensate, decreasing the therapeutic benefit of protons.
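To make the conversion step concrete, the following is a minimal Python sketch of a piecewise-linear HU-to-relative-stopping-power (RSP) lookup of the kind produced by a stoichiometric calibration. The calibration points, function names and the example ray are illustrative assumptions, not values from any cited study.

import numpy as np

# Hypothetical calibration points (HU -> relative stopping power).
# A clinical curve would come from a stoichiometric calibration of the scanner.
hu_points = np.array([-1000.0, -200.0, 0.0, 100.0, 1500.0, 3000.0])
rsp_points = np.array([0.001, 0.80, 1.00, 1.07, 1.85, 2.70])

def hu_to_rsp(hu):
    # Piecewise-linear interpolation of RSP from HU, clipped to the table range.
    return np.interp(np.clip(hu, hu_points[0], hu_points[-1]), hu_points, rsp_points)

def water_equivalent_path_length_mm(hu_along_ray, step_mm=1.0):
    # Accumulate RSP along a ray of CT voxels to estimate the water-equivalent depth.
    return float(np.sum(hu_to_rsp(np.asarray(hu_along_ray))) * step_mm)

# Example: a ray through 30 mm of soft tissue (~40 HU) and 20 mm of bone (~800 HU).
ray = [40.0] * 30 + [800.0] * 20
print(water_equivalent_path_length_mm(ray))

The range uncertainty quoted above is then typically applied as a margin on top of the water-equivalent depth computed this way.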
Dual-energy CT can be used to better differentiate the composition of materials [4] and further reduce the range uncertainty [5,6]. Another solution is the use of proton CT, which allows for direct reconstruction of proton stopping power, although these are currently not in widespread use due to issues with spatial resolution [7]. One group has compared images from a clinically realistic proton CT scanner to those from photon CT scanners and found proton-stopping power measurement discrepancies of up to 40% in regions with mixed content of air, soft tissue, and bone, such as sinuses [8]. A comparison between a dual-energy CT scanner and a proton CT prototype in calculating relative stopping power (RSP) of phantoms of a known RSP showed that both scanners were accurate to within 1%, that the proton CT was slightly more accurate than the dual-energy CT scanner, despite characteristic artifacts reducing the accuracy of the proton CT [9].
MRI-Based Simulation
MRI-only simulation for treatment planning has also been explored. One benefit of this approach is the superior contrast with MRI when compared to CT. This also may eliminate the need for registration of an MRI to a CT simulation, which introduces additional uncertainty, especially when an MRI is obtained at a different timepoint or in a different position from the simulation CT. Possible drawbacks include an initial unfamiliarity with performing MRI simulation as well as possible incompatibility and artifacts with metallic hardware such as older pacemakers [10].
Another drawback of MRI-only simulation relevant for both proton-based (as well as photon-based) treatment is a possible geometric distortion of images due to changes in the magnetic field uniformity from both the MR system as well as the patient's body itself. An excellent review by Schmidt and Payne goes into further detail [11].
The main challenge for implementing MRI-only simulation is that the MRI data needs to be converted into a pseudo-CT dataset for patient setup and dose calculation. One strategy includes assigning electron density values to predefined tissue levels [12,13]. Another involves the creation of an MR-to-CT image atlas, with the registration of the patient's MRI to the atlas and conversion of the MRI intensity to HU values [14]. Each comes with drawbacks, which makes them not ideal for MRI-only simulation use [15].
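As an illustration of the first strategy, the sketch below assigns bulk HU values to a small set of tissue classes segmented from an MR image; the class labels and HU values are hypothetical placeholders, since clinical implementations differ.

import numpy as np

# Hypothetical bulk HU values for predefined tissue classes.
BULK_HU = {"air": -1000, "fat": -100, "soft_tissue": 40, "bone": 700}

def pseudo_ct_from_labels(label_map):
    # Map an integer tissue-class image (0=air, 1=fat, 2=soft tissue, 3=bone) to HU.
    lut = np.array([BULK_HU["air"], BULK_HU["fat"],
                    BULK_HU["soft_tissue"], BULK_HU["bone"]], dtype=np.int16)
    return lut[label_map]

# Example: a toy 2 x 3 label image.
labels = np.array([[0, 2, 2], [3, 2, 1]])
print(pseudo_ct_from_labels(labels))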
More recently, there has been interest in developing methods of automated planning in order to improve the workflow and reliability of MR-based planning [13]. One group has used a machine learning-based method to generate pseudo-CT images from pretreatment MRIs and retrospectively analyzed the dose-volume histogram metrics from these pseudo-CTs compared to their actual plans, which were made for the CT's acquired pretreatment. They found that this model created accurate pseudo-CT images with good agreement from their simulation CT [16]. The value of pseudo-CT in MRI-PT has yet to be validated prospectively to the authors' knowledge.
Imaging for Pretreatment Positioning
Various imaging modalities are used for pretreatment positioning (See Table 2), often in complement with one another. Due to the increased sensitivity of dosimetry to patient positioning, image guidance was implemented early in the history of proton therapy [5]. Proton treatment systems have kV X-ray tubes mounted in the nozzle for a beam's-eye-view setup as well as orthogonal imagers, which can be used prior to volumetric imaging. These can be moved into and out of the beam path prior to treatment. Daily images can be taken during setup and compared to the digitally reconstructed radiographs (DRRs) from the time of the simulation, while the corrective moves can be performed based on the bony anatomy [17], often on a robotic 6-degree-of-freedom couch [18][19][20]. One caveat is that these can lengthen the path and increase the spot size for pencil beam scanning (PBS) gantries if downstream of the beamline vacuum window. This has been avoided in some systems by placing the X-ray beams on the robotic C-arms, behind the last gantry bending magnet, or attaching them to rooms instead of gantries [21]. One of the reasons for the early adoption of orthogonal kV radiography was the poor quality of proton radiography at the time [22]. One downside of orthogonal X-ray is that there is not as much information about possible tissue changes along the beam path as with volumetric imaging, which could lead to overshooting or undershooting [22]. Moreover, since the virtual proton and XR sources are not perfectly coincident with one another, the XR tube only approximates the beam's eye view of the proton beam [23].
Fiducial Markers
In addition to using bony anatomy for the daily setup, fiducial markers may also be used for an additional setup. The advantage of using fiducial markers is that they may be placed in soft tissue, in close proximity to the target, and may provide a better surrogate for the location of the target than bony anatomy. Intraprostatic fiducial markers are placed, which allows for daily orthogonal kV imaging for IGRT of the prostate with both passively scattered protons [24] and PBS proton therapy [25]. This allows for a two-step setup, where the couch adjustments are performed based on the bony anatomy first. Then, another adjustment is performed based on the fiducial markers, or if the displacement of the markers is unacceptably high, the patient is not treated but instead undergoes preparation for treatment (bladder filling and rectal emptying), before repeating the setup.
For uveal melanoma treatments, tantalum clips are placed around the tumor behind the globe in the operating room prior to simulation. In the simulation, there was immobilization with a rigid mask and bite block, and the eye is fixed using stimulus-directed light. Pretreatment, orthogonal kV X-rays are taken and aligned with those taken at the time of the simulation [26]. At Loma Linda University, patients are treated with partial breast irradiation with protons, which can present a challenge due to the inter-and intrafraction motion of the target. Patients lie prone supported by vacuum bags with a custom low-density foam around the breast and plastic breast cup, which provides reproducible placement of the breast and limits respiratory motion. The kV X-rays are taken pretreatment and the radio-opaque surgical clips are aligned with the DRR to determine any setup corrections [27].
However, fiducial markers may not move exactly with the target tissue. In addition, fiducial markers have a more significant effect on dosimetry in proton therapy than in photon therapy. They may cause HU artifacts on the planning CTs, which could lead to an undershoot or overshoot of the proton beams. Moreover, protons may strike markers placed near the end of the beam path and cause an underdosing of the target region. This effect may be greater with gold markers, when markers are placed closer to the end of the beam or when fewer treatment beams are used [21]. A hydrogel fiducial marker has been developed for use with proton radiation and has shown negligible depth-dose perturbation in a phantom [28].
Fluoroscopy
Fluoroscopy has been used both pre- and intrafraction to tailor the respiratory gating parameters. Intrafraction fluoroscopy can be used to gate respiratory motion [29]. One downside to fluoroscopy is the significant imaging dose associated with this procedure [21].
3D CT Imaging
The next step in image-guided proton therapy arrived with the use of volumetric imaging. CT can provide a 3D definition of the anatomy and greater information about the location of soft tissues than 2D kV radiography. It also allows for the possibility of adaptive planning [30]. This may be especially useful in disease sites that are prone to target shifts due to tumor volume changes and weight loss, such as head and neck tumors [31].
CT on Rails
Patients can be moved on the treatment couch on rails from the gantry to CT scanners in the treatment room. Having the CT scanner several meters away from the proton gantry at an angle reduces the interference with the treatment workflow and neutron exposure; however, the CT and couch are at risk of colliding with one another in this setup. Furthermore, a CT on rails avoids the image quality issues of the cone-beam CT (CBCT) mentioned below. The major drawback to this approach is the lack of imaging capability at the treatment isocenter, with no repeat verification being performed after the setup correction. Further, the movement of the couch and CT will lead to additional time for image guidance [22]. At the Paul Scherrer Institute, patients are positioned and imaged outside the treatment room using the same CT scanner that is used in the simulation. Then, they are moved into the treatment room where verification X-rays are taken prior to treatment. This frees up time in the treatment room to allow for more time on the beam [32].
Cone-Beam CT
In addition to patients or CT scanners being moved for pretreatment imaging, CBCT usage is increasing for proton therapy. There are several options for the placement of the CBCT, which have been used by different facilities. It is typically placed on the gantry, and images are acquired as the gantry rotates, similar to linear accelerators [23]. For some facilities with partial gantries and limited space in the treatment room, the CBCT hardware may be installed onto the nozzle instead of the gantry, which must be retracted while not in use. Some facilities use a CBCT mounted on a robotic C-arm as well, which allows for imaging at the treatment isocenter or off of it. Due to the design, the source-to-imager distance is greater in CBCT for proton gantries vs. photon gantries, which reduces the number of scattered photons reaching the detector [22,23].
Daily pretreatment CT imaging does increase the imaging dose received by patients. Though this dose is relatively small in magnitude compared to the treatment dose [33], there is an increasing awareness that the cumulative dose may be significant, particularly in pediatric patients, due to the concern for late effects and the distribution of the radiation doses [34]. This has led some to recommend the use of fiducial markers and kV X-rays instead of CBCT for daily soft tissue visualization in pediatric patients [35]. However, one study found that the CBCT dose could be reduced by 81-98% and remain accurate when used in the setup based on bony anatomy [36].
There is decreased accuracy with CBCT compared to diagnostic CT due to increased scatter, field-of-view (FOV) limitations, ring artifacts, and patient size. This can lead to dose errors when used for adaptive planning [21,22]; however, with correction methods, these errors can be reduced to around 1% [37]. Synthetic CTs can be created by using a neural network to correct daily CBCT imaging errors. Then, these synthetic CTs can be used for accurate dose calculations in adaptive proton therapy [38].
Marker-Based
Visual image guidance can be used in addition to planar or volumetric X-ray-based imaging or can be used alone. External markers can be placed at the relevant locations on the patient's skin. One advantage of this approach is the high frequency of measurement, which is appealing for tracking mobile targets and gating delivery [21]. As above, the location of external markers does not always reflect the location of an internal target. Moreover, there needs to be consistency in the daily placement of the markers. This approach has already been used in patients for photon therapy, and reflective spheres have been placed on a breathing phantom for a scanning ion beam [39].
Surface-Based
Surface imaging consists of 3D mapping of the patient's surface generated by a computer. This may be especially useful for targets that have a variable shape and location with respect to bony anatomy. Another advantage to this approach is that there is no need for external marker placement. At Massachusetts General Hospital, patients underwent postmastectomy radiation with spot scanning proton therapy after localization with a surface imaging system [40]. Surface imagers can also validate the operation of couch moves and the position of the nozzle [21,41].
MRI Guidance
MRI-guided photon therapy has become more popular in recent years. It has superior soft tissue contrast and no moving parts for 3D imaging, which allows for real-time motion monitoring [23]. This raises the possibility of reducing the normal tissue dose with MRI guidance. PTV margin reduction studies have been performed with MRI-guided photon SBRT and have shown a decrease in acute toxicity compared to CT-guided photon SBRT [42]. Due to these advances in photon therapy, there is an interest in the possibility of using MRI guidance for proton therapy.
MRI guidance eliminates the problem of additional ionizing radiation in contrast to CT guidance [43]. The superior soft tissue resolution of MRI over CT and the possibility of real-time imaging make it an attractive option for image guidance.
Pretreatment daily MRI guidance would allow for direct visualization of tumors not easily visualized by a CT or planar X-ray.
One natural use of MRI for image guidance, given its high resolution and the absence of an additional dose for daily imaging, would be online adaptive planning. A major challenge in the use of MRI guidance for proton therapy, which does not apply to photons, is the interaction between the magnetic fields within the MRI scanner and the charged protons, which causes deflections in the beam path. Due to the number of corrections that need to be made to place the Bragg peak at the same location for adaptive planning based on real-time MRI (changes in scan settings due to the magnetic field, changes in energy due to changes in path length), pencil-beam scanning would be the only choice [44].
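To give a sense of the scale of these deflections, the following back-of-envelope sketch computes the relativistic gyroradius r = p/(qB) of a therapeutic proton in a uniform field; the 200 MeV and 1.5 T values are illustrative, not parameters of any specific MRI-PT system.

import math

M_P_MEV = 938.272      # proton rest energy, MeV
E_CHARGE = 1.602e-19   # elementary charge, C
MEV_TO_J = 1.602e-13   # MeV to joules
C = 2.998e8            # speed of light, m/s

def gyroradius_m(kinetic_mev, b_tesla):
    # Radius of curvature of a proton of given kinetic energy in a uniform field.
    total_mev = kinetic_mev + M_P_MEV
    pc_mev = math.sqrt(total_mev**2 - M_P_MEV**2)   # momentum times c, in MeV
    p_si = pc_mev * MEV_TO_J / C                    # momentum in kg m/s
    return p_si / (E_CHARGE * b_tesla)

# A 200 MeV proton in a 1.5 T field curves on a radius of roughly 1.4 m,
# so millimetre-scale deflections accumulate over a typical beam path.
print(gyroradius_m(200.0, 1.5))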
MRIs do not provide direct information on the stopping power, which can limit the feasibility of the online adaptive MRI-PT. Weekly offline on-treatment MRIs have been shown to improve plan quality for pediatric patients undergoing intensity-modulated proton therapy (IMPT) for CNS malignancies [45]. It has been shown that dose calculations based only on daily MRIs converted to synthetic CTs are feasible [15].
There have been several recent advances in making MRI-guided radiation therapy ready for clinical use. Proton gantries with MRI capabilities are currently under construction. Studies have been conducted that investigate the feasibility of MRI-only workflows [44]. There have been investigations into the hardware required to deliver MRI-guided PT, including magnet design, radiofrequency shielding requirements, rotation of the couch and gantry, measurement of the beam in the setting of strong magnetic fields, and correction of the beam [44]. One group has performed work to account for beam deflections from the fringe and imaging magnetic fields by using gantry angle offset and PBS nozzle skew along with patient-specific optimization within a PBS system [46].
PET
One of the limitations of PET-based range verification for protons is the spatial mismatch between the Bragg peak and the PET activity signal.
In contrast to photon-based external beam radiation therapy, which can be measured with electronic portal imaging devices (EPIDs) after the beam has passed through the patient, the protons stop in the patient, making direct detection of the dose impossible. However, proton radiation produces byproducts such as positron and gamma radiation as it interacts with matter in a patient. Measurement of these byproducts in space and time allows for the location of the dose and is an area of ongoing research. Online monitoring of the dose may reduce range uncertainty and allow for further reduced margins, decreasing the dose to normal tissues even more. This may also allow for a better ability to determine when a re-simulation scan needs to be performed [47].
The challenges, which include registration issues from positioning errors in offline imaging and the cost and time demands of developing positron emission tomography (PET) systems designed specifically for on-beam measurement of dose in vivo, have led to limited clinical use at this time [48]. A recent study [49] conducted at the National Center of Oncological Hadrontherapy (CNAO, Pavia, Italy) analyzed the in-beam PET data generated from eight patients treated with proton therapy at their center. They found that the standard deviation of interfraction PET activity profiles was 2.3-2.5 mm for patients without anatomical changes during treatment. Furthermore, large variations in PET range data correlated well with areas of anatomical change found on CT scans taken during the treatment course, suggesting that interfraction PET data can be used to detect morphological changes that may have a significant impact on dosimetry.
Positronium atoms consist of a positron bound to an electron. Roughly 40% of the positron annihilations in vivo occur via the formation of positronium atoms [50]. Parapositronium decays into two photons, while ortho-positronium decays into three photons. The rate at which these three-photon decays happen is proportional to the level of oxygen concentration. Therefore, positronium imaging may allow for the identification of hypoxic tissues [48]. A new PET detection system may allow for both range monitoring [51] as well as positronium imaging [52].
Prompt Gamma
Prompt gamma rays are produced instantaneously and have almost no interaction with the patient as they exit [21]. Detecting these gamma rays may allow for the precise location of the Bragg peak in three dimensions and the determination of the dose online in real time. Several groups are developing systems for prompt gamma detectors, but there has not currently been any widespread adoption of prompt gamma detectors in patients [53]. One system is the use of a slit camera detector system, whereby prompt gamma radiation is emitted along the proton radiation path and passes through a tungsten slit collimator below the patient onto a floor-based segmented detector, which allows for spatial resolution of the prompt gamma distribution. The absolute range of protons can be determined from this distribution. This is compared to a reference prompt gamma distribution generated from the planning CT.
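A minimal sketch of the comparison step: estimating a range shift by locating the distal falloff of a measured prompt gamma depth profile relative to the reference profile. The sigmoid profiles are synthetic stand-ins, and real slit-camera analysis is considerably more involved.

import numpy as np

def falloff_position_mm(profile, z_mm, level=0.5):
    # Depth at which the distal falloff first drops below `level` of the maximum,
    # refined by linear interpolation between the bracketing samples.
    threshold = level * np.max(profile)
    i = int(np.argmax(profile < threshold))   # first sample below threshold
    z0, z1 = z_mm[i - 1], z_mm[i]
    p0, p1 = profile[i - 1], profile[i]
    return z0 + (threshold - p0) * (z1 - z0) / (p1 - p0)

# Synthetic profiles: falloff at 120 mm (reference, from the planning CT)
# and at 123 mm (measured), i.e. a 3 mm overshoot.
z = np.arange(200.0)                           # depth in mm
ref = 1.0 / (1.0 + np.exp((z - 120.0) / 4.0))
meas = 1.0 / (1.0 + np.exp((z - 123.0) / 4.0))
print(falloff_position_mm(meas, z) - falloff_position_mm(ref, z))   # ~ +3.0 mm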
One group used a slit camera detector system to validate their predictions of the proton range with dual-energy CT vs. single-energy CT, taken daily from patients undergoing proton treatment for prostate cancer [54]. They reported that the integration of prompt gamma imaging adds about 1 min per treatment field to the clinical workflow and that dual-energy CT-based predictions of the stopping power ratio agree with prompt gamma imaging more than single-energy CT-based predictions.
Ionoacoustics
The heat generated from the energy losses from the generated protons causes thermal expansion of the tissues. This leads to thermoacoustic emissions at a practically instantaneous timescale. The measurement of these waves by ultrasound, known as ionoacoustics, to map dose delivery is also an area of active investigation [55].
Image Guidance for Upright Treatment
A significant portion of the cost of constructing a proton facility is due to the gantry that allows for different beam angles [56]. The large size of the gantry and the building required to house it makes up the majority of the cost.
One alternative is the use of a fixed horizontal beam line with a rotating chair positioning system, which may reduce costs, as it is less expensive to move the patient in the beam line than to move the beam around the patient. Previously constructed fixed beam lines are being repurposed by removing the old treatment chairs and installing new upright patient positioning systems [57], which allow for more flexibility in the treatment angles. Orthogonal kV X-rays can be acquired pretreatment, as with treatment in the lying position [58].
Although CT simulations may be performed in the lying position and the images converted to an upright position for planning [59], there are significant deformations in the anatomy between a lying position and upright treatment, due to the relative change in the direction of gravity. Thus, a dedicated CT scanner for upright daily image guidance is preferred for accurate treatment. Various models exist at certain proton centers and include scanners above the patient that descend for imaging alongside scanners that are mounted on columns and rotate around the patient, allowing for an isocentric setup. Proton CT for daily setup is facilitated by the fixed beamline and rotating patient chair. For further reading, Volz et al. provided an excellent review of upright patient setup and guidance [60].
FLASH RT
The FLASH effect is defined as a decrease in radiation-induced normal tissue toxicities with dose delivery at ultra-high dose rates, compared to conventional dose rates used clinically [61]. Treatment of the first patient with FLASH-RT was delivered with electrons to a cutaneous T-cell lymphoma with favorable outcomes for tumor control and toxicity [62]. Groups are working on delivering FLASH-rate protons in clinics [63].
A recent trial has shown the feasibility of using FLASH protons in patients [64]. In this trial, the Bragg peak was placed distal to the patient, causing this beam to behave like a photon beam while in the patient, meaning that there was no sharp dose buildup or falloff.
The very short delivery times for FLASH would decrease the need for the intrafraction motion management used with conventional dose rates. Conversely, a precise pretreatment setup verification is required to ensure that the treatment is delivered to the intended target [65]. The slow acquisition time of CBCT would limit its utility for real-time imaging. In contrast, MRI can provide information such as oxygenation and inflammation pre- and post-treatment, which may be useful for biological modeling [65]. One group has developed an inverse planning tool to optimize IMPT to pull the Bragg peak into the target volume and achieve FLASH dose rates in silico [66]. The higher dose-per-pulse rate of FLASH improves the ionoacoustic signal-to-noise ratio, which has led to an increased interest in this technology [65].
Conclusions
The importance of image guidance for proton therapy is clear due to the dosimetric properties of protons. Advances in image guidance in this space have generally followed advances in photon image guidance, possibly due to the much greater numbers of linear accelerators vs. proton gantries. However, as the number of proton gantries increases (119 gantries operational as of January 2023 with 37 under construction and another 29 in planning as of March 2023 [67]) and the number of patients treated with protons grows, it is important that there are continued advancements in image guidance for proton therapy so that it can fulfill its potential dosimetric benefits.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-05-05T15:01:49.088Z | 2023-04-29T00:00:00.000 | {
"year": 2023,
"sha1": "a6d5f9c87dfffb843c88502e4a18d9a3466d524d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/cancers15092555",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c44aae48c963d1a59355274dc452c2ebc6a8ea60",
"s2fieldsofstudy": [
"Physics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
116289478 | pes2o/s2orc | v3-fos-license | Determining the volumetric characteristics of a passive linear electro-magnetic damper for vehicle applications
Previous research has shown that passive electromagnetic damping could be feasible for automotive applications, but there would be a severe weight penalty, particularly in lightweight vehicles. With modern advances in permanent magnets, the feasibility of passive electromagnetic dampers is re-examined. A model of a permanent magnet and coil system is developed and validated at small scale. This magnet model is used to model a dynamic damper system, which is again tested. This dynamic model is then scaled up to a two-degree-of-freedom system to determine the damping for a quarter car model. Two damper designs are created, each of which would produce a damping coefficient of 1,600 Ns/m. The proposed dampers require more than three times the volume of the equivalent hydraulic dampers. Subjects: Transport & Vehicle Engineering; Automotive Technology & Engineering; Electromagnetics & Communication
Introduction
In automotive applications, common oil dampers have been developed to a high level. However, oil dampers provide less benefit for lightweight vehicles. For these and other reasons, research has turned to investigating alternative technologies for damping applications. Karnopp (1989) suggested that it is feasible to build linear electro-magnetic (e.m.) dampers for use in vehicle suspensions. The advantages of such a damper included low static friction and the fast control speed of the damper in active and semi-active applications.
ABOUT THE AUTHOR
A. Fow is a Teaching Fellow in the School of Engineering at the University of Waikato. His main area of research is the use of active, passive and semi-active electromagnetic (EM) dampers in vehicles. The research involves mathematical modelling and the use of software, such as Matlab and VisSim, to develop numerical solutions. He uses experimentation to validate his solutions and is a strong advocate of "real-time" control for rapid and flexible control system development. The fundamental EM work is now being developed for electric powered agricultural vehicles, with full scale testing about to be undertaken.
PUBLIC INTEREST STATEMENT
Shock absorbers are an important component of a car's suspension system. A well designed suspension system is essential for the safety of a vehicle's occupants. Most modern vehicles use suspension system designs that date back at least 50 years. An investigation is conducted to see if an electromagnetic damper system can achieve the performance required for use in modern vehicles, with a particular focus on modern lightweight electric cars. A small scale mathematical model of the car's motion and the electromagnetic damper are created and then tested to see if they are valid. The model is then scaled up to model larger dampers for a practical automobile system. It is shown that electromagnetic dampers that develop sufficient forces for realistic use are heavier than the equivalent commercial dampers.
The use of a regenerative EM damper for the generation of power for use in car systems has also been proposed. While the power regenerated is small relative to the overall power demands of an automobile, this power could be used for the operation of the damper itself, for supplying electrical energy to auxiliary systems in the automobile or for more efficient propulsion of the vehicle (Graves, Iovenitti, & Toncich, 2000; Guo et al., 2016; Li, Zuo, Luhrs, Lin, & Qin, 2013; Kim & Okada, 2002; Satpute, Singh, & Sawant, 2014; Zuo, Scully, & Shestani, 2010).
The concept of passive electromagnetic damping is well known and is based on Faraday's law. Taking the case of a single cylindrical magnet and a coil of wire, damping can be achieved as characterized in (Agutu, 2007). Unlike conventional dampers, the damping is dependent not only upon the velocity of the magnet relative to the coil, but also their relative positions. Karnopp (1989) proposed a damper that had a magnetic field that was constant over the range of motion of the damper. Because the magnetic field is constant, the forces generated by the EM damper could be modelled by (1), reconstructed below. The number of loops of wire in the coil will affect the force generated by the damper. However, while a fine wire will produce many more loops of wire, the reduced diameter of the wire and the extra length of the wire will produce a greater resistance. The extra resistance of a finer wire cancels out any increase in force produced by the increase in the number of loops in the coil. Therefore, the force generated by Karnopp's (1989) proposed damper is not dependent upon the number of windings used in the coil; rather, the mass of the copper is the main determinant. Karnopp was limited in the force generated due to the strength of the then available permanent magnets. As can be seen in (1), the force generated by the damper is a function of the magnetic field squared; thus the increase in magnetic field strengths using modern Neodymium magnets can produce an appreciable increase in the force generated.
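A plausible reconstruction of (1), assuming Karnopp's constant-field result for a coil of total active wire length l and resistance R moving at velocity \dot{z} through a uniform field B, is (in LaTeX notation)

F_D = \frac{(Bl)^2}{R}\,\dot{z} \qquad (1)

which makes the dependence on the square of the magnetic field explicit.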
While research into electromagnetic damping for automotive applications has continued, the use of a simple coil/permanent magnet damper is not well modelled for this application. This is in part due to the difficulty in modelling the non-linear nature of the magnetic field produced by a coil/magnet damper. The model to be developed for research into this field is required to determine the damping forces for an electro-magnetic damper in passive, semi-active and fully active modes, as well as determining the voltage and power generated in regenerative damping mode. The complete model should be able to be used in the initial modelling stage as well as with hardware-in-the-loop controllers for development and for practical application.
For the construction of a complete model, two separate components are required: the first is a model of the magnetic field generated, and the second is a model of the damper using the generated field. For a passive e.m. damper both the internal and external fields of the magnet have to be determined. In the determination of the near field and internal field of a cylindrical magnet there are several methods that are used.
A common numerical approach is the use of the Finite Element Method (FEM) of analysis. This approach typically determines the magnetostatic field using Maxwell's equations of electromagnetism (Zienkiewicz, Taylor, & Zhu, 2005) and has been used in cases such as (Mahmoudi, Kahourzade, Rahim, & Hew, 2013; Kazan & Onat, 2011). In typical applications a high degree of non-linearity of the field can exist, and the solution to such problems generally uses an iterative approach in which a sequence of linearised problems is solved.
Another numerical approach is the modelling of a cylindrical magnet as an air cored solenoid. This concept is well known and allows the magnet and the damper to each be modelled as current carrying loops. The Lorentz force between the magnet and the coil can then be determined using the Biot-Savart law (Ziolkowski & Brauer, 2010), by determining the mutual inductance (Akyel, Babic, & Kincic, 2002; Akyel, Babic, & Mahmoudi, 2009; Babic & Akyel, 2008a) or using the filament method (Babic & Akyel, 2008b).
In this work a static model of the magnetic field is created in Matlab, the magnet being modelled as an air cored solenoid and the Biot-Savart law being used to determine the magnetic field at any point. Matlab solvers are used to obtain a solution for the elliptic integrals. To determine the flux within the damper coil at any point, the magnetic field is then integrated from the z axis to the coil radius and integrated along the damper coil's length.
A dynamic model of the damper is created in VisSim. This is a visual block diagram language that is used for the numerical simulation of dynamic systems. The VisSim model is used to simulate the behaviour of a one- or two-degree-of-freedom system with a passive e.m. damper. This model is validated and then scaled to determine the size of the damper required for use in a lightweight vehicle. The specific volumetric damping is then compared to determine if the damper is suitable for use in modern automobiles.
Theory of the magnetic field
Faraday's law is used to determine the forces generated by the damper, and the standard description of a damped system gives the acceleration, velocity and displacement of the sprung mass. The magnet is modelled numerically as an air cored solenoid and the flux generated by the magnet is determined over the range of displacement of the masses. This is recorded as a lookup table for use in the dynamic model.
Faraday's law was first described independently by Michael Faraday and Joseph Henry in 1831. It is given as (2), where the magnetic flux is dependent upon the geometry of the magnet and the relative position of the magnet and the coil. The power generated by the coil is given by (3) and (4).
The force generated by the coil at any instant is given by (5). Substituting (4) into (5) gives (6); this force is then applied to the magnet/coil system. Reconstructions of (2)-(6) are given below.
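A plausible reconstruction of (2)-(6) from the surrounding definitions (a coil of resistance R, enclosed flux \Phi and relative axial velocity \dot{z}) is

\varepsilon = -\frac{d\Phi}{dt} \qquad (2)

P = \frac{\varepsilon^2}{R} \qquad (3)

P = \frac{1}{R}\left(\frac{d\Phi}{dz}\right)^{2}\dot{z}^{2} \qquad (4)

F_D = \frac{P}{\dot{z}} \qquad (5)

F_D = \frac{1}{R}\left(\frac{d\Phi}{dz}\right)^{2}\dot{z} \qquad (6)

where (4) follows from (2) and (3) via the chain rule d\Phi/dt = (d\Phi/dz)\,\dot{z}.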
The magnet/coil system is modelled as a single degree of freedom system with the motion along the z axis. The coil is fixed and the magnet is free to move, while attached to a spring. The magnet and coil provide a damping force for this system. A freely vibrating damped system is described by (7). In the case of a single magnet and coil, the force of the damping is dependent upon both the velocity and position. This is now described as (8). For this equation to be complete, two additional terms are required: F_C, to represent the Coulombic damping, and c_nat, to represent the natural damping of the system. The complete equation for the electro-magnetic damper is given as (9); plausible reconstructions of (7)-(9) follow.
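A plausible reconstruction of (7)-(9), consistent with (6) and the terms defined above, is

m\ddot{z} + c\dot{z} + kz = 0 \qquad (7)

m\ddot{z} + \frac{1}{R}\left(\frac{d\Phi}{dz}\right)^{2}\dot{z} + kz = 0 \qquad (8)

m\ddot{z} + \frac{1}{R}\left(\frac{d\Phi}{dz}\right)^{2}\dot{z} + c_{nat}\dot{z} + F_C\,\mathrm{sgn}(\dot{z}) + kz = 0 \qquad (9)

where the sign function ensures that the Coulombic friction always opposes the motion.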
The magnet is modelled as an air cored solenoid with a finite number of loops. The magnetic field is then generated for each loop of wire with a current flowing through it. The superposition principle is then used at each point measured to sum the magnetic field from each loop in the solenoid to produce the total magnetic field.
For a single loop as shown in Figure 1, using the Biot-Savart law of magnetostatics, Kuns (2007) derives the magnetic field at any point in space as (10), using polar coordinates where the loop is centred on the z axis and lies in the plane of the r axis. The complete elliptic integrals of the first and second kind are both functions of k^2, in the form K(k^2) and E(k^2), where k is given by (11).
In determining the change in flux, only the z component of the field is required and is given by (12).
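A plausible reconstruction of (10)-(12), assuming the conventional Biot-Savart result for a loop of radius a carrying current I: the axial component, the only one needed for the flux, is

B_z(r,z) = \frac{\mu_0 I}{2\pi}\,\frac{1}{\sqrt{(a+r)^2+z^2}}\left[K(k^2) + \frac{a^2-r^2-z^2}{(a-r)^2+z^2}\,E(k^2)\right] \qquad (12)

with

k^2 = \frac{4ar}{(a+r)^2+z^2} \qquad (11)

Here (10) would be the corresponding expression for the full field vector, of which only the component above is retained in (12).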
For the complete model of a coil-magnet system, the coil is modelled as a solenoid with multiple loops and the permanent magnet is also modelled as a solenoid with multiple loops. The flux through each loop of the coil is calculated by (13).
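A plausible reconstruction of (13): for a coil loop of radius r_c at axial position z_i relative to the magnet, the enclosed flux is the axial field integrated over the loop area,

\Phi_i = 2\pi \int_0^{r_c} B_z(r, z_i)\, r\, dr \qquad (13)

with the total flux linkage obtained by summing \Phi_i over all loops of the coil, and B_z itself a superposition over the loops modelling the magnet.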
Substituting (13) into (9) gives a complete numerical model of the passive damper.
The dynamic damper model
To describe the magnet, the magnetic flux along the z axis is modelled using (13). This is then used to determine the enclosed flux of a coil along the axis of motion for a distance of 60 in 1 mm increments. Symmetry about the centre of the coil is used along the axis of motion to reduce the calculation times in modelling the flux on each side of the magnet. The magnetic field and fluxes are determined using the functional language MATLAB. These modelled fluxes are then formed into a lookup table using linear interpolation for use in the dynamic damper simulation.
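For readers without Matlab, the following is a minimal Python sketch of the same flux-table computation, using scipy's complete elliptic integrals for the single-loop field of (11)-(12). The coil radius, loop discretisation and equivalent solenoid current are placeholders loosely based on the prototype described below, not the exact model parameters.

import numpy as np
from scipy.special import ellipk, ellipe     # complete elliptic integrals K(m), E(m)
from scipy.integrate import trapezoid

MU0 = 4e-7 * np.pi

def loop_bz(r, z, a, current):
    # Axial field of a circular current loop of radius a, evaluated at (r, z).
    m = 4.0 * a * r / ((a + r) ** 2 + z ** 2)          # parameter m = k^2 of (11)
    pref = MU0 * current / (2.0 * np.pi * np.sqrt((a + r) ** 2 + z ** 2))
    return pref * (ellipk(m) + (a**2 - r**2 - z**2) / ((a - r) ** 2 + z**2) * ellipe(m))

def coil_loop_flux(z_loop, magnet_loop_z, a_mag, current, r_coil, nr=200):
    # Flux through one coil loop at z_loop, superposing the solenoid loops of the magnet.
    r = np.linspace(1e-6, r_coil, nr)                  # avoid the singular point at r = 0
    bz = sum(loop_bz(r, z_loop - zm, a_mag, current) for zm in magnet_loop_z)
    return trapezoid(2.0 * np.pi * r * bz, r)          # integrate Bz over the disc, as in (13)

# Placeholder geometry: a 28 mm long magnet of 9.5 mm radius modelled as one loop per mm,
# with an assumed equivalent current per loop and an assumed 12 mm coil radius.
magnet_z = np.linspace(-0.014, 0.014, 28)
positions = np.arange(-0.060, 0.0601, 0.001)           # +/- 60 mm in 1 mm steps
flux_table = np.array([coil_loop_flux(z, magnet_z, 0.0095, 30.4, 0.012)
                       for z in positions])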
The dynamic damper model is created in the numerical modelling package VisSim, as (10). The flux at any position is determined from the lookup table with a linear interpolation. Factors for Coulombic and natural viscous damping are included.
The model is then given a step input and the subsequent displacements of the spring-damper system are compared to the measured displacements of an electromagnetic damper. The damping coefficient of the total system is then determined; from this and the measured natural damping of the system, the damping coefficient of the damper is obtained.
For the dynamic simulation of a one degree of freedom system, Equation (14) is developed in VisSim, where k is the spring constant, z_1 is the position of the sprung mass, z_0 is the position of the road surface, c is the damping coefficient and m is the sprung mass in kg. To this is added a term, c_nat, for the natural damping coefficient, which is the damping of the system without the damper, F_C, which is the Coulombic damping, and F_D, which is the force exerted by the passive electromagnetic damper. This gives (15); plausible reconstructions of (14) and (15) follow.
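A plausible reconstruction of (14) and (15) from the definitions above is

m\ddot{z}_1 + c(\dot{z}_1 - \dot{z}_0) + k(z_1 - z_0) = 0 \qquad (14)

m\ddot{z}_1 + c(\dot{z}_1 - \dot{z}_0) + c_{nat}(\dot{z}_1 - \dot{z}_0) + F_C\,\mathrm{sgn}(\dot{z}_1 - \dot{z}_0) + F_D + k(z_1 - z_0) = 0 \qquad (15)

where F_D is the electromagnetic damping force of (6) evaluated at the relative velocity; in the passive experimental rig the explicit viscous term c may be omitted, with the damping supplied entirely by F_D, c_nat and F_C.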
Materials
The magnet chosen to model is a neodymium rare earth magnet rated as an N35; it is 0.028 m long and 0.0095 m in radius with a pole strength of 0.5778 T. A computer model of the magnetic field is created in MATLAB Simulink. The modelled field produced is shown in Figure 2. The North/South poles are along the z axis. The figure shows one quarter of the magnetic field of the magnet measured from the centre. The field is rotationally symmetrical and the poles are also symmetrical. The magnetic field of the prototype magnet was measured at 2.5 mm intervals using an Alphalab Model 1 DC Magnetometer.
The absolute differences between the measured and modelled values are represented in Figure 3. For the majority of the region modelled, the difference between the modelled field and the measured field is less than 0.01 T. The largest differences between measured and modelled results occur where the magnetic field is changing the fastest and where the field is the strongest. The average error between the measured and modelled fields is 3.5%. When the differences are averaged over the entire field, the measured field readings are 1.1% larger than the predicted values. The largest absolute difference between the measured and modelled field occurs at the physical pole of the magnet, where the limitations of the measuring equipment are observed. Even small differences in displacement, such as the thickness of the magnetic probe, make a significant difference to the results. This method of testing only measures the external field of the magnet and cannot measure the internal component of the magnetic field. The peak magnetic field strength is physically inside the magnet and drops off in a non-linear manner with distance from the poles. The component of the magnetic field physically inside the magnet represents the major component of any flux that will be used in the model; therefore an independent method is required to test for the internal field. By using the modelled field, the flux of the magnet travelling through a coil is determined. By passing a magnet through the prototype of the modelled coil, a voltage is produced. This is measured and converted to an electromotive force for calculations. The rate of change of flux is then determined from this as in Figure 4.
Summing over time the total flux of the magnet in the coil is determined to be 0.004501 Wb. Summing the model flux of the damper/coil system gave a modelled flux of 0.00488 Wb. There is an agreement between the modelled and measured data of 92.2%.
Vibration analysis
A small test rig was constructed for initial validation of the damper model as shown in Figure 5. This is adjustable in all three axes to ensure that the magnet does not mechanically interact with the coil. The damper itself consists of 120 turns of 1 mm copper wire wound as three layers of 40 turns each, wound onto a PVC core. An accelerometer is attached to the sprung mass and data is recorded with a Signal Analyser.
A series of trials is conducted with the weight of the damper and hanger of 100 g plus additional weights of 300-1,000 g. A series of runs is also conducted with the damper in place but without damping being applied. These data runs are used to determine the natural damping and the Coulombic damping of the system.
In the time domain
To determine the Coulombic friction and the natural damping of the system, a run of the physical system being modelled is conducted. During modelling, visual inspection of the acceleration-time graphs for the model and for the recorded data is used to determine these values, so as to produce a displacement-time graph that matches a known experimental result for the rig being used. The combined graph for the natural damping coefficient and the Coulombic damping factor for an added weight of 1 kg is determined as in Figure 6. For the values given, the two graphs merge to the point where they are almost indistinguishable from each other. This process is repeated for every weight that is used during the experimentation. A series of runs is conducted using added weights from 300-1,000 g. A typical result is shown in Figure 7. This represents an added weight of 1,000 g, with the magnet modelled as a solenoid carrying a current of 30.4 A. As can be noted, there is a difference in magnitude between the predicted and modelled acceleration. This is consistent in all readings and is a systematic uncertainty caused by calibration issues and resolution in the numerical modelling.
The damping coefficient of the total damper for various weights is given in Table 1. The relative agreement between the modelled and measured data is also given in Table 1. This showed an agreement between the modelled and the measured damping of the magnet-coil damper system of between 75% and 93%. The mean agreement is 84%.
In the frequency domain
To determine the effectiveness of the passive e.m. damper and to verify the accuracy of the modelled passive damper, the passive damper was tested at frequencies from below the resonant frequency of the system to approximately 9 Hz. The upper frequency was limited by the experimental apparatus. A series of experimental runs are conducted over the frequency range with a pseudo-sinusoidal input and compared to the theoretical model. The magnet used is the same 29 mm long neodymium magnet that was previously modelled. The coil is the single coil as described previously. The frequency of the VisSim model is set at 1,000 Hz to avoid any aliasing effects and to reduce numerical errors.
A comparison is conducted between the measured damped motion of the system and the modelled damped motion of the system. The results are given in Figures 8 and 9. The values displayed are the absolute r.m.s. velocity of the damper at the various frequencies for experimental runs of over 30 s.
At the near-resonant frequency of 1.58 Hz the modelled displacement and velocity increased several-fold. The physical prototype has a limited allowable range of motion and is therefore much more limited in both displacement and velocity. At frequencies over 4-5 Hz the drift of the measuring accelerometer, when integrated, can become larger than the actual velocity measurements. In the range of 1-4 Hz there is better than 93% agreement between the measured and modelled values. However, for frequencies above 4 Hz the agreement drops rapidly to 38%. This drop in accuracy is caused by accelerometer drift creating noise larger than the signal that is being measured. These effects are noted for both the damped and undamped motion. At the higher frequencies the relative performance of the damped and undamped systems is therefore compared, rather than the absolute values.
While the passive damper is sufficient to produce observable damping effects, the magnitude of this damping is sized for research purposes rather than practical damping applications. To determine whether such a damper is of practical use, a larger scale model is required.
The small scale model
In this model a total mass of 500 g represents the lower mass limit of the system; below this point the model no longer fully represents the damper/mass system as the relative uncertainties accumulate. In all cases the predicted damping is less than the recorded damping. As observed in Figure 8, there are some calibration issues between the recorded data and the model, which should produce a slightly higher than predicted damping factor. It is also noted that the coil is wound onto a paramagnetic substance; this property is not included in the model and should again mean that the model predicts a lesser value. A further assumption is the axisymmetric nature of the field within the magnet, which is observed to be not fully accurate. The resistance of the coil is important in the final force generated. The resistance of the coil is accounted for, but the reactance of the coil for the alternating current caused by the moving magnet is not modelled; an analysis of the reactance of the coil produced a difference of less than 0.1% in the impedance of the coil. During testing the system showed sensitivity to the initial conditions, and the measurements with a total mass of 1,000 g show some of this sensitivity in the results. Resolution issues due to the scale of this experiment are apparent during testing. Any disparity between the model and the measurements will cascade, as the dynamic model is built on the solutions produced by the static model. The static model showed a greater than 90% agreement and the total model produced an average agreement of 84%. Having good confidence in the validated model, it was then applied to a full size, lightweight electric vehicle as shown in Figure 10. The magnetic force, F_D, was calculated from Equation (6). A two degree of freedom model was simulated with F_D being varied until it produced a typical damped response for the sprung mass. This iterative approach gave the parameters for the number of coils, magnets and windings, as explained in detail in Section 6.
Scaling up the passive e.m. damper for a lightweight vehicle
To determine the feasibility of the passive electromagnetic damper as a component for full size automobiles a two degree of freedom model of a car suspension system is created.
For a quarter car, the masses can be modelled as a two degree of freedom system as in Figure 11, where m_1 is the unsprung mass/tyre, m_2 is the sprung mass/car body, z_0 is the displacement of the road surface, z_1 is the displacement of the unsprung mass, z_2 is the displacement of the sprung mass, k_1 is the tyre stiffness, k_2 is the shock absorber spring stiffness, c is the damping coefficient of the tyre and F_D is the damper force. For a two degree of freedom system the equations of motion of the unsprung and sprung masses are given by Equations (16) and (17). As the damping from the tyres is usually considered negligible, the equations simplify to (18) and (17). This is modelled in VisSim for a lightweight electric vehicle, similar to the University of Waikato Ultracommuter electric vehicle shown in Figure 10, using the values given in Table 2. These values give a resonant frequency of the sprung mass of 1.322 Hz and a tyre "hop" frequency of 10.41 Hz. A sketch of this two degree of freedom model is given below.
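A minimal Python sketch of the simplified quarter-car equations, (18) and (17), with tyre damping neglected. It reuses the em_force closure from the earlier sketch; the constants M1, M2, K1 and K2 are placeholders rather than the Table 2 values, chosen only so the sprung-mass resonance lands near the quoted 1.322 Hz and the tyre hop near 10.41 Hz.

```python
M1, M2 = 40.0, 193.0      # unsprung / sprung masses (kg) -- assumed
K1, K2 = 1.6e5, 1.33e4    # tyre / spring stiffnesses (N/m) -- assumed

def quarter_car_rhs(t, y, road):
    """Simplified quarter-car equations (tyre damping neglected); the
    damper force F_D acts between the sprung and unsprung masses."""
    z1, v1, z2, v2 = y
    z0 = road(t)                             # road surface displacement
    f_d = em_force(z2 - z1, v2 - v1)         # damper across the suspension
    a1 = (K2 * (z2 - z1) + f_d - K1 * (z1 - z0)) / M1
    a2 = (-K2 * (z2 - z1) - f_d) / M2
    return [v1, a1, v2, a2]

# Example: a 20 mm step bump at t = 0.5 s.
road = lambda t: 0.020 if t > 0.5 else 0.0
sol = solve_ivp(quarter_car_rhs, (0.0, 6.0), [0.0, 0.0, 0.0, 0.0],
                args=(road,), max_step=1e-3)
```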
A pair of larger magnets is modelled to assess suitability for use in a more powerful damper. Both are cylindrical and axially magnetised. Each of these magnets is matched with a coil the same length as the magnet and consisting of a single layer of wire. The properties of the magnets and their corresponding coils are given in Table 3.
For the damper to be effective, a damping ratio of 0.5 should be the minimum achievable figure. This requires a damping coefficient of 1,600 Ns/m. For a single coil and magnet the maximum damping coefficient achieved for the ND3522 magnet is 4.60 Ns/m and for the ND5550 the maximum achieved is 20.9 Ns/m. The weight of the magnet-coil system is 0.0934 kg for the ND3522 and 0.5106 kg for the ND5550. With the masses and spring stiffnesses involved a single magnet and coil of the types in Table 3 achieved only a small percentage of the required damping coefficient.
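As a rough consistency check (our arithmetic, not the paper's): a sprung mass resonating at the quoted 1.322 Hz has ω_n = 2π × 1.322 ≈ 8.31 rad/s, and a damping ratio ζ = c / (2 m ω_n) of 0.5 requires c = m ω_n. A target of c = 1,600 Ns/m then implies a sprung quarter-car mass of roughly 1600 / 8.31 ≈ 193 kg, which is plausible for a lightweight electric vehicle.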
To increase the performance the number of layers is increased to five. This increased the damping effect for the ND3522 to 23.5 Ns/m and, for the larger ND5550, to 106.3 Ns/m. The masses of the coil-magnet systems increased to 0.171 and 0.892 kg respectively. These damping coefficients remained insufficient to give the suspension system the required damping ratio.
The damping force is further increased by using a second coil, as illustrated in Figure 12(a). This second coil is wound in the opposite direction to the first coil and has a reversed polarity. This provides approximately twice the damping of the single magnet-coil system without requiring a second magnet, only a second coil. For a single magnet and two matched coils as in Table 3, each of five layers, the damping increased to 47.6 and 211.8 Ns/m. Still greater damping coefficients are obtained through the use of a stack of magnets and coils of opposing magnetic fields and polarities, as illustrated in Figure 12(b). In Fow and Duke (2016) two magnet/coil stacks were proposed, each of which acted as a two phase linear electromagnetic generator. These were both designed to produce a damping coefficient of approximately 1,600 Ns/m. The specifications of these dampers are given in Table 4.
Figure 11. A two degree of freedom suspension system.
A major consideration in any damper design is the physical space that the damper occupies. A damper made from ND3522 magnets and coils produces a stack 770 mm tall without any mounting and 920 mm tall at maximum extension; the volume of this stack is 2.177 litres. For a damper stack made from the ND5550 magnet-coil combination, the height is 800 mm minimum and 950 mm at maximum extension, and the volume is 3.078 litres. This compares to an equivalent commercial damper that is 550 mm long at full extension including all of the mountings. The dashpot on a hydraulic damper provides a specific volumetric damping of over 2,000 Ns/m per litre of volume, while the passive dampers provide 756 and 532 Ns/m per litre respectively. Where the volume of the damper is important, an electromagnetic damper is therefore not only heavier than the equivalent oil damper but occupies significantly more space and adds complexity.
Conclusions
Both the model of the magnet and the model of the dynamic system are validated. The model of the magnet produced a better than 96% agreement between the modelled and measured magnetic fields, which led to a 92% agreement between the modelled and measured flux generated by the magnet-coil model. Testing of the single degree of freedom damper model produced 83% agreement in the time domain and 94% agreement in the frequency domain, where the measured signal is sufficient for reliable measurements. The model was then scaled up to produce two proposed dampers, each of which would produce a damping coefficient of 1,600 Ns/m, which is considered sufficient for the damping of a lightweight vehicle. The two dampers occupied three to four times the volume of an equivalent hydraulic damper, even excluding covers, connecting rods, other damper "furniture" and any systems to remove the heat generated by the damper. It is therefore determined that there is no volumetric advantage in the use of the proposed damper in a purely passive mode. Further research should be conducted to determine whether the damper in semi-active or regenerative modes would justify the increased weight and volume of the passive e.m. damper.
"year": 2017,
"sha1": "cad1a920e78c2cb7eaf9ec2fc59635c0703034e6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23311916.2017.1374160",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "cad1a920e78c2cb7eaf9ec2fc59635c0703034e6",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
© European Geosciences Union 2005
We study the dynamics of the magnetospheric large-scale current systems during storms by using three different magnetospheric magnetic field models: the paraboloid, event-oriented, and Tsyganenko T01 models. We have modelled two storm events, one moderate storm on 25-26 June 1998, when Dst reached −120 nT, and one intense storm on 21-23 October 1999, when Dst dropped to −250 nT. We compare the observed magnetic field from the GOES 8, GOES 9, GOES 10, Polar and Geotail satellites with the magnetic field given by the three models to estimate their reliability. All models demonstrated quite good agreement with observations. Since it is difficult to measure exactly the relative contributions from different current systems to the Dst index, we compute the contributions from the ring, tail and magnetopause currents given by the three magnetic field models. We discuss the dependence of the obtained contributions to the Dst index in relation to the methods used in constructing the models. All models show a significant tail current contribution to the Dst index, comparable to the ring current contribution during moderate storms. The ring current becomes the major Dst source during intense storms.
Introduction
Despite the many investigations of storm dynamics made during recent years, the measure of storm intensity, the Dst index, and the relative contributions to it from different current systems during a storm are still under discussion. The Dst index was thought to be well correlated with the inner ring current energy density from storm maximum well into recovery (Hamilton et al., 1998; Greenspan and Hamilton, 2000). Several studies, however, have suggested that the Dst index contains contributions from many sources other than the azimuthally symmetric ring current (Campbell, 1973; Arykov and Maltsev, 1993; Maltsev et al., 1996; Alexeev et al., 1996; Kalegaev et al., 1998; Dremukhina et al., 1999; Greenspan and Hamilton, 2000; Turner et al., 2000; Alexeev et al., 2001; Ohtani et al., 2001; Liemohn et al., 2001; Ganushkina et al., 2002, 2004; Tsyganenko et al., 2003).
Experimental investigations of the Dst problem are often based on the Dessler-Parker-Sckopke relation (Dessler and Parker, 1959; Sckopke, 1966), which relates the magnetic field of the ring current at the Earth's center, b_r, to the total energy of the ring current particles, ε_r:

b_r / B_0 = −2ε_r / (3ε_d),    (1)

where ε_d = (1/3) B_0 M_E is the energy of the geomagnetic dipole field above the Earth's surface and B_0 is the geodipole magnetic field at the equator.
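For orientation (our numbers, not the paper's): with B_0 ≈ 3.1×10⁻⁵ T and M_E ≈ 8×10²² A m², ε_d ≈ 8×10¹⁷ J, so a ring current energy of ε_r ≈ 4×10¹⁵ J gives b_r ≈ −(2/3)(4×10¹⁵ / 8×10¹⁷) × 3.1×10⁻⁵ T ≈ −100 nT, the scale of a large storm.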
The ring current contribution to Dst was studied by Greenspan and Hamilton (2000), based on AMPTE/CCE ring current particle measurements in the equatorial plane for 80 magnetic storms from 1984 until 1989. It was shown that the ring current magnetic field obtained from the total ring current energy using the Dessler-Parker-Sckopke relation represents Dst well (especially on the nightside). However, currents other than the ring current can produce significant magnetic perturbations of different signs at the Earth's surface, so that their total magnetic perturbation will be about zero.
The tail current contribution to Dst (more exactly, to the SYM−H index) was studied by Ohtani et al. (2001) for the 25-26 June 1998 magnetic storm. Based on GOES 8 measurements and their correlation with Dst, the authors determined the contribution from the tail current at Dst minimum to be at least 25%. It was established that Dst lost 25% of its value after substorm onset due to tail current disruption. The question about the preintensification level of the tail current magnetic field, which continues to contribute to Dst after substorm dipolarization, remains open. Thus, based only on the measurements, we cannot explicitly distinguish between the contributions from the different magnetospheric current systems which contribute to the ground magnetic field. However, we can estimate them by using modern magnetospheric models, which provide separate calculations of the magnetic field of the different magnetospheric magnetic field sources. Magnetic field modelling is a useful tool for studying the evolution of large-scale current systems during magnetic storms.
The empirical models developed by Tsyganenko (for example, T96 (Tsyganenko, 1995) and earlier versions) are constructed by minimizing the RMS deviation from a large magnetospheric database (Fairfield et al., 1994), which contains magnetospheric magnetic field measurements accumulated over many years. As magnetic storms are relatively rare events during the observation period, their influence on the model coefficients is small. The applicability of the T96 model is limited to 20 nT > Dst > −100 nT, 0.5 nPa < P_sw < 10 nPa, and −10 nT < IMF B_z < 10 nT. The version T01 (Tsyganenko, 2002a, b) was developed using a larger database which also includes measurements made in recent years, and it is valid over a wider range of parameter values.
The existing theoretical models determine the magnetospheric magnetic field from physical constraints. The paraboloid model of the Earth's magnetosphere (Alexeev, 1978; Alexeev et al., 1996; Alexeev et al., 2001) is based on an analytical solution of the Laplace equation for each large-scale current system in a magnetosphere of fixed shape (a paraboloid of revolution). The paraboloid model takes the parameters of the magnetospheric current systems (intensities and locations) as input. These input parameters are determined from empirical data using submodels. This feature allows the paraboloid model parameterization to be changed easily.
Several types of studies require an accurate representation of the magnetospheric configuration during a specific event. For such cases, event-oriented modelling is of key importance (Ganushkina et al., 2002, 2004). Event-oriented models contain free parameters whose values are evaluated from observations for each time period separately.
The main focus of this paper is the relation between the ring current and the tail current during storm times. To study this we use three different magnetic field models: the paraboloid model (Alexeev, 1978; Alexeev et al., 2001), the event-oriented model (Ganushkina et al., 2002), and the T01 model (Tsyganenko, 2002a, b). To investigate the tail current/ring current relationship we model two storm events, one moderate storm on 25-26 June 1998, when Dst reached −120 nT, and one intense storm on 21-23 October 1999, in which Dst dropped to −250 nT. Comparison of the magnetic field given by the different models with satellite data allows us to verify the different modelling approaches and their reliability for magnetospheric studies during disturbed conditions. We compute the relative contributions from the ring, magnetotail and magnetopause currents to the Dst index using all three models. Long modelling periods for each storm allow us to examine and compare the long-term evolution of the different current systems during storms of different intensity, as given by models based on the different approaches.
Description of storm events
On 25 June 1998 the IMF B_z behavior (Fig. 1a) reflected the passage of a magnetic cloud: a southward turn at 15:50 UT, when B_z reached −13 nT, followed by a sudden jump to more than +15 nT around 23:00 UT. At 24:00 UT B_z decreased rapidly to −5 nT and began a new, slower enhancement to a level of about 10 nT, which was approached at 05:00 UT on 26 June. The solar wind dynamic pressure had several peaks around 20-30 nPa. The AE index showed a first increase at about 23:00 UT on 25 June, but the maximum substorm activity was detected during 02:00-04:00 UT on 26 June, with a peak value of 1400 nT around 02:55 UT. The Dst index started to decrease at the beginning of 26 June and reached −120 nT around 05:00 UT, six hours after the first northward B_z reversal occurred, following a long period of substorm activity during which the IMF B_z demonstrated relatively slow growth from −5 nT to +10 nT. A detailed analysis and interpretation of this interesting phenomenon was made by Ohtani et al. (2001).
Figure 1b shows an overview of the intense storm on 21-23 October 1999. The IMF B_z turned from +20 nT to −20 nT at about 23:50 UT on 21 October and, after some increase during the next three hours, dropped down to −30 nT around 06:00 UT on 22 October. After that, the IMF B_z oscillated around zero. The solar wind dynamic pressure showed two main peaks, a 15 nPa peak around 24:00 UT on 21 October and a 35 nPa peak around 07:00 UT on 22 October. There were several peaks in the AE index reaching 800-1600 nT. The Dst index dropped to −230 nT at 06:00-07:00 UT on 22 October.
Storm-time magnetic field models
Paraboloid model
The basic equations of the paraboloid model represent the magnetic fields of the ring current, of the tail current including its closure currents on the magnetopause, of the Region 1 field-aligned currents, of the magnetopause currents screening the dipole field, and of the magnetopause currents screening the ring current (Alexeev, 1978; Alexeev et al., 1996; Alexeev et al., 2001). Here we discuss the latest version of the model, A2000 (Alexeev et al., 2001). In the A2000 model (as in the previous versions of the paraboloid model) the magnetopause is set to be a paraboloid of revolution, and the condition B_n = 0 is assumed at the magnetopause. The model parameters determining the large-scale magnetospheric current systems are the following: the geomagnetic dipole tilt angle ψ, the magnetopause stand-off distance R_1, the distance to the inner edge of the tail current sheet R_2, the magnetic flux through the tail lobes Φ_∞, the ring current magnetic field at the Earth's center b_r, and the maximum intensity of the field-aligned current I. At each moment the parameters of the magnetospheric current systems define the instantaneous state of the magnetosphere and can be determined from observations.
The A2000 model parameterization is described in detail by Alexeev et al. (2001). The geocentric distance R_1 to the subsolar point is calculated using solar wind data: the solar wind dynamic pressure and the IMF B_z component (Shue et al., 1997). The distance to the inner edge of the tail current sheet R_2 is obtained by mapping the equatorward boundary of the auroral oval at midnight, ϕ_n = 74.9° − 8.6 log10(−Dst), as given by Starkov (1993), to the equatorial plane. The magnetic flux across the tail lobe is a sum of two terms, Φ_∞ = Φ_0 + Φ_s, which depend on the tail current density, R_1 and R_2. The first term corresponds to a slow adiabatic evolution of the tail current due to solar wind variations and remains constant (Φ_0 = 3.7×10⁸ Wb), while the second term, Φ_s, is associated with substorms. The Φ_s variations represent the integrated substorm activity, dependent on the hourly-averaged AL index (see Alexeev et al., 2001).
According to Burton et al. (1975) and the Dessler-Parker-Sckopke relation (1), the ring current magnetic field variation at the Earth's center is given by

db_r/dt = F(E) − b_r/τ,

where F(E) is the injection function defined in accordance with Burton et al. (1975) and O'Brien and McPherron (2000), and τ is the lifetime of the ring current particles. Burton et al. (1975) and O'Brien and McPherron (2000) found average values of the amplitude of the injection function (d in the notation of Burton et al., 1975; O'Brien and McPherron, 2000), but apparently it varies from storm to storm. In Alexeev et al. (2001) d was obtained from independent research by Jordanova et al. (1999). In these case studies we will find the d which provides the minimum RMS deviation between Dst and the modelled Dst. In such an approach b_r will include not only a contribution from the symmetrical ring current but also the symmetrical magnetic fields from the other magnetospheric magnetic field sources which are not included in A2000. First of all, this is the symmetrical part of the partial ring current magnetic field.
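A minimal numerical sketch of this ring current evolution equation, using an O'Brien-McPherron style driver as one plausible concrete form for F(E). The coupling constants d, e_c and tau below are illustrative placeholders (the paper fits d per storm), and vbs stands for the solar wind electric field VB_s in mV/m.

```python
import numpy as np

def integrate_br(t_h, vbs, d=-3.0, e_c=0.5, tau=8.0):
    """Euler integration of db_r/dt = F(E) - b_r/tau on an hourly grid.
    vbs : solar wind electric field VBs (mV/m)
    d   : injection amplitude, nT/h per mV/m (fitted per storm in the paper)
    e_c : injection cutoff (mV/m);  tau : ring current decay time (h)."""
    br = np.zeros_like(t_h, dtype=float)
    for i in range(1, len(t_h)):
        dt = t_h[i] - t_h[i - 1]
        f = d * (vbs[i - 1] - e_c) if vbs[i - 1] > e_c else 0.0   # F(E)
        br[i] = br[i - 1] + dt * (f - br[i - 1] / tau)
    return br

# Example: 12 h of strong driving (VBs = 4 mV/m) followed by recovery;
# with these numbers b_r reaches roughly -65 nT before decaying with tau.
t = np.arange(0.0, 72.0, 1.0)
br = integrate_br(t, np.where(t < 12.0, 4.0, 0.0))
```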
I is determined from the IMF B_z component and the solar wind velocity and density, as described by Alexeev and Feldstein (2001).
As a result, A2000 allows one to calculate the magnetic field from the parameters of the magnetospheric current systems described above, which can be obtained from the input data: date, IMF, solar wind density and velocity, and the AL and Dst indices.
Event-oriented model by Ganushkina et al.
The Ganushkina et al. (2002, 2004) storm-time magnetic field model (G2003) used the Tsyganenko T89 magnetic field model (Tsyganenko, 1989) as a baseline, and the ring, tail and magnetopause currents were modified to give a good fit with in-situ observations.
The ring current model consists of symmetric and asymmetric parts (Ganushkina et al., 2004), represented by a Gaussian distribution of the current density. The total current density of the symmetric ring current is a sum of eastward and westward current intensities. The asymmetric partial ring current is closed by field-aligned currents flowing from the ionosphere at dawn and into the ionosphere at dusk, in the Region 2 current sense. The magnetic field from this current system is calculated using the Biot-Savart law. For the tail current system, both a global intensification of the tail current sheet and local changes in a thin current sheet were implemented (Ganushkina et al., 2004). To adjust for the magnetopause inward motion during increased solar wind dynamic pressure, the magnetic field of the Chapman-Ferraro currents of T89, B_CF, at the magnetopause was scaled using the solar wind dynamic pressure.
The free parameters in the model are the radial distances of the westward ring current (R_0west) and partial ring current (R_0part), the maximum current densities of the westward (J_0west) and partial (J_0part) ring currents, the amplification factor for the tail current (ATS), and the additional thin current sheet intensity (A_ntc). By varying the free parameters we found the set of parameters that gives the best fit between the model and the in-situ magnetic field observations. The details of the fitting procedure can be found in Ganushkina et al. (2002).
Tsyganenko T01 model
In the T01 model (Tsyganenko, 2002a, b) the general approach is to parameterize the current systems and to evaluate these parameter values in a statistical sense, using a large magnetospheric database. Several revisions were introduced in the mathematical description of the major sources of the magnetospheric field and in their parameterization with respect to the earlier T96 model (Tsyganenko, 1995). A partial ring current with field-aligned closure currents is included, and the cross-tail current sheet is warped in two dimensions in response to the geodipole tilt, with its inner edge shifting along the Sun-Earth line and its thickness varying along and across the tail. The magnetopause is specified according to the empirical model by Shue et al. (1997).
The model parameters are the geodipole tilt angle, the IMF B_y and B_z components, the solar wind dynamic pressure, and the Dst index. An attempt is made to take into account the prehistory of the solar wind by introducing two functions, G_1 and G_2, that depend on the IMF B_z and solar wind velocity and their time history.
Comparison of modelling results: magnetic field
To contrast and examine the reliability of the three models, we present here a comparison of the model results with magnetic measurements from various spacecraft during the June 1998 and October 1999 storms. We calculate the magnetic field along the orbits of spacecraft located in different regions of space: geostationary orbit (GOES 8, 9 and 10), the near-Earth tail (Geotail), and the high-latitude magnetosphere (Polar). Analysis of simultaneous measurements in the different magnetospheric regions helps to determine the role of the different magnetospheric current systems during magnetic storms.
Figure 2 shows the orbits in the noon-midnight meridional (upper panels) and equatorial (lower panels) planes of the satellites GOES 8 (red curve), GOES 9 or 10 (blue curve), Polar (green curve), and Geotail (pink curve), during the time periods when the magnetic field data were used for modelling the storm events on (a) 25-26 June 1998, and (b) 21-23 October 1999. All measurements were made inside the magnetosphere.
Figure 3 shows the B_x and B_z components of the external magnetic field obtained from observations (thin lines) and from the A2000 model (thick lines) for GOES 8 (two upper panels), GOES 9 and GOES 10 (next two panels), Polar (next two panels) and Geotail (bottom two panels) for (a) the 25-26 June 1998 and (b) the 21-23 October 1999 storm events. Dashed grid lines show the noon locations for the GOES spacecraft and the perigees of the Polar orbit. Figures 4 and 5 show the observed and model magnetic fields in the same format for the event-oriented model G2003 and the Tsyganenko T01 model, respectively. The measured B_x and B_z components represent the main changes in the magnetospheric current systems, and their comparison with the model results reveals the main features of the models.
It can be seen that generally all models show quite good agreement with observations. For the moderate storm the B_x measured at geosynchronous orbit is better represented by the A2000 and T01 models, whereas the G2003 model gives a more accurate reproduction of the B_z component. The large observed B_x values imply the existence of intense currents, which can be either field-aligned or perpendicular, or an even stronger compression of the magnetosphere than that represented by the magnetopause current intensification.
Table 1. The RMS deviations (in nT) between the observed and modelled magnetic field calculated by the paraboloid (A2000; Alexeev et al., 2001), event-oriented (G2003; Ganushkina et al., 2003), and Tsyganenko (T01; Tsyganenko, 2002a, b) models.
In general, all three models show approximately similar accuracy in the representation of the magnetic field data observed by Polar. The G2003 model magnetic field agrees with the field observed at Geotail (from 00:20 UT on 25 June until 18:00 UT on 26 June, while the spacecraft was inside the magnetosphere) slightly better than that given by the A2000 and T01 models, with additional discrepancies (e.g. B_x drops) that arise from the construction of the tail current model discussed above. However, for both storm events the B_x components are described with reasonable accuracy at GOES 8 and GOES 10, as well as at Polar. Table 1 shows the RMS deviations between the satellite measurements and model calculations, determined as δB = √[(1/N) Σ_{i=1}^{N} (B_obs − B_model)²]. The obtained discrepancies are calculated over the whole considered time intervals and include quiet as well as disturbed periods. We note that for each orbit the models give an accuracy of about half of the average value of the magnetic field. In general, all models represent well the global variations of the magnetospheric magnetic field measured by the spacecraft. However, the model features determine the specific behavior of the magnetic field calculated in the different magnetospheric regions by the different models during the different phases of the considered magnetic storms.
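The RMS deviation used for Table 1 is the standard one; for completeness, a one-function Python equivalent (the array names are ours):

```python
def rms_deviation(b_obs, b_model):
    """RMS deviation (nT) between observed and modelled field samples."""
    return float(np.sqrt(np.mean((np.asarray(b_obs) - np.asarray(b_model)) ** 2)))
```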
The paraboloid model reproduces well the B_x components of the magnetic field measured along the GOES and Polar orbits for any level of disturbance, but underestimates the B_z depression, due to tail current model features and possibly due to the absence of a partial ring current model in A2000. The T01 model also provides good agreement between the observed and modelled B_x component. On the other hand, during the intense storm maximum, the model B_z is significantly more depressed than that observed along the GOES and Polar orbits. Because the ring current cannot give a significant contribution to the magnetic field at geostationary orbit, we propose that this discrepancy is due to an overestimation of the tail current contribution. Apparently, this is a consequence of the general approach used in the development of any empirical model: the calculation results are very sensitive to the database used for the model construction, and intense storms are only a small part of such databases. As a result, it is precisely during extremely disturbed conditions that the empirical model shows substantial discrepancies. The event-oriented model G2003 better represents the substorm-associated variations of the B_z component at geosynchronous orbit during both moderate and intense storms, but gives discrepancies in the B_x variation during storm maximum.
Model calculations of D st index
In this study, along with Alexeev et al. (2001), we suggest that the magnetopause, tail and ring currents are the main contributors to the Dst index. Although the models considered above are also able to calculate the magnetic field from the other magnetospheric currents (see Sect. 3), their contributions to Dst are not addressed in this study.
The storm-time magnetic field depression at the Earth's surface is determined mainly by the ring current, the tail current and the partial ring current. However, their relative strengths and locations in the inner magnetosphere remain ambiguous, and it is difficult to separate in the measurements the partial ring current from the storm-time tail and symmetrical ring currents. Obviously, the magnetic field of the partial ring current has a symmetrical part which contributes to the Dst index. Different estimates of the effect of the partial ring current on Dst were obtained by Liemohn et al. (2001), as the dominant contribution during the magnetic storm main phase, and by Tsyganenko et al. (2003), as about 1/7 of the total ring current contribution during storm maximum. Because the question of the partial ring current contribution to Dst requires special consideration, it will not be the subject of this paper. Along with Ganushkina et al. (2002), we propose in our calculations that the partial ring current produces a part of the total ring current magnetic field variation measured at the Earth's surface. It is, in fact, included in the ring current magnetic field calculated in terms of the G2003 and T01 models.
Moreover, the partial ring current is not included in the A2000 model; possibly, this is the reason for the discrepancies found in the comparison between the model calculations and the data measured along the spacecraft orbits. However, the symmetrical part of its magnetic field is included in the ring current magnetic field through the approach used for the b_r calculation (see Sect. 3.1). Thus A2000 allows one to calculate the total symmetrical ring current magnetic field (originating from both the symmetrical and the partial ring current) as well as the total ring current contribution to Dst. Earlier studies have given different relative contributions from the magnetospheric current systems to the Dst index, and these differences can be very large: the tail current contribution to Dst was ~25% in a study by Turner et al. (2000), while the tail current contribution was comparable to the Dst in Alexeev et al. (2001) for the same event on 9-12 January 1997. In the present paper we calculate the storm-time variations of the magnetopause, ring and tail currents at the Earth's surface. The contribution of the ground induced currents to the measured perturbation field is assumed to be 30% of the magnetic perturbation at the Earth's surface (Häkkinen et al., 2002). The magnetic field horizontal components, ΔH(t), were computed from the external current systems at the locations of six near-equatorial stations, among them San Juan (geomagnetic latitude 29.9) and Del Rio (39.0, 324.1). Then, the quietest day of the month was determined using the World Data Center catalogue, and the magnetic field variation during this quiet day, H_q(t), was calculated from the model. The model Dst (SYM−H) is then

Dst(t) = (1/N) Σ_{i=1}^{N} [H_i(t) − H_{q,i}(t)] / cos θ_i,

where N is the number of stations (6) and θ_i represents the magnetic latitudes of the stations. This procedure was repeated for the total Dst and for the contributions from the different current systems. This method of Dst computation is similar to the official procedure described by Sugiura and Kamei (1991). It allows us to unambiguously derive the Dst variations arising from changes in the magnetospheric current systems in the various models. The average quiet-time fields for the June 1998 and October 1999 storms were −0.58 nT and 2.74 nT, respectively. Figure 6 shows an analysis of the model current contributions to the quiet-time Dst index for 17 June 1998 (left) and 20 October 1999 (right), using (a) the A2000 paraboloid model, (b) the G2003 event-oriented model, and (c) the T01 model, respectively. The ground-induced currents' effect (30% of the variation) was taken into account in all the calculations.
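A compact sketch of this station-averaging step (our Python, not the paper's code). Only the San Juan (29.9) and Del Rio (39.0) magnetic latitudes come from the text; the remaining four values are placeholders.

```python
THETA = np.radians([29.9, 39.0, 21.0, -3.9, 28.8, 33.3])   # station magnetic
                                                           # latitudes (deg)

def model_dst(h_dist, h_quiet):
    """Paper-style Dst: quiet-day-corrected horizontal perturbation,
    averaged over stations and normalised by cos(magnetic latitude).
    h_dist, h_quiet : arrays of shape (n_stations, n_times), in nT."""
    dh = h_dist - h_quiet                    # subtract quiet-day variation
    return np.mean(dh / np.cos(THETA)[:, None], axis=0)
```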
We can see that the amplitudes of the calculated variations are about 8-10 nT for all the models (see the bottom panels), but the average values are different. The average quiet day magnetic field variations computed from the A2000 and G2003 models are close to zero: about −5 nT for both events in terms of the A2000 model and about 0 nT and 2.5 nT in terms of the G2003 model. Thus, the magnetic field variation calculated at the Earth's surface by these models during disturbed conditions can be taken as Dst. However, the contributions from the individual current systems to Dst are, of course, not zero. Unlike the A2000 and G2003 models, the T01 model gives a quiet day magnetic field variation of about −20 nT. Subtracting this value from the ground magnetic field variation during disturbed conditions is an important step in the Dst calculations by the T01 model.
It is important to note that the different quiet-time levels are features of the models and are possibly not connected with the real quiet-level magnetic field. In particular, it seems that the large quiet-time field in the T01 model is caused by a relatively small number of measurements in the inner magnetosphere in the database used for the T01 construction (Tsyganenko, 2002a, b). The question of the real quiet-time magnetic field level at the Earth's surface remains open for now (see Greenspan and Hamilton, 2000).
Figure 7 shows the model contributions and the total Dst during the 25-26 June 1998 and 21-23 October 1999 storm events in the same format as in Fig. 6. The quiet-time level and the quiet-time contributions from the different current systems are subtracted from the model magnetic field variations. In general, all three models provide a Dst which is in good agreement with the observed Dst index.
During the moderate storm on 25-26 June 1998, the A2000 and G2003 models show that the tail current begins to develop before the ring current, and that the tail current decay begins earlier than that of the ring current. Its contribution to the Dst index almost follows the drop in the total Dst.
The tail current in the T01 model develops even before Dst starts to decrease. During the storm main phase all models show that the tail and ring currents have comparable contributions to Dst. During the recovery phase the ring current remains more enhanced than the tail current according to the A2000 and G2003 models, although the G2003 model gives an even larger tail current contribution than the A2000 model. The ring current in the T01 model recovers rapidly, and the tail current remains at an enhanced level almost until the end of the storm recovery.
The situation is quite different during the intense storm on 21-23 October 1999. In all three models the tail current develops first, when Dst begins to decrease, in a manner similar to the tail current behavior during the moderate storm. During the storm maximum the ring current is the dominant contributor to the Dst index in the A2000 and G2003 models. In the T01 model the tail current continues its development until the storm maximum and gives the major contribution to the Dst index, whereas the ring current contributes only about one third of the tail current contribution. During the recovery phase the tail current contribution decreases and becomes comparable to the ring current contribution.
The tail current contribution to the Dst index computed from the A2000 and G2003 models changes during the magnetic storm. It correlates with substorm activity, and approaches its maximum during the substorm maximum as estimated from the AE enhancement. On the other hand, the ring current correlates with the total Dst, and its maximum tends to be near the Dst maximum. During the moderate storm, the maximum tail and ring current contributions to Dst were about 70% and 50% of the maximum Dst in the A2000 model, 85% and 50% in the G2003 model, and 50% and 50% in the T01 model. During the intense storm the maximum tail and ring current contributions were, respectively, about 50% and 90% for A2000, 70% and 90% for G2003, and 100% and 40% for T01 (note that the Dst sources reach their maxima at different UTs). The ring current contribution is determined by the injection intensity. The amplitude of the injection function F(E) calculated in A2000 for the magnetic storm on 21-23 October 1999, d = −3.8 nT/h (mV/m)⁻¹, exceeds in absolute value the d = −2.8 nT/h (mV/m)⁻¹ calculated for the 25-26 June 1998 magnetic storm. It seems reasonable to propose that a stronger storm corresponds to a stronger ring current injection and a larger (in absolute value) injection amplitude; however, this conclusion requires a more detailed statistical consideration.
In general, all the models confirm the assumption that the tail current magnetic field can be sufficiently large to provide a significant contribution to the Dst variation (Alexeev et al., 1996). However, the global A2000, G2003 and T01 models demonstrate different tail current development during magnetic storms. While during the moderate storm the tail current and ring current have approximately equal maximum contributions to Dst, during the strong magnetic storm the models reveal different behavior: the tail current becomes the major contributor to Dst in the T01 model, while the tail current contribution is smaller than that of the ring current in the A2000 and G2003 models.
The total Dst computed from the T01 model differs significantly from the measured Dst during the main phase of the magnetic storm. Comparison with GOES 8 and GOES 10 data also shows that the model B_z is much smaller than the observed one during the 21-23 October 1999 magnetic storm maximum. Because the ring current magnetic field at geosynchronous orbit is small, the discrepancies in Dst and in B_z along the GOES orbit are probably caused by the strong intensification of the tail current in the model. The T01 model represents well Dst and the spacecraft measurements during moderate magnetic storms, but does not match Dst during the intense magnetic storm maximum. This is a known limitation of empirical models based on satellite measurement data. Possibly the latest Tsyganenko model (Tsyganenko et al., 2003), which is based on storm-time data, allows one to obtain more realistic results during strongly disturbed conditions.
The event-oriented G2003 model, which is also based on empirical data, gives excellent results in reproducing Dst, as it uses measurements obtained during the very magnetic storm which is modelled. This highlights the complexity of the magnetospheric response to the solar wind driving, and the consequent need for event-oriented modelling.
Discussion
Three magnetospheric models based on very different approaches (theoretical, empirical and event-oriented) were used in our calculations of the magnetic field. Solar wind data and geomagnetic indices are used as input for the theoretical A2000 and empirical T01 models, while measurements made inside the magnetosphere form the basis of the G2003 model. The models have different parameterizations, but we used a unified procedure for the Dst and Dst-source calculations in terms of all the models, corresponding to the official procedure of Dst derivation from ground measurements. This procedure includes subtraction of the quietest-day effect and takes into account the magnetic field produced by the Earth's induced currents. Such an approach enables unambiguous determination and accurate comparison of the Dst contributions produced by the magnetospheric current systems in terms of the A2000, G2003 and T01 models.
In this paper we are interested in the relation between the ring and tail currents. We assume that the ring current magnetic field includes a contribution from the symmetrical ring current as well as the longitudinally averaged part of the partial ring current magnetic field. In fact, the ring current includes symmetrical and asymmetrical parts in T01 and G2003, while the symmetrical part of the partial ring current is included in the ring current model in A2000. The ring current (including the partial ring current), tail current and magnetopause currents are proposed to be the main contributors to the Dst index. The models of these currents used in A2000, T01 and G2003 were described in detail in Alexeev et al. (1996, 2001), Tsyganenko (2002a, b) and Ganushkina et al. (2002, 2004). They satisfactorily reflect the main features of the observed current systems but have slightly different geometries and depend on different parameters. For example, the tail current system represented by the models consists of cross-tail currents and closure currents on the magnetopause. The different tail current geometries play a significant role in the magnetic field calculation near the tail current sheet (see the comparison with Geotail measurements, Sect. 4) but hardly influence the magnetic field variations at the Earth's surface. On the other hand, the tail current intensity, as well as the geocentric distance to the tail current inner edge, strongly determine the Dst dynamics during the magnetic storm. During storm maximum the tail current is located close to the Earth and becomes sensitive to the solar wind dynamic pressure, the IMF, and the flux content of the tail. We would therefore expect that the parameters of the tail current, and consequently its effect on the Dst index, are controlled by factors originating in the solar wind and the magnetosphere. The dependence of the model parameters on the external factors (e.g. measured solar wind data) determines the model parameterization. We can see from our calculations that the differences in the parameterization of the models produce the main differences between the Dst calculated by the A2000, G2003 and T01 models.
In spite of the models' different parameterizations, the results obtained by all the models show that the tail current plays a significant role in magnetic storm development. Computations of the tail current contribution to Dst using the A2000, G2003 and T01 models show that it can approach values comparable to the ring current contribution during storm maximum. The calculations show that (1) the relationship between the tail and ring currents depends on magnetic storm intensity, and (2) this relationship changes during the course of magnetic storm development.
It was shown that the theoretical A2000 and event-oriented G2003 models give a tail current contribution to Dst comparable with the ring current contribution during a moderate storm, but that the ring current becomes the dominant contributor during an intense storm (see also Ganushkina et al., 2004). Although we did not analyze the substorm-related processes, we can conclude that the level of substorm activity influences the value of the tail current contribution to Dst. We suggest that the tail current can produce its maximum contribution to Dst for moderate storms, while the ring current remains as yet undeveloped. During severe storms, the ring current continues to develop while the tail current has already approached its maximum values. In particular, we can see that the hourly AL index can approach approximately the same maximum values during both moderate and intense storms. The magnetic flux through the polar cap, calculated by the paraboloid model (see Sect. 3.1), as well as the polar cap area, depend strongly on the level of substorm activity and do not demonstrate significant growth during intense storms in comparison with moderate ones. On the other hand, the stronger injection amplitude was calculated during the intense magnetic storm in October 1999.
Detailed investigation of the tail and ring current dynamics with the A2000 and G2003 models shows that the tail current contribution to Dst (as well as that of the other magnetospheric currents) varies during a magnetic storm. Both models show similar behavior of the Dst sources: the tail current begins to develop earlier than the ring current and starts to decay while the ring current continues to develop. The global changes of the magnetotail during the magnetic storm are controlled mostly by the solar wind and the IMF, but are accompanied by sharp variations associated with substorms. The G2003 model (Ganushkina et al., 2002, 2004) reproduces the tail current development, which correlates well with the substorm-associated AE index. A clear correlation of the tail current contribution to Dst with substorm activity is also apparent in the results obtained from the A2000 model.
The magnetic field sources contributing to Dst are controlled by different factors originating in the solar wind, as well as in the magnetosphere, which change nonsynchronously, with different time scales, and consequently determine the complicated dynamics of Dst. Abrupt changes in Dst can be caused either by magnetopause currents, in accordance with the IMF and solar wind dynamic pressure pulses, or by tail current variations during substorms. The tail current disruption following substorm onset often influences the Dst recovery (Iyemori and Rao, 1996; Kalegaev et al., 2001). In line with the results of Ohtani et al. (2001), the substorm-related activity during 02:00-04:00 UT on 26 June 1998 resulted in a Dst decay of 30 nT after the substorm onset. Both the A2000 and G2003 models reveal such a Dst drop, while the ring current continued to develop. The tail current contribution remaining after the substorm maximum is calculated to be about −40 nT in the A2000 model and about −50 nT in the G2003 model.
Conclusions
This study addresses the relation between the ring current and the tail current during storm times. Three different magnetic field models, the paraboloid model A2000 by Alexeev (1978) and Alexeev et al. (2001), the event-oriented model G2003 by Ganushkina et al. (2002, 2004), and the T01 model by Tsyganenko (2002a, b), were used to model two storm events. One storm event was moderate, with Dst = −120 nT, and the other was an intense storm with Dst = −250 nT.
In general, all models showed quite good agreement with in-situ observations. The event-oriented model G2003 represented best the substorm-associated variations of the B_z component at and near geosynchronous orbit during both moderate and intense storms. The T01 model provided good agreement between the observed and modelled B_x component, but on the other hand the model B_z was significantly more depressed than that observed during the intense storm. Similarly, the A2000 model reproduces well the B_x components of the magnetic field measured along the GOES and Polar orbits.
The A2000, G2003 and T01 models showed that during the moderate storm the tail and ring current contributions are comparable. All three models showed that the tail current develops before the ring current when Dst starts to decrease. During the recovery phase the ring current stays more enhanced than the tail current, according to the A2000 and G2003 model results. The ring current in the T01 model recovers quickly, and the tail current remains at an enhanced level almost until the end of the storm recovery.
As in the moderate storm, during the intense storm the tail current in all three models developed first, when Dst started to decrease. During the storm maximum the ring current was the dominant contributor to the Dst index in the A2000 and G2003 models. During the early recovery phase the ring current stayed intensified longer than the tail current, becoming comparable to the tail current intensity during the late recovery. In the T01 model the tail current continued to intensify until the storm maximum and gave the largest contribution to the Dst index; during the early recovery phase the T01 tail current contribution decreased rapidly and became comparable to the ring current. Unlike the moderate storm, in which the theoretical A2000 and event-oriented G2003 models give a tail current contribution to Dst comparable with the ring current contribution, during the intense storm the ring current becomes the dominant contributor.
The tail current dynamics in the A2000 and G2003 models correlates well with substorm activity. The tail current enhancement during a substorm precedes the Dst recovery, but the ring current continues to develop after the substorm maximum. In agreement with Ohtani et al. (2001), the tail current is responsible for a Dst increase of about 30 nT. According to the A2000 and G2003 models, the tail current preintensification level is about −40 to −50 nT.
Magnetic field modelling is a very useful tool not only for the accurate representation of the magnetic field, but also for studies of the evolution of the large-scale current systems. Global models represent well the main features of the magnetospheric magnetic field, but show some discrepancies in representing local magnetic field features. For such cases, event-oriented modelling can be used to improve the accuracy of calculations for specific events.
Figure 1. Overview of the measurements during the magnetic storms on 25-26 June 1998 and 21-23 October 1999. The solar wind data and IMF were obtained from the Wind spacecraft, taking into account a convection time shift of about 40 min.
Fig. 2. Evolution of the orbits of the satellites during the time periods when the magnetic field data were used for modelling the storm events on (a) 25-26 June 1998, and (b) 21-23 October 1999.
Fig. 3. Comparison of the observed Bx and Bz components of the external magnetic field in GSM coordinates (thin lines) with A2000 model results (thick lines) for GOES 8 (two upper panels), GOES 9 and GOES 10 (next two panels), Polar (next two panels) and Geotail (bottom two panels), for the (a) 25-26 June 1998 and (b) 21-23 October 1999 storm events.
Fig. 4. Observed and model magnetic fields in the same format as in Fig. 3 for the event-oriented model G2003.

Fig. 5.
Fig. 6. Dst index (black) and the model contributions to the quiet-time magnetic field at the Earth's equator from the magnetopause current (green), ring current (red) and tail current (blue) (top panel), together with the total observed Dst (black) and the modelled quiet-day variation, δH_q (purple) (bottom panel), for 17 June 1998 (left) and 20 October 1999 (right), using the (a) A2000 paraboloid model, (b) G2003 event-oriented model, and (c) T01 model, respectively.
Fig. 7. Model contributions to Dst and total Dst during the 25-26 June 1998 and 21-23 October 1999 storm events, in the same format as in Fig. 5. The quiet-time contributions from the different current systems are subtracted from the model magnetic field variations.
The A2000 model represents the magnetopause size variations, depending not only on solar wind pressure but also on IMF Bz, based on the Shue et al. (1997) model. The A2000 describes the Bx values during the magnetic storm main phase (the first 6 h of 26 June 1998) more accurately than the other models. On the other hand, the A2000 model underestimates the Bz values during this time interval. This is because the paraboloid model represents the cross-tail currents as a discontinuity between the oppositely directed magnetic field bundles in the southern and northern tail lobes and, as a result, gives a very small Bz component in the vicinity of the tail current.
"year": 2005,
"sha1": "34a8216f9662ac374b32bb5b8aa1bd3cd6712c26",
"oa_license": "CCBY",
"oa_url": "https://angeo.copernicus.org/articles/23/523/2005/angeo-23-523-2005.pdf",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "34a8216f9662ac374b32bb5b8aa1bd3cd6712c26",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
A new aromatic ester from the mangrove plant Lumnitzera racemosa Willd
Chemical examination of the Indian mangrove plant Lumnitzera racemosa Willd has resulted in the isolation of a new aromatic ester, besides the known triterpenoids friedelin, betulin and betulinic acid. The structure of the new compound was established as 3-(4-hydroxyphenyl)-propyl-3'-(3,4-dihydroxyphenyl)-propionate by a study of its spectral data.
Introduction
Lumnitzera racemosa Willd (Fam: Combretaceae) is a handsome shrub or a small tree found on the coast of India and on the Andaman and Nicobar Islands. The wood of L. racemosa is used as a fuel for its calorific value, and the leaves of the plant are eaten on the South Pacific islands during periods of scarcity. The reddish brown bark contains 15-19% tannin, while the leaves and wood contain smaller quantities. A fluid obtained from incisions made in the stem was reported to be employed as an external application for the treatment of herpes and itches 1. Antihypertensive activity has recently been reported for the aqueous acetone extract of the plant 2. Chemical examination of this plant occurring in various parts of the world was reported to give a large number of compounds: long-chain rubber-like polyisoprenoid alcohols in the leaves 3, flavonoids and long-chain fatty acids 4, and low molecular weight carbohydrates 5. Chemical examination of the Indian species was reported to give friedelin, β-amyrin, taraxerol, betulin, β-sitosterol and triacontanol 6. The presence of trace elements was also reported 7. In our continuing interest in the chemical constituents of Indian mangrove plants [8][9][10][11][12][13][14][15][16][17], we have examined this species collected from the Bhiravapalem Island in the Godavary estuary, and the results are reported herein.
Results and Discussion
The air-dried and powdered stem of L. racemosa was exhaustively extracted with CH2Cl2:MeOH (1:1). Removal of the solvent from the combined CH2Cl2:MeOH extracts gave a residue which was extracted with EtOAc. Removal of the ethyl acetate under reduced pressure gave a residue which, on repeated chromatographic separations over silica gel columns, furnished a new aromatic ester 1, in addition to the known triterpenoids friedelin, betulin and betulinic acid.
Experimental Section
General experimental procedures. Melting points were determined on a VEB Analytik Dresden HMK hot plate and are uncorrected. IR spectra were recorded on a Perkin-Elmer 841 IR spectrometer in CHCl3 solution. 1H NMR spectra were measured on Bruker Avance DRX 300 and Jeol JNM EX-90 spectrometers. 13C NMR spectra were measured on a Bruker Avance DRX 300 spectrometer at 75 MHz and a Jeol JNM EX-90 spectrometer at 22.5 MHz, using CDCl3 as a solvent and tetramethylsilane as an internal reference. Elemental analyses were determined on a Carlo Erba 1108 instrument. Mass spectra were obtained on a Jeol JMS-300 spectrometer.
Plant material. The stems of Lumnitzera racemosa were collected at the Bhiravapalem Island in the Godavari estuary (16°58′ N latitude and 82°15′ E longitude) in March 1998. The plant material was identified by Prof. B. Kondala Rao, Dept. of Marine Living Resources, Andhra University, and voucher specimens of the material have been kept in the museums of Organic Chemistry, School of Chemistry, Andhra University, and NIO, Goa, as AU1-166.
Extraction and isolation. The air-dried and powdered stem of Lumnitzera racemosa (4 kg) was exhaustively extracted with CH2Cl2:MeOH (1:1) (8 × 8 L). Removal of the solvent from the combined CH2Cl2:MeOH extracts gave a residue (20 g), which was extracted with EtOAc (3 × 500 mL). Removal of the solvent under reduced pressure gave a residue (15 g), which was subjected to column chromatography over a column of silica gel (Acme brand, 100-200 mesh, 400 g) using solvents of increasing polarity from n-hexane through EtOAc. In all, 260 fractions (750 mL each) were collected. The fractions showing similar spots were combined, and the residues therefrom were subjected to chromatography over silica gel or silver nitrate (20%) impregnated silica gel columns to yield four pure compounds, as given below.
Fraction I. The residue (800 mg) from column fractions 95-125 (n-hexane:EtOAc, 8.75:1.25) was rechromatographed over a small column of silica gel using n-hexane and ethyl acetate mixtures as eluant to afford pure compound 1.

Fraction II. The residue (2 g) from column fractions 35-60 (n-hexane:EtOAc, 9.5:5) was chromatographed through a small column of silica gel using n-hexane and ethyl acetate as eluant. Comparison of the physical and spectral data of 4 with the literature values 19,20 of betulinic acid confirmed the characterization.

Alkaline hydrolysis of compound 1. To compound 1 (25 mg) dissolved in methanol (10 mL) was added methanolic KOH (10%, 5 mL), and the mixture was refluxed on a steam bath for 1 hour.
The mixture was diluted with water (20 mL) and then extracted into ether. The ether solution, after evaporation, gave 3-(4-hydroxyphenyl)-1-propanol, m.p. 52 °C, identical with the literature compound 21 (m.p.; 1H NMR taken on a 90 MHz instrument in CDCl3 with TMS as internal standard). The alkaline aqueous solution from the reaction was acidified with dil. H2SO4, and the acid liberated was extracted into ether. The ether solution, after evaporation, left a residue which, on crystallization from chloroform-methanol, gave 3,4-dihydroxydihydrocinnamic acid, m.p. 136-138 °C, identical with the literature compound 22 (m.p.; 1H NMR taken on a 90 MHz instrument in CDCl3 with TMS as internal standard).
"year": 2003,
"sha1": "1fcc0b133c90d79877acd8a8e88d732b75f15000",
"oa_license": "CCBY",
"oa_url": "https://www.arkat-usa.org/get-file/18887/",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "03f0296a0b44f1d6c262d9a0b3100a584cf6de0f",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Assessment of Agricultural Best Management Practices Using Models: Current Issues and Future Perspectives
Best management practices (BMPs) are the most effective and practicable means to control nonpoint source (NPS) pollution at desired levels. Models are valuable tools to assess their effectiveness. Watershed managers need to choose appropriate and effective modelling methods for a given set of conditions. This paper considered state-of-the-art modelling strategies for the assessment of agricultural BMPs. Typical watershed models and specific models were analyzed in detail. Further improvements, including simplified tools, model integration, and incorporation of climate change and uncertainty analysis, were also explored. This paper indicated that modelling methods are strictly scale dependent, both spatially and temporally. Despite current achievements, there is still room for future research, such as broadening the range of the pollutants considered, introducing more local BMPs, improving the representation of the functionality of BMPs, and gathering monitoring data for validation of modelled results. There is also a trend towards agricultural decision support systems (DSSs) for assessing agricultural BMPs, in which models of different scales are seamlessly integrated to bridge the scale and data gaps. This review will assist readers in model selection and development, especially those readers concerned about NPS pollution and water quality control.
Introduction
In the last few decades, nonpoint source (NPS) pollution from agricultural lands has caused major water quality degradation and threatened the safety of water resources worldwide. One of the most crucial issues for protecting water quality is how to effectively control NPS pollution [1,2]. Best management practices (BMPs) are routinely used, yet there has been much concern regarding the efficiency of BMPs in reducing NPS loads [3]. Assessment of BMPs, which has become a thriving area of research, can ensure the most effective use of funding for watershed management and can avoid the implementation of unreasonable practices. Models, which represent the optimal assimilation of physical, chemical, and biological watershed processes, have been proposed to evaluate the impact of BMPs on NPS pollution. Currently, modelling is integrated as a necessary step in watershed management, such as through the Total Maximum Daily Load (TMDL) in the United States and the Water Framework Directive in Europe [4,5].
Models for assessing agricultural BMPs can be divided into various types based on their complexity and scales of application. Watershed models evaluate the hydrologic and water quality response to multiple BMPs at varying scopes and locations. Their application has also been expanded to the basin and regional scales [6]. For example, by incorporating Geographic Information System (GIS) techniques, cropland conversion to forest/grassland as an effective BMP can be easily evaluated by watershed models. In a study of the upper reaches of the Yangtze River by Ouyang et al. [7], the land use scenario assumed that croplands were converted to forests. The results from a watershed model revealed that when agricultural lands with slopes greater than 7.5° were converted to forests, the organic nitrogen and organic phosphorus decreased by 42.1% and 62.7%, respectively. In contrast, several structural BMPs are commonly implemented at the field scale, at which the utility of watershed models is limited [8]. For these widely used BMPs (e.g., filter strips, riparian buffers, and detention ponds), specific assessment models have been developed [9,10]. Site-specific conditions and dimensions of agricultural BMPs are incorporated into these specific models, which are often beyond the capacity of most watershed models.
Watershed models and specific models are both effective tools for agricultural BMP assessment, but they are not always used appropriately. There is still room for model improvements to facilitate the assessment process. In general, the selection of an appropriate approach will greatly influence decision making regarding watershed plans and regulations. There is a large body of published literature on the assessment of agricultural BMPs using models, yet a systematic review conducive to model selection and development is lacking. This review aims to fill that void. Our objectives are to (i) critically review state-of-the-art, model-based assessment methods for agricultural BMPs; (ii) compare commonly used watershed models and specific models based on their strengths and limitations; (iii) discuss model improvements to facilitate the assessment of agricultural BMPs; and (iv) propose several implications for future trends.
Commonly Used Models for Assessing Agricultural Best Management Practices (BMPs)
Based on a thorough literature evaluation, we identified 17 models which have been used for the assessment of agricultural BMPs. They are the Soil and Water Assessment Tool (SWAT) [11], Agricultural Nonpoint Source (AGNPS) [12], Annualized Agricultural Nonpoint Source (AnnAGNPS) [13], Hydrological Simulation Program-FORTRAN (HSPF) [14], Vegetative Filter Strip Model (VFSMOD) [15], Riparian Ecosystem Management Model (REMM) [16], Agricultural Policy/Environmental eXtender (APEX) [17], Groundwater Loading Effects of Agricultural Management Systems (GLEAMS) [18], Generalized Watershed Loading Functions (GWLF) [19], Erosion-Productivity Impact Calculator (EPIC) [20], Pollution Load (PLOAD) [21], Dynamic Watershed Simulation Model (DWSM) [22], Areal Nonpoint Source Watershed Environment Response Simulation (ANSWERS) [23], Water Erosion Prediction Project (WEPP) [24], Universal Soil Loss Equation (USLE) [25], the MIKE SHE/MIKE 11 coupling model [26], and WETLAND [27]. The first six models were chosen as the most representative for this review. The main reason is that they are more frequently used in comparison with other models, as can be recognized by the extent of publication in English. Our selection also ensures that the most common BMPs can be evaluated by the selected models, so that the typical assessment methods cover a range of scales (field to watershed) and scopes (structural and non-structural measures; sediment, nutrient, and pesticide processes) for comparative purposes. Other models were excluded from this review mainly because of the following considerations: (i) similar algorithms and structures in models which, however, have limited application; (ii) inability to interpret nutrient processes; and (iii) lack of ongoing development for the assessment of BMPs. Following these criteria, we selected six models. The first four were further categorized as watershed models and the other two are specific models.
Watershed Models
The SWAT, AGNPS, AnnAGNPS, and HSPF are the four watershed models selected for this review. Table 1 provides a listing from the existing literature of their use for the assessment of agricultural BMPs. Their characteristics, including spatial representation, temporal resolution, and watershed process descriptions under BMP conditions, are briefly summarized in Table 2.
Spatial Scale and Watershed Representation
The way a watershed is discretized determines the basic computational units in which certain types of BMPs are simulated. The appropriate spatial scale at which models operate effectively under BMP conditions can be defined based on a comparison of watershed segmentation methods. On the basis of a Digital Elevation Model (DEM), the SWAT model discretizes a watershed into sub-watersheds and stream reaches based on surface topography. A sub-watershed is further divided into hydrologic response units (HRUs). A typical HRU is comprised of a homogeneous land use, soil attribute, and slope. Runoff, sediment, and contaminant loadings from each HRU are calculated separately, then summed together at the sub-watershed level and routed through reach segments to the basin outlets [48]. Parameters, or inputs related to pollutant removal mechanisms, are lumped in sub-watersheds, HRUs, and reaches, which enables SWAT to evaluate BMPs at the watershed and sub-watershed scales. HRUs may represent the field-level conditions for BMPs, but there is a distinct disconnect between the hydrologic scale of HRUs and the actual fields at which these BMPs are implemented.
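To make the aggregation concrete, here is a minimal sketch of the HRU-to-outlet bookkeeping just described. The data layout, the single delivery ratio, and all numbers are hypothetical simplifications for illustration; SWAT's actual routing is physically based and far more detailed.

```python
# Illustrative SWAT-style aggregation: loadings are computed per HRU,
# summed at the sub-watershed level, and then routed downstream.

subwatersheds = {
    "sub1": [  # each HRU: (area_ha, unit_load_kg_per_ha)
        (120.0, 3.2),   # cropland HRU
        (80.0, 0.4),    # forest HRU
    ],
    "sub2": [
        (200.0, 2.1),
    ],
}

def subwatershed_load(hrus):
    """Sum HRU-level loads (kg) for one sub-watershed."""
    return sum(area * unit_load for area, unit_load in hrus)

# Loads leave each sub-watershed and are routed through the reach
# network; here routing is reduced to a single delivery ratio.
DELIVERY_RATIO = 0.85
outlet = sum(subwatershed_load(h) for h in subwatersheds.values()) * DELIVERY_RATIO
print(f"Load at basin outlet: {outlet:.1f} kg")  # -> 710.6 kg
```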
The AGNPS model divides a watershed into cells of equal size, which are the basic spatial units for simulating BMPs [49]. As an expanded version of AGNPS, AnnAGNPS enhances the previously described discretization method by delineating cells, reaches, and impoundments based on topographic homogeneity, similar to the SWAT model [50]. Cells are almost identical to sub-watersheds in SWAT but have no subdivisions such as HRUs. The HSPF model, as an inherent component of Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) [51], can also divide a watershed into sub-watersheds and reaches. Simulations are conducted within homogeneous land segments called HRUs. The HSPF model can be applied at different scales (ranging from a few hectares up to 128,000 km²) [52] due to its flexible, user-defined definition of HRUs. Moreover, an HRU in the HSPF model is connected to an adjacent HRU or a reach. Runoff and water quality constituents resulting from agricultural BMPs leave each HRU and route laterally to down-slope land segments or streams.
Therefore, the model can simulate the relationship between successive BMPs at smaller scales, which is not possible in the SWAT model [52].
Actually, neither square units (cells in AGNPS) nor irregularly shaped units (cells in AnnAGNPS and HRUs in SWAT and HSPF) can exactly represent the actual positions of BMPs. Ghebremichael et al. [53] suggested that HRUs in the SWAT model may be manually defined based on the spatial field boundary. Alternatively, the topographic, soil, and land use thresholds may be set to 0% to capture the detailed watershed processes. HRUs may be assigned to their original locations while the HRU outputs may be transformed to field-level results [54]. Adjustments to the parameters and inputs related to BMP characteristics under pre-BMP and post-BMP periods are common principles for representing agricultural BMPs for field-scale assessment. In addition, the watershed segmentation strategies discussed above may be inappropriate for BMPs located in special hydrogeological conditions. For example, riparian buffers receive pollutants from upland drainage areas and transfer them to adjacent streams. This interaction mechanism is important for BMP functioning, though it is oversimplified in the watershed models.
Temporal Scale and Resolution
Certain BMPs (e.g., sediment basins and vegetative filter strips) should be designed for single storm events, which is a current requirement of the TMDL [5]. Rainfall intensity, duration, and intra-event variability of flow and pollutants are required to assess models' ability to simulate storm events. Thus, the application of watershed models for assessing BMPs is also limited by temporal resolution, which ranges from annual to sub-hourly averages. The SWAT model usually operates continuously at a daily time step [29], which ensures that the long-term impacts of BMPs can be quantified. Sub-daily calculations of runoff, erosion, and sediment transport are also available in newer versions of SWAT through sub-daily rainfall input and the Green & Ampt method [55], though few attempts have obtained a higher temporal resolution. In a recent study by Maharjan et al. [56], hourly runoff prediction in a small watershed was quite acceptable, with both the coefficient of determination and the Nash-Sutcliffe efficiency greater than 0.8 during calibration and validation. A sub-daily erosion and sediment transport algorithm was also incorporated into the SWAT model, which was found adequate for simulating detention-based BMPs (e.g., sediment basins and ponds) [10]. Further research should extend NPS pollution modelling with SWAT to the sub-daily pattern.
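The goodness-of-fit statistics quoted above are standard. As a reference point, the Nash-Sutcliffe efficiency (NSE) compares the residual variance against the variance of the observations; the short sketch below computes it for a hypothetical hourly runoff series.

```python
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
# NSE = 1 means a perfect fit; values above ~0.8, as reported by
# Maharjan et al., indicate a close match.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [1.2, 3.4, 8.9, 4.1, 2.0]   # hypothetical hourly runoff (m^3/s)
sim = [1.0, 3.6, 8.1, 4.5, 2.2]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```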
The HSPF model can simulate watershed processes at event to long-term time steps [57], but the storage-based or nonlinear flow routing equations in the HSPF model are insufficient for representing intense or even extreme storm events [58]. The AGNPS model can simulate the change in water quality after a storm event. The single-event pattern cannot capture the long-term features of several BMPs. The hydrographs during an individual event are also not included. The AnnAGNPS model significantly improved many of the features of its predecessor. The most notable modification is that the AnnAGNPS model can also be operated at daily and sub-daily steps, which facilitates the generation of many non-structural BMP scenarios [59]. Overall, the advantage of most of the reviewed watershed models lies in their capacity to simulate the long-term impacts of proposed BMPs, which is inappropriate if we focus on the design of storm-based agricultural BMPs.

Agricultural BMPs focus on source loading reduction and pollution transport control. Source loading reduction measures may relate to cropland conversion, nutrient (manure and fertilizer) management, integrated pesticide management, poultry management, and grazing management. Croplands are considered the major source of NPS pollution, the reduction of which has been found to occur in response to the shift of croplands to less erosive uses [7,60]. One approach for representing cropland conversion in watershed models is to use GIS techniques to adjust land use maps. Alternatively, the SWAT model introduces a land use change (LUC) module, which allows manual adjustment of the fractional coverage of land use types in each HRU [61]. Pai and Saraswat [62] further developed an automated tool to ingest multiple land use information and activate the LUC module. For other BMPs addressing source loading reduction, the SWAT model allows information about these measures to be modified by scheduling the amount, timing and period of agricultural activities [21]. The management file, HRU file, and database files contain the input information, which can be adjusted [63]. By contrast, the AGNPS and AnnAGNPS models have no specific options for agricultural management practices, but they use BMP-responsive inputs (fertilization level, the availability factor, and rate of fertilizer applied) to represent nutrient management [43]. However, there are no documented studies reporting the application of the AGNPS and AnnAGNPS models in the field for evaluating pesticide control or poultry and grazing management (see Figure 1). The main reason lies in their rough sketch of farming practices.
For transport control BMPs, physically based algorithms in watershed models can be used by altering the values of parameters sensitive to the functioning of the BMPs. The removal mechanism of a typical BMP involves watershed processes, including interception, infiltration, overland flow, interflow, evapotranspiration, sheet and rill erosion, contaminant routing, and within-channel processes [64]. The key input parameters and processes in watershed models (SWAT, AGNPS/AnnAGNPS) used to represent agricultural BMPs for pollution transport control are summarized in Table 3. The SWAT, AGNPS and AnnAGNPS models are agriculture-oriented and employ rather similar equations to quantify the impact of agricultural BMPs. In upland areas, overland flow routings in the SWAT, AGNPS, and AnnAGNPS models are related to the curve number (CN) and Manning's roughness coefficient (n). Adjustments to CN and n values represent BMPs that decrease surface runoff by increasing infiltration (e.g., contour farming, terracing, and strip cropping) and decrease flow rate by intercepting runoff (e.g., residue management and strip cropping), respectively. The simulation of the impact of BMPs on sheet and rill erosion in overland areas is also quite similar in these three models. The Universal Soil Loss Equation (USLE) and its associated forms, the Revised Universal Soil Loss Equation (RUSLE) and the Modified Universal Soil Loss Equation (MUSLE), are respectively incorporated in the AGNPS, AnnAGNPS and SWAT models. The RUSLE and USLE have the same formula, Equation (1), while MUSLE is represented by Equation (2):

A = R × K × LS × C × P    (1)

sed = 11.8 × (Qsurf × qpeak × areahru)^0.56 × K × LS × C × P    (2)

where A is the average annual soil loss, and sed is the sediment yield on a given day. R is the rainfall erosivity factor; K is the soil erodibility factor; Qsurf is the surface runoff volume; qpeak is the peak runoff rate; and areahru is the area of the HRU. The common parameters in the above equations include the cover and management (C) factor, the support practice (P) factor, and the topographic (LS) factor. Each of these factors can be adjusted to represent the adoption of agricultural BMPs (e.g., terracing, contour farming, strip cropping and residue management).
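A worked example of Equations (1) and (2) follows. It assumes the MUSLE coefficient (11.8) and exponent (0.56) with the usual SWAT units (Qsurf in mm, qpeak in m³/s, area in ha, sed in metric tons); SWAT's full MUSLE additionally includes a coarse fragment factor, omitted here, and all factor values below are illustrative. The example shows how lowering the P factor represents a support practice such as contour farming.

```python
def usle(R, K, LS, C, P):
    """Equation (1): average annual soil loss A."""
    return R * K * LS * C * P

def musle(Q_surf, q_peak, area_hru, K, LS, C, P):
    """Equation (2): event sediment yield sed, driven by runoff energy
    instead of rainfall erosivity."""
    return 11.8 * (Q_surf * q_peak * area_hru) ** 0.56 * K * LS * C * P

# Annual loss from Eq. (1) with illustrative factor values:
print(f"A = {usle(R=120.0, K=0.3, LS=1.1, C=0.2, P=1.0):.1f}")

# Contour farming is often represented by lowering the P factor:
base = musle(Q_surf=25.0, q_peak=1.2, area_hru=50.0, K=0.3, LS=1.1, C=0.2, P=1.0)
bmp = musle(Q_surf=25.0, q_peak=1.2, area_hru=50.0, K=0.3, LS=1.1, C=0.2, P=0.5)
print(f"Event sediment reduction: {100 * (1 - bmp / base):.0f}%")  # -> 50%
```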
As for the channel network, the SWAT, AGNPS and AnnAGNPS models use the Bagnold or Einstein equation for the calculation of sediment routing and define Manning's roughness coefficient (CH_n) to calculate channel flow capacity, which influences sediment deposition and the relevant pollution loadings [58]. CH_n can therefore be altered in watershed models to represent the impact of many channel BMPs (e.g., grassed waterways, lined waterways, and stream stabilization). Specifically, SWAT introduces two BMP-responsive parameters: the channel erodibility factor (CH_EROD), which is a function of the properties of the bed or bank material, and the channel cover factor (CH_COV), which is defined by vegetative cover [63]. These two parameters, as well as the channel geometric parameters (CH_width, CH_depth and CH_SLOP), enhance the ability of the SWAT model to assess the impact of BMPs on the channel network.
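The leverage of CH_n on channel processes can be illustrated with Manning's equation, v = (1/n) · R^(2/3) · S^(1/2). The hydraulic radius, slope, and roughness values below are illustrative only.

```python
# Raising Manning's n from a bare channel to a grassed waterway slows
# the flow, promoting sediment deposition in the channel network.

def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity (m/s) from Manning's equation (SI units)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

for ch_n in (0.025, 0.10):  # illustrative bare vs. grassed values
    v = manning_velocity(ch_n, hydraulic_radius_m=0.5, slope=0.002)
    print(f"CH_n = {ch_n:.3f}: velocity = {v:.2f} m/s")
```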
As a structural BMP, vegetative filter strips (VFSs) are widely used to mitigate sediment and nutrient levels in runoff before it reaches water bodies. Their effectiveness has been assessed by the SWAT model in many studies [30,35]. However, in the previous versions of the SWAT model (before SWAT 2009), the same efficiency was assigned to sediments and all nutrient forms, which is problematic according to field investigations. The effects of flow concentration that are apparent at various scales were also neglected [8]. The new routine (developed in SWAT 2009) employs different filtering efficiencies for all forms of sediments and nutrients. A VFS is divided into two sections to account for concentrated flow: section one, where 90% of the VFS receives the least flow, and section two, where the remaining 10% of the area receives the major runoff (25%-75%). The drainage area to VFS section one (DAFSratio1) and the drainage area to VFS section two (DAFSratio2) are calculated using Equations (3) and (4):

DAFSratio1 = DAFSratio × (1 − DFcon) / 0.9    (3)

DAFSratio2 = DAFSratio × DFcon / 0.1    (4)

Three additional parameters can be altered to describe the new structure of the VFSs in the SWAT model: the drainage area to VFS area ratio (DAFSratio), the fraction of the field drained by the most heavily loaded 10% of the VFS (DFcon), and the fraction of the flow through the most heavily loaded 10% of the VFS that is fully channelized (CFfrac) [8]. In contrast, the AGNPS and AnnAGNPS models have no specific routines to assess VFSs. The most common way to represent VFSs is to change the current land use type to grassland or increase the value of n [43]. The SWAT and AnnAGNPS models can also treat BMPs like sediment basins and detention ponds as impoundments, which can be simulated in their specific modules. PND_K (bottom permeability coefficient), PND_FR (fraction of sub-watershed area draining to the pond), PND_PSA (surface area of ponds) and PND_PVOL (volume of the ponds) in the SWAT model can be adjusted for sediment loss calculation [65]. For the AnnAGNPS model, sediment accumulation and resuspension processes were added to generate an accurate representation of sediment basins and detention ponds [46]. Decisive inputs, such as the detention time for a specific storm and a pond's geometric parameters (surface area, depth), can be modified in the context of watershed-scale modelling.
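A small sketch of the two-section VFS bookkeeping in Equations (3) and (4), as reconstructed above; the parameter values are hypothetical and serve only to show how concentrated flow inflates the loading on section two.

```python
def vfs_section_ratios(dafs_ratio, df_con):
    """Return (DAFSratio1, DAFSratio2) following Eqs. (3) and (4):
    section one is 90% of the strip area, section two the remaining 10%."""
    ratio1 = dafs_ratio * (1.0 - df_con) / 0.9
    ratio2 = dafs_ratio * df_con / 0.1
    return ratio1, ratio2

# A field draining 40 units of area per unit of VFS area, with half of
# the field funnelled through the most heavily loaded 10% of the strip:
r1, r2 = vfs_section_ratios(dafs_ratio=40.0, df_con=0.5)
print(f"Section 1 loading ratio: {r1:.1f}, section 2: {r2:.1f}")  # 22.2, 200.0
```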
The HSPF model has a unique module called BMPRAC (Best Management Practice Evaluation) to facilitate the assessment of many structural BMPs [14]. In BMPRAC, modellers can use recommended removal fractions pertaining to an assumed BMP [38]. However, these removal fractions are based on documented studies conducted under diverse conditions. The assignment of user-defined values to the effectiveness of BMPs in a site-specific area may lead to crude or even indeterminate results. Another module of HSPF, called SPEC-ACTIONS, allows detailed inputs related to management activities, including ploughing, planting, harvesting, and pesticide and nutrient application [66]. Thus, this module is useful for representing improved agricultural management practices.
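In essence, this removal-fraction approach reduces each constituent load by a user-supplied fraction. The sketch below uses hypothetical fractions; HSPF's actual module applies such fractions per constituent and per time step.

```python
# Hypothetical removal fractions for an assumed BMP; in practice these
# would be taken from documented studies under comparable conditions.
REMOVAL_FRACTIONS = {"sediment": 0.60, "total_N": 0.30, "total_P": 0.45}

def bmp_outflow(inflow_loads):
    """Apply a fixed removal fraction to each constituent load (kg)."""
    return {name: load * (1.0 - REMOVAL_FRACTIONS.get(name, 0.0))
            for name, load in inflow_loads.items()}

print(bmp_outflow({"sediment": 1000.0, "total_N": 50.0, "total_P": 8.0}))
```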
Specific Models
Watershed models are more often used for assessment at the watershed and sub-watershed scales. However, some BMPs, especially structural BMPs, are implemented at the field level, where the response of water quality to these BMPs deserves more attention [67]. VFSs and riparian buffers are widely used structural BMPs, and specific models have been developed for the assessment of their trapping mechanisms. In this section, two specific models, VFSMOD and REMM, are discussed in detail.
VFSMOD is an event-based model that routes the incoming hydrograph and sediment graph to simulate outflow, infiltration, and sediment trapping under field conditions [68]. It uses many physical parameters to represent site-specific characteristics, including vegetation type, geometric shape (length and width), land slope, and soil properties. It should be noted that nutrient and pesticide processes are excluded from the original version of VFSMOD. However, Kuo and Muñoz-Carpena [69] and Sabbagh et al. [70] coupled VFSMOD to empirical trapping efficiency equations for phosphorus and pesticides, respectively. The enhanced model was also combined with a graphical user interface and other tools to develop a vegetative filter strip modelling system (VFSMOD-W) [71]. This system contains two components: the main program for VFS simulation and a front-end program (UH). When input data are not available, the UH component can generate source area inputs for each storm design, including a rainfall hyetograph, a runoff hydrograph, and sediment loss from the source area. VFSMOD-W also provides three tools for sensitivity analysis, parameter calibration with an automated inverse algorithm, and analysis of uncertainty from inputs and parameters.
The specific model REMM has a simulation structure that considers typical three-zone riparian buffers [72]. Simulations are performed at the field scale and at daily steps, and the interactions between surface and subsurface hydrology, sediment transport, nutrient dynamics, and vegetation growth can also be characterized. Detailed data, such as climate inputs and site-specific conditions with their dimensions, vegetation types per zone, biomass harvesting, and soil characteristics, are required [72]. Site-specific characteristics are accounted for in the two specific models, which gives rise to more accurate assessments than those from the watershed models. However, these models require input data of higher resolution than that of watershed models. Thus, they are more suitable for farmlands or small watersheds with full-featured databases.
Simplified Models
As shown in Figure 1, the SWAT model has many advantages over the other watershed models in terms of assessment ability. However, it is prohibitively complex to operate for users with little knowledge of SWAT. Simplified tools can be devised using the SWAT model as a hidden engine but with easy-to-use interfaces. The Pasture Phosphorus Management (PPM) calculator was first designed to assess edge-of-field phosphorus loss in the Lake Eucha/Spavinaw basin [73]. The effectiveness of various BMPs, including poultry management, grazing management, and nutrient management, can be assessed by the PPM calculator. White et al. [74] developed PPM Plus to specifically assess phosphorus and sediment loss in Oklahoma. The soil phosphorus was redefined using a more explicit representation. In addition, assessment options for several agricultural BMPs (e.g., conservation tillage, crop rotation, filter strips, and ponds) were also added. Recently, PPM Plus has evolved into the Texas BMP Evaluation Tool (TBET), which allows more agricultural BMPs to be assessed and can be adapted for diverse land uses [75]. TBET is a vastly simplified tool for predicting sediment and nutrient loss and BMP scenarios. TBET is currently being validated with over 350 years of data and shows reliable predictive ability.
These simplified tools provide meaningful reference points for future development. The robust modelling ability of watershed models can be used in the background. Simplified tools may then act as input and output interfaces for interpreting results while insulating conservation planners from the complexities of the watershed models. Databases containing multiple input data (e.g., DEM, land use, and monitoring data) for the models should be built in to streamline cumbersome data preparation and entry. However, such databases cannot include input data from all over the country, so simplified tools should be designed for typical watersheds or regions, especially those where agricultural BMPs are being promoted and studied.
Integration of Different Models
As mentioned above, any model has a preferred scale of application, and a model that performs satisfactorily at every scale has not yet been found. Watershed models cannot explicitly describe the site-specific conditions at the field scale where most of the processes of the structural BMPs occur. However, specific models require data on source runoff and associated loadings from hydraulically connected upland areas. Up-scaling the water quality response at the field level to the sub-watershed or watershed level is essential for watershed management. So, researchers have addressed these scale issues through model integration. Cascaded frameworks such as AnnAGNPS/VFSMOD, SWAT/REMM, and AGNPS/VFSMOD have been developed [76,77]. The data gap is a concern because the output from watershed models may not meet the requirements for inputs to specific models. Specific tools or programs for processing data should be developed. Another major challenge is filling the scale gap between the simulation units for BMPs in specific models and watershed discretization by watershed models. In an application of AnnAGNPS/REMM, Yuan et al. [78] considered the drainage area to a riparian buffer as a single cell, so uniformity of soil properties and land use was assumed. However, this assumption is highly questionable because of the heterogeneity within a watershed. BMPs usually cover a small portion of a sub-watershed. It is more reasonable to extract the areas that drain to a field-scale BMP. As an improvement, Liu et al. [76] partitioned a sub-basin into three parts (inland, concentrated, and buffer drainage areas) in SWAT/REMM. The rainfall-runoff process is thoroughly considered within a sub-watershed, so the contributing areas upslope of a riparian buffer can be defined more accurately. The calibrated SWAT/REMM model predicted a 27.9% abatement in sediment and a 37.4% reduction in total phosphorus by the existing riparian buffer. It should be mentioned in this context that high-resolution DEMs and other spatial data should be used in the delineation of drainage areas and flow paths.
Incorporation of Climate Change Consideration
Climate change is increasingly considered a major challenge for water resources management and water quality control worldwide [79]. The hydrologic pattern and watershed processes may be greatly influenced by climate change, which potentially offsets the expected gains achieved by BMP implementation. Assessing proposed or implemented BMPs for their climate vulnerabilities helps decision makers to be more aware of the risks. Then, modified or new strategies may be designed to minimize the potential negative impacts of climate change and to meet TMDL requirements under future conditions.
The SWAT model has recently been used to evaluate the effectiveness of agricultural BMPs under future climate conditions, with four general steps: (1) model parameterization and calibration under current conditions; (2) development of future climate change scenarios; (3) analysis of the influence of climate variability on streamflow, sediment, and nutrients; and (4) comparison of BMP effectiveness under current and climate change conditions. Relevant studies indicate that individual agricultural BMPs and their combinations are likely to be less effective under future climates. Higher BMP implementation rates in the future may relieve the negative impact of climate change on NPS pollution [80][81][82]. Some climatic trends have substantial influence on specific agricultural BMPs. For example, once storm events become more frequent and extreme, the efficiency of VFSs in trapping flow and sediment will deteriorate because of reduced infiltration from rising water tables and saturated soils and reduced interception from the rising velocity of runoff [82]. Increased flooding may overwhelm storage-based BMPs, such as sediment basins, and rising temperatures may harm the vegetation that plays the most critical role in the function of infiltration-based BMPs. Model-based assessment methods can further quantify the influence of climate change on BMP effectiveness by sensitivity analysis. The one-at-a-time perturbation of BMP-responsive parameters can determine the sensitivity of each BMP under different climate change scenarios [83]. The reliability of each BMP's performance can be evaluated by a relative sensitivity index. BMPs with high sensitivity are more susceptible to future climates, indicating the need to maintain those BMPs well or to expand implementation rates. Recommendation of BMPs with low sensitivity can help to build resilience to climate change. It should be noted that climate change may not be an issue for some kinds of BMPs whose service lives are much shorter than the onset of major climate change impacts. Renovation or replacement of these BMPs (e.g., grassed waterways and grade stabilization structures) may be required long before the full impacts of climate change are evident.
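The one-at-a-time relative sensitivity index mentioned above is typically computed as S = (ΔO/O)/(ΔP/P). The sketch below applies it to a toy stand-in model; in practice the model call would be a calibrated SWAT run under a given climate scenario, and the parameter set would contain the BMP-responsive parameters.

```python
def relative_sensitivity(model, params, name, perturbation=0.10):
    """One-at-a-time index S = (dO/O) / (dP/P), using a central
    +/- perturbation of the single parameter `name`."""
    base = model(params)
    up = dict(params, **{name: params[name] * (1 + perturbation)})
    down = dict(params, **{name: params[name] * (1 - perturbation)})
    return ((model(up) - model(down)) / base) / (2 * perturbation)

def toy(p):
    """Hypothetical response: output rising with CN, falling with n."""
    return p["cn"] ** 1.8 / p["n"]

params = {"cn": 75.0, "n": 0.15}
for name in params:  # rank parameters (and the BMPs they represent)
    print(name, round(relative_sensitivity(toy, params, name), 2))
```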
Incorporation of Uncertainty Analysis
The present implementation strategies for BMPs may not be able to achieve their expected goals due to uncertainty in the assessment process. Two sources of uncertainty, inputs to BMP systems (e.g., precipitation, inflow and related pollutants) and BMP-responsive parameters (e.g., CN and n), have been identified [5,28,84]. Precipitation is the driving force of NPS pollution. The inherent randomness of rainfall will result in significant variability of inflow and related pollutants into BMP systems, which can be categorized as an important source of input error [83,85]. Process-based assessment methods are generally treated as conceptualizations of BMP system functions. The BMP-responsive parameters are defined for quantifiable sub-processes based on the watershed characteristics. These parameter values require careful calibration and field experiments. However, intensive data are not always easily accessible, hindering parameter identification. As discussed above, the performance of BMPs may vary over time under different conditions. The commonly used method, which assigns a fixed value to a parameter, cannot match such variability. Researchers have used several analysis methods for BMP parameter uncertainty, such as Monte Carlo simulation and Generalized Likelihood Uncertainty Estimation [83,84]. Cumulative distribution functions and confidence intervals, rather than the point estimates resulting from traditional assessments, should be given to evaluate the acceptable level of risk. Uncertainty analysis of the spatial distribution can determine the risk of BMP placement throughout a watershed. Temporal analysis of uncertainty has also been carried out to give insight into the risk and reliability of each BMP on a monthly or seasonal basis [28]. Future climate change scenarios showing more frequent and extreme storm events also raise concerns about the uncertainty of BMPs during single events, where the risk may be underestimated if we focus only on long-term trends.
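A minimal Monte Carlo sketch of the parameter-uncertainty workflow described above: sample the BMP-responsive parameters from assumed distributions, propagate them through the effectiveness model, and report a confidence interval rather than a point estimate. The trapping-efficiency function and the distributions here are hypothetical placeholders.

```python
import random
random.seed(42)

def trapping_efficiency(cn, n):
    """Toy stand-in for a BMP effectiveness model (fraction trapped)."""
    return max(0.0, min(1.0, 0.9 - 0.004 * (cn - 70) + 2.0 * (n - 0.1)))

samples = []
for _ in range(10_000):
    cn = random.gauss(75.0, 3.0)   # assumed curve number uncertainty
    n = random.gauss(0.15, 0.02)   # assumed Manning's n uncertainty
    samples.append(trapping_efficiency(cn, n))

samples.sort()
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
print(f"95% CI for trapping efficiency: [{lo:.2f}, {hi:.2f}]")
```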
Structural uncertainty in BMP assessment models arises from inaccurate descriptions of the functional mechanisms of BMPs. Details of BMP functionality cannot be captured fully by models. Simplification of some processes (e.g., infiltration, interception, and evapotranspiration) is inevitable. Development or modification of currently used methods is a common approach to addressing structural uncertainty [86]. For the assessment of VFSs in the SWAT model, the empirical calculation of trapping efficiency was improved from equations that only considered filter width to equations that combine modelling results and field experiments [8]. If more physical processes are incorporated, mechanistic models (e.g., the VFSMOD model) can further reduce structural uncertainty to some extent, though more inputs and parameters are required for model setup. A trade-off between the ease of model use and the uncertainty level remains a subjective but critical issue.
Improvements to Assessment Methods
Despite the considerable achievements discussed above, there is still room for further study. First, the range of contaminants that can be evaluated is limited to sediments and nutrients (see Table 1). Very few studies have attempted to explore the impact of agricultural BMPs on pesticides and pathogens. The SWAT, HSPF, AnnAGNPS, and VFSMOD-W models have incorporated modules to simulate pesticide cycling, but only the SWAT and HSPF models can evaluate the sources and transport of pathogens. More studies should focus on the impact of BMPs on these pollutants because their contribution to water quality degradation is receiving more and more attention.
There is a trend towards introducing more types of BMPs for assessment by models, especially local BMPs. For example, multi-pond systems are visible all over the farmlands of southern China. These systems were initially built to improve irrigation efficiency, but the ponds and small river courses in multi-pond systems can also reduce sediments and other agrichemicals. The current method for the assessment of multi-ponds is over-simplified: current land use types are simply changed to impoundments [41]. Such agricultural BMPs should receive more attention and be evaluated by reliable assessment methods. In other words, the representation of agricultural BMPs is likely to be refined as the understanding of the processes increases.
How much confidence do we have in the reliability of the predicted outcomes? Though modelling methods have been widely used, very few of them have been verified [87]. Part of the reason is the lack of detailed monitoring data. Monitoring approaches, ranging from field measurements covering pre- and post-BMP conditions on individual farmland to the monitoring of paired watersheds, are strongly recommended. Meanwhile, the stochastic behavior of NPS pollution also prevents land managers from verifying model reliability, yet this can be addressed in an uncertainty analysis as mentioned above. The question raised by the above discussion compounds the puzzle of whether sophisticated, over-parameterized, deterministic models would "deterministically" interpret the watershed and BMP-related processes [88]. The high data requirements of those complex models may provide little, if any, improvement, especially in large areas where sufficient data are not always available. Consequently, a meta-model, which is not only based on process-oriented simulations but also extends BMP effectiveness to a large scale using approximation methods, seems to be an alternative to better face the complex behavior of NPS pollution in practical applications [89][90][91][92]. Land managers should keep these intrinsic characteristics in mind when adopting modelling methods for the assessment of BMPs.
Agricultural decision support systems (DSSs) for BMP assessment are also needed. The selection of a proper combination of BMPs is the major challenge faced by watershed managers who seek to achieve their desired water quality targets. A DSS may comprise not only the models that evaluate BMPs at different scales but also tools for BMP siting and for optimizing BMPs in terms of environmental benefits and cost. These models and tools may be seamlessly integrated, which will bridge spatial gaps by precisely sketching the paths through drainage areas and BMPs and will bridge data gaps by placing a powerful data editor in charge of processing inputs/outputs. It is worth mentioning that a similar DSS for urban stormwater management has been developed and sets a good example to follow. This DSS is named the System for Urban Stormwater Treatment and Analysis Integration model (SUSTAIN) [93]. It has inherent limitations for the assessment of agricultural BMPs for two main reasons: (1) algorithms for simulating non-sediment pollutants in pervious areas are mainly based on a build-up and wash-off concept that may not be able to explicitly represent their cycles in agricultural systems; and (2) there is no specific routine for crop cultivation in SUSTAIN, so many non-structural BMPs (e.g., crop rotation and residue management) are not included in the BMP module. The development of DSSs specific to the assessment of agricultural BMPs still requires a joint effort.
Conclusions
There is increasing interest among practitioners in documenting the environmental effects of agricultural BMP adoption. Modelling methods are well developed and improving, and have been widely used to assess the impact of agricultural BMPs on water quality. Given the various models available, practitioners are often unaware of the appropriateness of models for certain conditions. Given this, we reviewed typical watershed models, specific models, and associated approaches in this article to generalize several considerations for model selection, including spatial and temporal scale, watershed discretization, BMP representation, data requirements, scale gaps, and uncertainty issues. Several findings should be highlighted to improve our choice of certain models and help researchers establish priorities for model improvements: (i) neither watershed models nor specific models can simultaneously operate well at multiple scales; the predominant processes at the scale at which a model is applied should first be explicitly determined; (ii) watershed models show acceptable performance for watershed-scale assessment because of their methods for discretizing a watershed, and specific models account for field-level characteristics that are beyond the capacity of watershed models; (iii) the daily-time-step and event-based equations for rainfall-runoff and water quality simulations that are integrated in most of the reviewed models are not robust enough to represent fast- and short-responding processes in storm and flood events; (iv) simplified tools using models as hidden engines but acting as user-friendly interfaces can be developed for watershed managers with little knowledge of model operation; (v) model integration is encouraged to achieve BMP effectiveness assessment at multiple spatial scales; (vi) incorporation of climate change considerations is a necessary step to build more resilience for future conditions; and (vii) incorporation of uncertainty analysis into the assessment process can determine an acceptable level of risk to increase the credibility of decision making.
These conclusions were based on a state-of-the-art understanding of modelling strategies for model selection and improvement. However, the main determinant lies in the questions that decision makers are attempting to address. A tradeoff between the advantages and limitations of each method is inevitable but essential. What the future holds for agricultural BMP assessment was also explored. There are still many areas for future research, including broadening the range of pollutant types, introducing more local BMPs, improving the representation of the function of BMPs, gathering monitoring data for validation, and developing agricultural DSSs. These issues will be rich areas for researchers to explore concerning NPS pollution and watershed management.
2.1.3. Representation of BMPs
The common principle of BMP representation is to depict the change in watershed processes and the response of water quality with or without BMPs by changing model inputs or parameter values. In this sense, watershed models are generally conceptualizations of the way in which the BMPs function at the watershed scale. The types of agricultural BMPs that can be assessed by different watershed models are shown in Figure 1. The discrepancies arise mostly from model structures and algorithms.
Figure 1. Typical agricultural BMPs which can be assessed by each watershed model. (Black squares indicate that the model can address those BMPs and white squares indicate the opposite. GSS: grade stabilization structure. SCS: stream channel stabilization.)
Table 1. Summary of the studies on agricultural BMP assessment using watershed models. Notes: a EFFMIX: the mixing efficiency of a tillage operation; b EFFTIL: the depth of mixing caused by a tillage operation.
"year": 2015,
"sha1": "f19a7b7ec820be8b4d6703860a3185a270144300",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/7/3/1088/pdf?version=1433837818",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "f19a7b7ec820be8b4d6703860a3185a270144300",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Cryptococcus gattii molecular type VGII as agent of meningitis in a healthy child in Rio de Janeiro, Brazil: report of an autochthonous case
Cryptococcus gattii causes meningoencephalitis in immunocompetent hosts, occurring endemically in some tropical and subtropical regions. Recently, this fungus was involved in an outbreak on Vancouver Island and in British Columbia (Canada). In this temperate region, the VGII type is predominant. This paper describes an autochthonous case of meningoencephalitis by C. gattii VGII in a previously healthy child in Rio de Janeiro, considered a nonendemic region of Brazil. The fungus was identified by biochemical tests and the molecular type was determined by URA5-RFLP. The present report highlights the need for clinical vigilance for primary cryptococcal meningitis in nonendemic areas. Key-words: Meningoencephalitis. Immunocompetent. Cryptococcus gattii.
Cryptococcus gattii is an agent of life-threatening disseminated infections in healthy, immunocompetent hosts. The most common clinical manifestations are meningoencephalitis and pulmonary disease, occurring mainly as an endemic mycosis in tropical and subtropical regions. This primary emerging pathogen has attracted special attention during an outbreak of pulmonary and disseminated infection on Vancouver Island since 1999 1.
Using specific primers for the minisatellite-specific core sequence of the wild-type phage M13 and/or URA5-RFLP analysis 2, four molecular types (VGI-VGIV) are identified and used for epidemiological studies of this species. The VGII type was the principal agent of the human cases in Vancouver 1. Infantile cryptococcosis is a rare event, but it has been frequently diagnosed in healthy children in the north and northeast regions of Brazil 3,4.
Considering the unexpected occurrence of meningitis caused by C. gattii type VGII in an immunocompetent child born and resident in the State of Rio de Janeiro, the clinical-epidemiological features of this case are discussed.
Clinical and epidemiological data were obtained from analysis of the patient's medical records, outpatient follow-up, domiciliary visits and interviews with family members. The family of the patient signed a consent form authorizing this report.
The primary isolate obtained from cerebrospinal fluid (CSF), seeded on 2% Sabouraud dextrose agar medium, was identified by morphological and physiological tests, including phenol oxidase production on niger seed agar medium (NSA), cycloheximide sensitivity, assimilation of C and N sources (Vitek ICB, bioMerieux, Durham, USA), and the canavanine-glycine-bromothymol blue medium (CGB test) to identify the species.
High-molecular-weight DNA was extracted according to Ferrer et al 5, and the molecular type was identified by URA5-RFLP according to Meyer et al 2. The RFLP patterns were assigned visually by comparison with the patterns obtained from the reference strains.
A five-year-old boy, born and resident in the metropolitan area of Rio de Janeiro, was admitted to a medical facility near his residence on January 3rd, 2005, complaining of an abrupt onset of fever, malaise, frontal headache, abdominal pain and postprandial vomiting for two days. These unspecific manifestations were treated with oral amoxicillin and symptomatic medication. After four days, with improvement of his symptoms, he was discharged to continue treatment at home.
During the following two weeks, the patient remained well, but in the fourth week of January, recurrence of fever, malaise and headache occurred. He was readmitted to a municipal hospital presenting hepatomegaly and a right convergent strabismus, without consciousness disturbance or meningeal signs on clinical examination. Computerized tomography (CT) scans of the brain revealed hypodense areas in the right basal ganglia and subcortical region, with ring enhancement after iodine contrast infusion, strongly suggestive of neurotoxoplasmosis (Figure 1). Sulfadiazine and pyrimethamine administration was initiated. After a short stay, the patient was transferred to the Fernandes Figueira Institute (IFF).
Upon admission to the IFF, laboratory tests were negative for HIV (ELISA) and for IgG and IgM serology for Toxoplasma gondii. A spinal tap was performed and showed 62 leukocytes/mm³, with 92% mononuclear cells, a protein level of 70 mg/dl and a glucose level of 63 mg/dl. All bacteriological tests were negative, including the tuberculosis investigation (direct bacterioscopy and culture). Antimicrobial drugs for neurotoxoplasmosis were discontinued, and vancomycin, ceftriaxone and metronidazole were initiated to treat a possible cerebral abscess. Dexamethasone was also added to ameliorate the cerebral edema. Improvement in the patient's clinical condition was observed. On day 10 of admission to the IFF, the patient presented worsening of intracranial hypertension (ICH) along with meningeal irritation and tonic-clonic seizures. CSF culture was positive for Cryptococcus sp and, following this result, a latex agglutination test also proved positive. As a consequence, all antibiotics were interrupted and 1 mg/kg/d of amphotericin B deoxycholate was initiated. A new CT scan of the brain showed dilatations of the ventricular system (Figure 1), and another spinal tap was performed. The CSF revealed 19 leukocytes, 97% mononuclear cells, protein of 45 mg/dL and glucose of 46 mg/dL. The patient evolved with fluctuation of mental status. After failure of ICH control with serial lumbar punctures, a peritoneal shunt was implanted.
On day 18 of admission the patient presented with bilateral blindness and severe motor deficits, mainly of the limbs. Negative results of CSF cultures for fungi and of direct examination for Cryptococcus sp were obtained on day 22 following the onset of treatment. Other complications observed during the inpatient stay were arterial hypertension, salt-wasting syndrome and low potassium levels as a consequence of amphotericin B administration.
The patient was discharged after 95 days with an accumulated amphotericin B dosage of 400 mg over a six-week period. The remainder of treatment was achieved with fluconazole at a daily dose of 12 mg/kg for four weeks.
Outpatient follow-up continues, showing the persistence of neurological sequelae, mainly hypotonus of the leg muscles, two seizure episodes (controlled with anticonvulsant drugs) and bilateral blindness, during the initial visits. At present, the patient presents expressive improvement of motor disability and attends a school for the visually handicapped.

DISCUSSION

Cryptococcal meningoencephalitis affecting an immunocompetent child represents a diagnostic puzzle, mainly because C. gattii is not a typical etiological agent of meningitis, but also because of its nonspecific clinical manifestations. Consequently, delay in the diagnosis and in the onset of specific treatment is frequently observed 4.
Studies of infantile cryptococcosis in Brazil demonstrated that the most common manifestations are fever, headache, neck stiffness and vomiting 3; in this case, these suggestive manifestations appeared only after the second week from onset.
Cranial CT scans can also cause confusion, since the aspect of the lesion is very similar to that of other diseases, such as neurotoxoplasmosis and cerebral abscess. In a study of tomographic alterations in 11 children with meningitis caused by C. gattii in the State of Pará, Brazil, all cases presented hypodense nodules, most of them located in the basal ganglia 6. The same study also described hydrocephalus and diffuse cortical atrophy.
In the present case, the cryptococcal isolate from CSF was only identified at the species level by a reference laboratory. The fact that commercially available kits for yeast identification do not discriminate C. neoformans from C. gattii must be taken into account. This fact, together with the absence of case surveillance, limits current knowledge regarding C. gattii epidemiology in Brazil.
In the north and northeast regions of Brazil, encompassing the Brazilian Amazon forest and the semiarid savanna areas, the high proportion of HIV-negative children with cryptococcal meningitis configures a unique epidemiological picture worldwide. In the State of Pará, an infantile frequency of 18.6% was verified among 43 cases of cryptococcal meningitis diagnosed from 2003 to 2007 6, and of 24.3% among 78 cases diagnosed from 1992 to 1998 in the same state 3. A frequency of 33% of cases in children was observed in the State of Amazonas 7 and of 21% in the State of Piauí 8. Epidemiological studies in these regions show that the molecular type VGII is the main agent of cryptococcal meningitis in young adults and children 9,10. The few previous VGII cases identified in the State of Rio de Janeiro were from patients who came from the northeast of the country 9. The epidemiological data of the present case were reassessed and a domiciliary visit was conducted. The family reported no travel history, and the child was born and has always lived in a poor community within the metropolitan area of Rio de Janeiro, in an area of ongoing deforestation. Future environmental studies are necessary in this region to identify potential sources of human infection. Besides colonization of hollow trees and wood-decay substrata, deforestation may also be related to C. gattii infections in Brazil. Subtyping and genetic studies comparing C. gattii isolates from Rio de Janeiro to
FIGURE 1 - A: Initial contrasted CT scan of the brain showing hypodense nodular lesions with ring enhancement in the right basal ganglia and subcortical region, with marked perilesional edema affecting the ipsilateral internal capsule. B: CT scan of the brain ten days later, showing cerebral ventricle dilatations. | 2017-06-25T20:28:18.432Z | 2010-11-01T00:00:00.000 | {
"year": 2010,
"sha1": "1e4cbd591219833fcdc220e5ac65828bb3d4a78d",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rsbmt/a/QQxMRwPD9sBLp847ZNWLSyF/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1e4cbd591219833fcdc220e5ac65828bb3d4a78d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Geography",
"Medicine",
"Biology"
]
} |
513141 | pes2o/s2orc | v3-fos-license | FGF-23 in bone biology
Recent studies have demonstrated that levels of fibroblast growth factor 23 (FGF-23), a key regulator of phosphorus and vitamin D metabolism, rise dramatically as renal function declines and may play a key initiating role in disordered mineral and bone metabolism in patients with chronic kidney disease (CKD). The physiologic importance of FGF-23 in mineral metabolism was first identified in human genetic and acquired rachitic diseases and further characterized in animal models. FGF-23 and its regulators, including phosphate-regulating endopeptidase homolog (PHEX), dentin matrix protein 1 (DMP1), and matrix extracellular phosphoglycoprotein, are made primarily in bone, specifically in osteocytes. Dysregulation of these proteins results in osteomalacia, implicating the osteocyte in the regulation of skeletal mineralization. Studies in pediatric patients with CKD, the majority of whom have altered skeletal mineralization in early stages of CKD, have demonstrated that skeletal expression of both FGF-23 and its regulator, DMP1, is increased in early stages of CKD and that expression of these proteins is associated with alterations in skeletal mineralization. Thus, dysregulation of osteocytic proteins occurs very early in the course of CKD and appears to be central to altered bone and mineral metabolism in this patient population.
Introduction
Abnormalities in mineral and bone metabolism occur early in the course of chronic kidney disease (CKD) and progress as renal function declines [1]. Traditionally, these abnormalities have been ascribed to changes in the parathyroid hormone (PTH) and vitamin D axis, which lead to subsequent alterations in calcium and phosphorus metabolism [1][2][3]. However, recent studies have revealed that circulating values of fibroblast growth factor 23 (FGF-23), a key regulator of phosphorus and vitamin D metabolism, rise dramatically as renal function declines and may play a key initiating role in the development of abnormal mineral metabolism in patients with CKD [4].
FGF-23 is made in osteocytes in mineralized bone [5], and studies of FGF-23 in human genetic and acquired diseases and in animal models have demonstrated that both under- and over-expression [6][7][8] of FGF-23 result in impairments in bone biology. Although the defective skeletal mineralization observed in patients with FGF-23 excess is likely a consequence of low phosphorus and vitamin D values, studies of FGF-23 deficiency in animal models and in cell culture suggest that FGF-23, and the proteins that regulate FGF-23, also have a direct effect on bone [9]. In these models, FGF-23 appears to directly regulate osteoblast differentiation [9], while a complete lack of the FGF-23 protein impairs skeletal mineralization, despite adequate (even excessive) circulating levels of phosphorus and vitamin D [6,7]. In addition, recent studies suggest that alterations in skeletal FGF-23 expression also coincide with impairments in skeletal metabolism in the CKD population. Indeed, FGF-23 is up-regulated early in the course of CKD and is associated with skeletal mineralization indices in these individuals [10]. Although the mechanisms by which these effects on bone are mediated are unknown, they may involve a number of proteins that have been shown to regulate both FGF-23 levels and skeletal mineralization [11,12].
FGF-23 expression is regulated by vitamin D, phosphate and, potentially, PTH. In both animals and humans, the administration of 1,25(OH)2 vitamin D increases circulating FGF-23 levels [26], apparently due to a direct action of vitamin D on FGF-23 via a vitamin D response element located upstream of the FGF-23 promoter [27]. Sustained increases in dietary phosphorus are also associated with increasing FGF-23 levels and declining 1,25(OH)2 vitamin D levels [28,29], while dietary phosphorus restriction reverses these trends [28,29]. PTH levels may also stimulate FGF-23 expression [30]; findings in primary hyperparathyroidism [30], McCune-Albright syndrome [31], and Jansen's disease [32] suggest that osteocytic stimulation by PTH directly increases skeletal FGF-23 release. The mechanisms by which phosphate and PTH mediate changes in FGF-23 expression remain unknown; they may involve direct effects on FGF-23 gene expression itself or be mediated through other potential regulators of FGF-23.
Regulation of FGF-23: effects on bone metabolism and interaction with other skeletal proteins
Although the effects of FGF-23 on mineral metabolism obscure the potential direct effects of the protein on bone biology, a growing compendium of data from animals, as well as from genetic and acquired human diseases of FGF-23 deficiency and excess, has yielded many insights into the role that both FGF-23 and the factors that regulate it play in bone biology. While FGF-23 is expressed in a variety of tissues, the majority of circulating FGF-23 is derived from osteocytes (at high levels) and osteoblasts (at lower levels) [33]. Although Klotho, the obligate coreceptor for the actions of FGF-23 on mineral metabolism, has not been described in skeletal tissue, a number of studies suggest that FGF-23 has a direct effect on bone.
FGF-23 appears to directly inhibit osteoblast maturation and matrix mineralization, particularly during embryonic skeletal development [9]. Consistent with an effect of FGF-23 on osteoblast proliferation, FGF-23 expression is much lower in the embryonic skeleton than it is in adult animals [5] and, indeed, disruption of the Wnt signaling pathway (a pathway responsible for osteoblast proliferation and bone matrix mineralization) has been noted in mice with excess skeletal FGF-23 expression [34]. In mature animals, a complete lack of FGF-23 also results in focal alterations in skeletal mineralization, despite adequate (even excessive) serum phosphate, calcium, and vitamin D levels [6,7], suggesting a direct role of the protein in maintaining skeletal mineralization at later stages of development.
Several factors have been described which are produced in bone, regulate skeletal FGF-23 expression, and may themselves contribute to the skeletal mineralization process. The genetic condition X-linked hypophosphatemia (XLH), whose phenotype is very similar to that of autosomal dominant hypophosphatemic rickets (ADHR), and its mouse homolog, the Hyp mouse, are associated with increased FGF-23 levels as a result of defects in the phosphate-regulating endopeptidase homolog (PHEX). PHEX is a cell-surface endopeptidase predominantly located in osteoblasts and osteocytes. Although the exact actions of PHEX in vivo have not yet been completely defined, inactivation of PHEX leads to increased FGF-23 expression by an indirect mechanism. Potential mediators of these increased FGF-23 levels include increased skeletal expression of fibroblast growth factor 1, which has been shown to directly stimulate the FGF-23 promoter [34], and decreased expression of UDP-N-acetyl-alpha-D-galactosamine polypeptide N-acetylgalactosaminyltransferase 3 (GALNT3), an enzyme essential for the glycosylation (and hence stabilization) of the FGF-23 protein [34].
Whether from a direct effect of increased skeletal FGF-23 expression or due to some other factor modulated by the loss of PHEX activity, bone from Hyp mice displays an intrinsic mineralization defect that is not corrected by normalization of circulating calcium and phosphate concentrations; indeed, the selective ablation of PHEX in osteoblasts and osteocytes is sufficient to generate a phenotype of osteomalacia in mice [35], while the transplantation of Hyp mouse bone into wild-type mice does not reverse the phenotype of the explanted bone [36]. This intrinsic mineralization defect may be due to excessive proteolytic activity in the absence of PHEX; Rowe et al. [37] have demonstrated that the mineralization defect in the Hyp mouse can be reversed with CA074 and pepstatin (inhibitors of proteolytic activity) without correcting the systemic hypophosphatemia. Factors which regulate local pH, such as carbonic anhydrase 12 (Car12), carbonic anhydrase 3 (Car3), and the sodium-dependent citrate transporter (Slc13a5), are also dysregulated in Hyp osteoblasts [34], suggesting that altered local bicarbonate and/or citrate concentrations may also impair mineralization by depriving the osteocyte of the citrate necessary for energy metabolism. In addition, intrinsic mineralization inhibitors, including matrix gla protein (MGP) and thrombospondin 4 (Thbs4), are increased in Hyp mouse osteocytes and may also contribute to altered skeletal mineralization [34].
Skeletal mineralization in various forms of hypophosphatemic rickets may also be regulated through interactions with members of the small integrin-binding ligand, N-linked glycoprotein (SIBLING) family. It has been proposed that PHEX binds to members of the SIBLING family, proteins which regulate both FGF-23 [11] and the process of skeletal mineralization [38]. Indeed, PHEX regulates at least two SIBLING proteins, matrix extracellular phosphoglycoprotein (MEPE) [11] and dentin matrix protein 1 (DMP1), thereby preventing their proteolytic cleavage and the release of their active C-terminal peptides [11,12]. Consistent with these findings, MEPE is increased in XLH patients and in Hyp mice [11,39], while PHEX inhibits the cleavage of the acidic, serine- and aspartic acid-rich motif (ASARM) peptide, an active peptide which inhibits mineralization, from full-length MEPE [11]. However, as the deletion of MEPE fails to correct the Hyp phenotype [39], other factors are likely to be involved.
The role of DMP1 in the regulation of FGF-23 and skeletal mineralization may be of even greater importance. In contrast to MEPE, DMP1, or rather the two active (N- and C-terminal) fragments of DMP1 generated by its cleavage by proteinases such as bone morphogenetic protein 1 (BMP1) [40], promotes mineral formation [41]. In both humans and animals, DMP1 dysfunction results in increased skeletal and circulating FGF-23 values as well as a diffuse skeletal mineralization defect [33,42] and disrupted osteocyte structure [33]. Furthermore, the DMP1/FGF-23 double knockout is phenotypically similar to the FGF-23 knockout [43], suggesting that DMP1 regulates FGF-23 and is located upstream of the FGF-23 molecule.
The effects of FGF-23 on mineral and bone metabolism in CKD
FGF-23 levels rise progressively as renal function declines [4,44]. Several potential mechanisms for these increasing values have been proposed, including (1) an increased production by bone in response to a decreased capacity for renal phosphate excretion and/or (2) decreased renal clearance of FGF-23. The variety of assays available for detecting FGF-23 complicates this issue; currently, the "intact" molecule may be detected by two assays produced by two different manufacturers: Kainos and Immutopics [45]. A "C-terminal" assay is also available (Immutopics, Los Angeles, CA) that has the potential to measure potentially inactive C-terminal fragments of the molecule [45]. However, in a number of studies, values of FGF-23 by these different assays are well correlated [10,46], and recent data suggest that the vast majority of circulating FGF-23 in dialysis patients is in the full-length, intact, active form of the molecule [47]. Thus, although these three assays are calibrated differently, all three likely measure full-length, active FGF-23 in circulation.
Regardless of the cause, increased FGF-23 values are found in early stages of CKD, before any abnormalities in serum calcium, phosphorus, or PTH are apparent [4,48,49]. Since normal serum phosphate levels are typically maintained until late in the course of CKD [4], increasing concentrations of FGF-23 appear to represent a compensatory response to maintain normal serum phosphate levels in the face of declining nephron mass. As FGF-23 values are independently associated with decreasing kidney function and low 1,25(OH)2 vitamin D levels [4], the decline in calcitriol levels associated with increasing FGF-23 levels is thought to represent the initial event in the development of secondary hyperparathyroidism.
As in patients with primary excesses in FGF-23 [8], defective skeletal mineralization is also common in patients with all stages of chronic kidney disease, in whom increased circulating levels of FGF-23 occur in the presence of normal or elevated serum phosphorus values [6,7]. However, the association between FGF-23 and bone in this population differs greatly from that in the general population. The results of a cross-sectional analysis of 49 pediatric dialysis patients with secondary hyperparathyroidism suggest that high circulating levels of FGF-23 in pediatric dialysis patients are associated with improved indices of skeletal mineralization [10]. Although these results appear to contrast with findings in patients with normal kidney function, they are similar to the mineralization defects found in rodents with a complete lack of FGF-23, despite adequate circulating mineral content [6,7].
Confirming this association, a study of FGF-23, DMP1, and MEPE expression in bone tissue of 32 pediatric and young adult patients with CKD demonstrated that both FGF-23 and DMP1 expression were up-regulated in trabecular bone in early (stage 2) CKD, while MEPE expression remained unchanged from normal controls. In patients with all stages of CKD, the amount of bone FGF-23 correlated directly with bone DMP1 expression and the expression of each was inversely related to osteoid accumulation. In contrast, MEPE expression was not related to skeletal mineralization, but it was inversely related to bone volume. Although the simultaneous increase in both DMP1 and FGF-23 expression appears to be contrary to previous data suggesting that DMP1 acts to suppress FGF-23 expression, other data have suggested that the over-expression of DMP1 does not suppress FGF-23 expression [50]. Moreover, DMP1 promoter activity increases in response to increasing phosphate concentrations [51]. Thus, it is possible that the simultaneous increase in bone DMP1 and FGF-23 expression reflects the increasing phosphate burden associated with progressive renal failure. Alternatively, increased DMP1 expression may reflect an alteration in protein function in the context of CKD. Although the mechanism by which this might occur is unknown, alterations in DMP1 protein phosphorylation or cleavage [41] could play a role. Recent data suggest that DMP1 undergoes post-translational cleavage, leaving less than 1% of the protein in the full-length form [52]. The cleavage products appear to have distinct biological functions; in vitro mineralization studies have demonstrated that while the carboxyl-terminal fragment promotes mineralization [41,53], the full-length DMP1 molecule may inhibit hydroxyapatite formation [41]. Thus, alterations in protein cleavage could have significant ramifications for DMP1 function.
Summary
FGF-23 plays a central role in mineral and bone metabolism. This role was initially delineated by the study of genetic and acquired conditions of hypophosphatemic rickets, but the greatest clinical impact of the discovery of FGF-23 may be in the management of CKD patients. FGF-23 and its regulators are made in osteocytes in bone, and in patients with CKD, FGF-23 levels rise as renal function declines, likely due to the decreasing capacity of the damaged kidney to excrete dietary phosphorus loads. Rising FGF-23 levels contribute to the development of secondary hyperparathyroidism and may also be linked to alterations in skeletal mineralization in the CKD population. Thus, through the expression of various proteins crucial to mineral metabolism, osteocytes appear to be endocrine cells with a key role in the regulation of skeletal mineralization. Alterations in osteocyte metabolism occur in very early stages of CKD and likely mediate altered bone and mineral metabolism in patients with even very mild degrees of renal dysfunction.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. | 2014-10-01T00:00:00.000Z | 2009-12-15T00:00:00.000 | {
"year": 2009,
"sha1": "7408c40243fbf7b6e80d62585b399f751ad993da",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00467-009-1384-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "889089183ee1de7cf69af1ef8a1367142d5785c2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
216054684 | pes2o/s2orc | v3-fos-license | Examination of WHO/INRUD Core Drug Use Indicators at Public Primary Healthcare Centers in Kisii County, Kenya
Background Irrational drug use is a global problem. However, the extent of the problem is higher in low-income countries. This study sets out to assess and characterize drug use at the public primary healthcare centers (PPHCCs) in a rural county in Kenya, using the World Health Organization/International Network for the Rational Use of Drugs (WHO/INRUD) core drug use indicators methodology. Methods Ten PPHCCs were randomly selected. From each PPHCC, ninety prescriptions from October to December 2018 were sampled and data extracted. Three hundred (30 per PPHCC) patients and ten (1 per PPHCC) dispensers were also observed and interviewed. The WHO/INRUD core drug use indicators were used to assess the patterns of drug use. Results The average number of drugs per prescription was 2.9 (SD 0.5) (recommended: 1.6–1.8), and the percentage of drugs prescribed by generic names was 27.7% (recommended: 100%); the percentage of prescriptions with an antibiotic was 84.8% (recommended: 20.0–26.8%), and with an injection prescribed was 24.9% (recommended: 13.4–24.1%). The percentage of prescribed drugs from the Kenya Essential Medicines List was 96.7% (recommended: 100%). The average consultation time was 4.1 min (SD 1.7) (recommended: ≥10 min), the average dispensing time was 131.5 sec (SD 41.5) (recommended: ≥90 sec), the percentage of drugs actually dispensed was 76.3% (recommended: 100%), the percentage of drugs adequately labeled was 22.6% (recommended: 100%), and the percentage of patients with correct knowledge of dispensed drugs was 54.7% (recommended: 100%). Only 20% of the PPHCCs had a copy of KEML available, and 80% of the selected essential drugs assessed were available. Conclusion The survey shows irrational drug use practices, particularly polypharmacy, nongeneric prescribing, overuse of antibiotics, short consultation time, and inadequacy of drug labeling. Effective programs and activities promoting the rational use of drugs are the key interventions suggested at all the health facilities.
Background
Drugs are very significant components of any healthcare system and should be used rationally. Rational drug use means that patients receive medications appropriate to their clinical needs, in the right doses, for an adequate period of time, and at the lowest cost [1]. Inappropriate use of drugs is an issue of concern with many undesirable consequences, such as increased incidence of drug resistance, adverse drug reactions, cost of drug therapy, wastage of resources, and reduced quality of drug therapy [2]. Therefore, irrational use of drugs leads to serious consequences, both in terms of healthcare and economics [3].
Irrational drug use may take many different forms, including polypharmacy, inappropriate use of injections and antibiotics, failure to comply with the standard treatment guidelines (STGs) while prescribing, and inappropriate self-medication [4]. Improvements in the manner in which drugs are used are very crucial in minimizing the morbidity and mortality associated with irrational drug use [5].
Drug use indicators have been established by the World Health Organization and the International Network for Rational Use of Drugs (WHO/INRUD) [6]. They are broadly divided into two groups, namely, core and complementary indicators. The core indicators have been pretested and standardized and are grouped into three major categories, namely, prescribing, patient-care, and facility-specific indicators [7]. These drug-use indicators are usually used to assess drug use in outpatient facilities, where they provide measures of optimal drug use as well as identify areas of deviation from the expected standards; the two most frequently reported prescribing indicators reduce to the simple ratios shown below.
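For reference, here is a minimal restatement of those two standard WHO/INRUD prescribing-indicator definitions; the notation is ours, not quoted from the cited guidelines:

```latex
\text{average drugs per encounter} = \frac{\text{total number of drugs prescribed}}{\text{total number of encounters}}, \qquad
\%\,\text{antibiotic encounters} = 100 \times \frac{\#\{\text{encounters with at least one antibiotic}\}}{\text{total number of encounters}}
```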
Primary healthcare (PHC) is a very crucial part of the healthcare system and is responsible for providing basic healthcare services. There is a six-level hierarchy of health facilities in the Kenyan health system; in ascending order, they include community services, dispensaries and clinics, health centers and nursing and maternity homes, subcounty hospitals, county referral hospitals and national referral hospitals, and large private teaching hospitals. PHC services are mainly provided at the community services, dispensaries, and clinics [8].
Irrational use of drugs can result in wastage of resources and widespread health hazards. A survey carried out at the health facilities of Southern Malawi showed that the country wasted financial resources on the purchase of excessive drugs, which ended up being used irrationally, with quite a number expiring at the health facilities' stores [9]. In a study conducted in Jordan, the average number of drugs prescribed per encounter was higher than the WHO standards, and there was a lower percentage of generic prescribing. The rest of the prescribing indicators, including injection prescribing, antibiotic prescribing, and prescribing from the essential medicines list, were within the optimal range of values recommended by the WHO [10]. Also, in a study carried out in Eritrea's community pharmacies, it was found that the percentage of antibiotics prescribed at the community pharmacies in Asmara was 53%, which deviated significantly from the WHO recommended values. Furthermore, the percentage of encounters with an injection was 7.8%, lower than the WHO value. Patients' age, gender, and number of drugs were significantly associated with antibiotic prescribing [11].
Due to the complexity of drug use, it is important for it to be assessed so that problems may be identified and interventional strategies implemented, so as to keep in check unsafe trends in drug utilization. Studies done in different parts of the world show that there are different drug use patterns, and few such surveys have been carried out in Kenya.
Since no study of this kind had been conducted in Kisii County since the inception of healthcare devolution in 2010, it was considered likely that the county government was wasting resources on irrational drug use.
This study, therefore, sets out to use the WHO/INRUD core drug use indicators methodology to examine the patterns of drug use and the prevalence of irrational drug use at the public primary healthcare centers (level II and III facilities) in Kisii County, Kenya.
Study Area.
The study was conducted at the public primary healthcare centers (PPHCCs) in Kisii County. This county in western Kenya has a total of 104 operational PPHCCs, comprising 81 level II and 23 level III facilities. The clientele of these centers is drawn from a population of about 1.2 million people from the entire county as well as the neighboring counties.
Study Design.
The study was a hospital-based cross-sectional survey. Ten PPHCCs within the county were selected by simple random sampling. A survey was performed on patient prescriptions issued during the last quarter of 2018 (1st October to 31st December 2018). A total of 900 prescriptions (90 per PPHCC) were sampled by systematic random sampling; a sketch of this sampling step is given below. Patient-care and facility-specific surveys were conducted concurrently. For the patient-care survey, a total of 300 patients (30 per PPHCC) who visited the facilities during the survey period were recruited by convenience sampling as they waited to see the prescribing officer. Also, one dispenser from each PPHCC was recruited.
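As an illustration of the sampling step, the following is a minimal sketch of how a systematic random sample of 90 prescriptions per facility could be drawn; the loader call and file name are hypothetical and not part of the published protocol.

```python
import random

def systematic_sample(records, n):
    """Draw a systematic random sample of n items from an ordered list.

    Assumes len(records) >= n. The sampling interval k is the population
    size divided by the sample size; a random start within the first
    interval keeps every record equally likely to be chosen.
    """
    k = len(records) // n                 # sampling interval
    start = random.randrange(k)           # random start in [0, k)
    return [records[start + i * k] for i in range(n)]

# Example (hypothetical loader and file name):
# records = load_prescriptions("facility_01_2018Q4.csv")
# sampled = systematic_sample(records, 90)
```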
Data Collection.
Prescription survey and patient-care survey data were collected by trained research assistants using standardized data collection forms. Data on patient-specific indicators were collected from participating patients by both direct observation and interviews as the patients moved from the prescribing area to the dispensing area. One dispenser from each of the selected PPHCCs was interviewed to collect data on the key aspects of facility-specific indicators, such as the availability of copies of the Kenya Essential Medicines List (KEML) and the availability of key drugs at the facility.
Data Analysis.
Data were entered and cleaned using software from the Centers for Disease Control and Prevention (US), and then exported to STATA version 14.2 (StataCorp, USA) for analysis. The data were summarized using means, standard deviations, frequencies, and percentages. The ANOVA test was also used to test for differences among the PPHCCs. The core drug-use indicators were determined as described in the guidelines for calculating the WHO/INRUD drug-use indicators [13]; an illustrative sketch of these calculations is given below.
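The sketch below shows one way the prescribing indicators and the across-facility ANOVA described above could be computed. It is illustrative only: the record structure and field names (drugs, is_generic, is_antibiotic, is_injection, on_eml) are assumptions, not the study's actual data format.

```python
from scipy.stats import f_oneway  # one-way ANOVA

def prescribing_indicators(encounters):
    """Compute the five WHO/INRUD prescribing indicators.

    `encounters` is a list of dicts, each with a "drugs" list whose
    entries carry boolean flags (field names here are illustrative).
    """
    n = len(encounters)
    drugs = [d for e in encounters for d in e["drugs"]]
    return {
        "avg_drugs_per_encounter": len(drugs) / n,
        "pct_generic": 100 * sum(d["is_generic"] for d in drugs) / len(drugs),
        "pct_antibiotic_encounters":
            100 * sum(any(d["is_antibiotic"] for d in e["drugs"]) for e in encounters) / n,
        "pct_injection_encounters":
            100 * sum(any(d["is_injection"] for d in e["drugs"]) for e in encounters) / n,
        "pct_from_eml": 100 * sum(d["on_eml"] for d in drugs) / len(drugs),
    }

# One-way ANOVA on drugs-per-encounter across the ten facilities:
# counts_by_facility = [[len(e["drugs"]) for e in enc] for enc in facility_encounters]
# f_stat, p_value = f_oneway(*counts_by_facility)
```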
Ethical Approval.
Ethical approval to carry out the study was granted by the Kenyatta National Hospital/University of Nairobi Ethics and Research Committee (KNH/UoN-ERC) (Reference number: KNH-ERC/A/50). Permission to conduct the survey was also granted by the office of the Director for Health, Kisii County. Written informed consent was obtained from the patients, prescribers, and dispensers before conducting the interviews.
Results
The study was carried out at ten randomly sampled public primary healthcare centers (PPHCCs) in Kisii County: 5 level II facilities and 5 level III facilities. The total outpatient attendance at the selected PPHCCs in the last quarter of 2018 was 39,222 patients. Most of the patients presented with respiratory (33.4%), GIT (14.9%), urological (14.7%), and skin (12.6%) complaints. The prescribers were either medical officers (MOs), clinical officers (COs), or nurses, while the dispensers were either pharmacists or pharmaceutical technologists. Three facilities had neither a pharmacist nor a pharmaceutical technologist as qualified dispensers; at these facilities, dispensing was done by nurses. Cumulatively, 2636 drugs were prescribed to the outpatients in the 900 sampled prescription encounters. The majority of the prescribed drugs were analgesics/antipyretics (36.8%) and antibiotics (30.2%). The least prescribed drugs were antivirals (0.2%).
Prescribing Indicators.
The overall average number of drugs prescribed per patient encounter was 2.9 ± 0.5 (prescribing indicator 1), ranging from one to eight drugs per prescription. The average number of drugs prescribed per patient encounter differed significantly among the 10 PPHCCs, p = 0.043. No facility had an average number of drugs prescribed within the WHO/INRUD recommended optimal range of 1.6-1.8.
Out of the 2636 prescribed drugs, 706 (27.7%) were written in their generic names (prescribing indicator 2); 1677 (63.6%) were prescribed by brand names, and the remaining 253 (9.6%) had their generic names abbreviated. The practice of generic prescribing was observed to be significantly different among the PPHCCs, p = 0.005.
Out of the 900 prescription encounters, 795 (84.8%) had antibiotics (prescribing indicator 3). Amoxicillin was the most widely prescribed antibiotic, followed by cotrimoxazole and metronidazole. The differences in antibiotic prescribing among the PPHCCs were statistically significant, p = 0.033.
Out of the 900 encounters, 224 (24.9%) included injections (prescribing indicator 4). The differences in injection prescribing among the PPHCCs were statistically significant, p = 0.002. However, the percentages for 7 of the 10 PPHCCs fell outside the WHO/INRUD optimal range of 13.4% to 24.1% [14]. Antipyretic and antibiotic injections were frequently prescribed; diclofenac and ceftriaxone injections were the most widely prescribed, at 38.2% and 24.4%, respectively.
Out of the 2636 drugs prescribed, 2550 (96.7%) were prescribed from the KEML 2016 (prescribing indicator 5). The prescribing indicators are summarized in Table 1.
Patient-Care Indicators.
The overall average consultation time for the 300 patients observed was 4.1 minutes (range 1-14 minutes) (patient-care indicator 1). The differences in consultation times among the PPHCCs were statistically significant, p = 0.046. The average dispensing time was 131.5 seconds (range 45-360 seconds) (patient-care indicator 2). Again, the differences in dispensing times among the PPHCCs were statistically significant, p = 0.004.
Out of 872 drugs prescribed to the 300 recruited outpatients, 656 (76.3%) were dispensed to the patients (patient-care indicator 3). Of these 656 dispensed drugs, 148 (22.6%) were adequately labeled (patient-care indicator 4). The majority of the dispensers wrote only the frequency of administration on the drug package or envelope/bag. The WHO/INRUD recommends that each drug label contain the patient's name, dose regimen, dose, frequency of administration, and quantity of the drug [14]. The overall score on patients' knowledge of the drugs dispensed to them was 54.7% (patient-care indicator 5). Patients' knowledge of drug indications and dosage was good (77.0% and 75.7% of the patients correctly knew the indications and dosages of their drugs, respectively). However, very few patients (11.3%) were aware of the side effects of the drugs issued to them.
Facility-Specific Indicators.
Out of the 10 PPHCCs, only 2 (20%) reported having hard copies of the KEML 2016 booklet at both the prescribing and dispensing areas (facility-specific indicator 1). There were no drug formularies available at any of the PPHCCs. The availability of 18 drugs selected from the KEML was assessed at the selected PPHCCs. Overall, 80.0% of the selected essential drugs assessed were available at the PPHCCs during the survey visit (facility-specific indicator 2).
Discussion
Cumulatively, 2636 drugs were prescribed to the outpatients in all 900 sampled prescription encounters. The majority of the prescribed drugs were analgesics/antipyretics (970; 36.8%) and antibiotics (795; 30.2%). The least-prescribed drugs were antivirals (0.2%). The commonly prescribed analgesics were paracetamol (43.7%), ibuprofen (19.4%), diclofenac (8.9%), and tramadol (5.2%). The average number of drugs prescribed per prescription was 2.9. This was above the optimal range of 1.6-1.8 recommended by the WHO/INRUD [14], indicating the likely practice of polypharmacy. In studies conducted in other countries, the average number of drugs per prescription was also higher than the recommended optimal range: 21.4 in Sudan [15], 2.5 in Egypt [13], 3.4 in Pakistan [16], 3.0 in Sri Lanka [17], and 4.8 in Ghana [18]. Incompetent prescribers, unavailability of STGs, lack of continuous medical education (CME) programs, and the unavailability of therapeutically potent drugs at the PPHCCs could be some of the reasons for the observed polypharmacy [16]. Polypharmacy adversely influences patient treatment outcomes, since patients are more likely to be noncompliant or to experience adverse drug reactions (ADRs) [13]. Rational prescribing is encouraged by the WHO/INRUD in order to avoid unnecessary excessive use (wastage) of drugs and probable adverse effects on patients [16]. The percentage of drugs prescribed by their generic name was 27.7%, indicating that clinicians attending to patients at the PPHCCs in Kisii County rarely prescribe drugs by their generic names. In studies carried out in other countries, the percentage of drugs prescribed by generic name was found to be exceedingly variable, from as low as 6% in Andorra [19] and 38.3% in Uzbekistan [20] to as high as 71.6% in Nigeria [21], 95.4% in Egypt [13], and 99.4% in Malawi [9]. Previous studies done in Kenya had comparable findings, with 25.6% at Mbagathi District Hospital [12] and 45.5% at Makueni County Referral Hospital [22]. The WHO/INRUD optimal percentage of drugs prescribed by generic name is 100% [14]. The findings of this study were well below the recommended value. This might be attributed to prescribers' belief in branded drugs over generic products, extensive promotional activities by drug companies' medical representatives to the prescribers, or the absence of a national policy of generic prescribing. The WHO/INRUD recommends prescribing drugs by their generic names, as it gives clear identification, allows easy information exchange, and improves communication among health professionals [16]. The percentage of encounters with antibiotics prescribed was 84.8%. This percentage was found to be higher than in other studies. For instance, at Arba Minch and Chencha Hospitals in Ethiopia, the prevalence was 48.7% and 60.2%, respectively [23]. In India's PHCCs, it was 60.9% [2], 35.4% in Tanzania [24], 43.0% in Nepal [25], 33.1% in Burkina Faso [26], 50.0% in Burundi [27], and 28.8% in Brazil [27]. The WHO/INRUD recommended value for the percentage of encounters with an antibiotic prescribed is 20-26.8% [14], suggesting that prescribers at the PPHCCs in Kisii County are overusing and misusing antibiotics. The overuse and misuse of antibiotics lead to increased antibiotic resistance and wastage of scarce resources. The percentage of encounters with an injection prescribed was 24.9%.
The prevalence of injection prescribing in this study (24.9%) was only slightly above the recommended range, which is encouraging. Antipyretic and antibiotic injections were frequently prescribed; diclofenac and ceftriaxone injections were the most widely prescribed, at 38.2% and 24.4%, respectively. The percentage of drugs prescribed from the KEML 2016 was 96.7%. All the PPHCCs had almost all of their drugs prescribed from the KEML. This was higher than that reported in previous studies conducted in Kenya: 72.2% at Mbagathi District Hospital [12] and 89.1% at Makueni County Referral Hospital [22]. Other studies reported comparable findings: 95.4% in Egypt [13], 100.0% in Ethiopia [23], 96.7% in Tanzania [24], and 86.1% in Nepal [25]. It was notable that although many PPHCCs in Kisii County did not have copies of the KEML, they prescribed from the list. Prescribing drugs from the EML is one aspect of rational prescribing; however, prescribers may fail to choose drugs from the EML due to an inadequate supply of EML copies [16]. The time that healthcare providers devote to patients, mainly at the prescribing and dispensing service delivery points, determines the quality of disease diagnosis and management [23]. The average consultation time was 4.1 min. The optimum WHO/INRUD value for average consultation time is ≥10 min [14]. The time taken by the prescribers at the PPHCCs in the current study was shorter than that recommended to conduct a thorough patient assessment and prescribe drugs appropriately.
This was comparable with findings reported in other countries, where average consultation times ranged from 2.0 to 7.5 min [13,16,23,25]. However, a study conducted in Nigeria reported a better consultation time of 11.3 min [29]. Insufficient consultation time can lead to an incomplete examination of patients and subsequently to irrational therapy [30]. Prescribers need to take sufficient time with patients in order to carry out comprehensive history taking and patient examination, provide suitable health education, and ensure good clinician-patient rapport. This is significant as it ensures good patient care. The increased workload of prescribers and religious, ethnic, or socioeconomic barriers between prescribers and patients could be the reasons for the short consultation time [16]. The average dispensing time was 131.5 seconds. The optimum value set by the WHO/INRUD for average dispensing time is ≥90 seconds [14]. Based on the WHO/INRUD minimum time, the dispensers at the PPHCCs took sufficient time in processing the prescriptions and ultimately dispensing the prescribed drugs to the patients. In many studies conducted around the world, the average dispensing time was lower than in the current study, ranging from 38 to 78 seconds [13,16,23,25,29]. A study carried out at public hospitals in Ethiopia found a longer average dispensing time of 219.6 s [31]. An adequately long dispensing time is required to explain key information about the drug(s) (dosage, adverse effects, and precautions) to the patient(s), as well as to label the drug(s) adequately and dispense them. The percentage of drugs actually dispensed was 76.3%. The optimal value of drugs actually dispensed recommended by the WHO/INRUD is 100% [14]. The finding of this study was lower than those of previous studies [13,23,25,29]. However, the percentage was higher than that reported at the public health facilities of Tanzania (56.2%) [24]. The findings of this study could be an indication that some drugs may have been out of stock.
Drug labeling practice was very poor at the selected PPHCCs.
The percentage of dispensed drugs that were adequately labeled was 22.6%. The poor labeling practices noted in this survey were similar to the findings of surveys performed at PHCCs in the Eastern Province of Saudi Arabia (10.4%) [4] and in Tanzania (20.1%) [24], where patient names and other vital details about the drug dosage regimen were not written on the labels [32]. However, all dispensed drugs were adequately labeled (100.0%) in a tertiary care hospital in India [33]. The findings in Cambodia were worse (0.0%) compared to the current study [34]. The omission of the patient's name, storage conditions, and other special precautions from the drug label can lead to serious consequences, such as drug misuse by patients [16].
Patients' knowledge of dispensed drugs was average, at 54.7%. The optimal WHO/INRUD value for patients' knowledge of correct drug dosage is 100% [14]. The finding of this study (54.7%) was slightly higher than those from studies in India (46%) [33], Tanzania (37.9%) [24], and Malawi (27.1%) [9], but much lower than those reported in Egypt (94.1%) [13] and Nigeria (93.2%) [29]. Patients' knowledge of drug dosage is important: it helps improve patient care by avoiding the overuse of drugs and preventing ADRs/adverse effects that can harm patients' health.
In any healthcare center, the availability of qualified prescribers and dispensers, an adequate supply of key drugs, and access to drug information, such as EMLs/formularies, influence the ability to prescribe and dispense drugs rationally. Without these factors, it is difficult for healthcare workers to provide health services efficiently [14]. Out of the 10 PPHCCs, only 2 (20.0%) had copies of the KEML 2016 booklet available at both the prescribing and dispensing areas. These findings were not consistent with the study carried out in Egypt, where 8 (80.0%) out of 10 PHCCs had copies of the EML [13], or with the 62.3% reported in Nigeria [29] and 67.4% in Malawi [9]. The surveys in Nepal [25] and Pakistan [16] found that all facilities (100.0%) had copies of the EML. The WHO/INRUD requires that all health facilities have copies of the EML [14]. This is aimed at ensuring that prescribers adhere to the medicines listed in the EML when prescribing, in order to promote the efficient provision of healthcare to patients [16].
Eighty percent (80.0%) of the selected essential drugs assessed were available at the PPHCCs at the time of the survey visit. The WHO/INRUD recommends 100% availability of essential drugs at health facilities [14]. A shortage of key drugs is detrimental to patients with regard to both their health status and out-of-pocket expenses [14]. The use of the WHO/INRUD guidelines on the three core drug-use indicators and adherence to the WHO methodology add strength to the study, as does the use of a large sample size of 900 prescriptions and 300 outpatients. The reasons for the irrational use of drugs could not be revealed in this study because of its limited scope; further studies are necessary to disclose these reasons. Also, being a cross-sectional and retrospective study, there could have been information bias and social desirability bias.
Conclusion
Most of the prescribing indicators deviated greatly from the WHO/INRUD recommended optimal values, indicating irrational drug use practices such as polypharmacy and the misuse of antibiotics. Patient-care and facility-specific indicators were also far from the optimal values, except for the average dispensing time. The findings of inadequately labeled drugs and of poor patient knowledge of the drugs dispensed to them were rather concerning.
The County Health Management Team (CHMT), together with other stakeholders, should implement interventions aimed at strengthening good prescribing and patient-care practices [35].
Data Availability
The primary data gathered by the authors and which support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest. | 2020-03-19T10:19:40.130Z | 2020-03-18T00:00:00.000 | {
"year": 2020,
"sha1": "37a2628be29d66dc89687980ce338f4daca6576c",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/aps/2020/3173847.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b65fba9781e4ab4b56b2a07f4b9170d3a9ffbb2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
38842620 | pes2o/s2orc | v3-fos-license | The Main Anticancer Bullets of the Chinese Medicinal Herb, Thunder God Vine
The thunder god vine or Tripterygium wilfordii Hook. F. is a representative Chinese medicinal herb which has been used widely and successfully for centuries in treating inflammatory diseases. More than 100 components have been isolated from this plant, and most of them have potent therapeutic efficacy for a variety of autoimmune and inflammatory diseases. In the past four decades, the anticancer activities of the extracts from this medicinal herb have attracted intensive attention by researchers worldwide. The diterpenoid epoxide triptolide and the quinone triterpene celastrol are two important bioactive ingredients that show a divergent therapeutic profile and can perturb multiple signal pathways. Both compounds promise to turn traditional medicines into modern drugs. In this review, we will mainly address the anticancer activities and mechanisms of action of these two agents and briefly describe some other antitumor components of the thunder god vine.
Introduction
Traditional Chinese Medicine (TCM), which has been used for centuries in treating illnesses ranging from inflammation to cancer, continues to provide front-line pharmacotherapy for many millions of people worldwide. The record shows that compounds from medicinal herbs and minerals are the source of, or inspiration for, the majority of FDA-approved agents [1,2]. As a representative Chinese medicinal herb, thunder god vine (Tripterygium wilfordii Hook. f., TwHf; also known as Lei Gong Teng, seven-step vine, Figure 1A), which belongs to the genus Tripterygium, family Celastraceae, grows widely in the mountainous regions of southeast and southern China. A large body of knowledge demonstrates the promising therapeutic potential of TwHf in a number of autoimmune and inflammatory conditions, and phase 2b clinical trials have been conducted to test the efficacy of TwHf extracts in rheumatoid arthritis [3,4], Crohn's disease [5] and kidney transplantation [6]. Adverse effects such as diarrhea, headache, nausea and infertility have also been recorded, and numerous attempts have been made to improve its efficacy and safety [7].
The broad therapeutic profile of TwHf may be attributed to its complex mixture of ingredients. TwHf contains more than 100 small compounds, such as diterpenes, triterpenes, sesquiterpenoids, and alkaloids, which are used for the treatment of a variety of autoimmune and inflammatory diseases, including rheumatoid arthritis, nephritis, and systemic lupus erythematosus [8,9]. Some components of TwHf exhibit powerful anti-fertility effects in male animal models [10,11]. Recently, researchers worldwide have paid more attention to the anticancer activities of TwHf extracts, with triptolide and celastrol as the two most promising and potent bioactive ingredients [12,13]. Though advances have been made in understanding the molecular mechanisms of action of these two compounds, their exact targets remain elusive, and they must still pass along a pathway of chemical synthesis, mechanistic studies and clinical testing before their eventual deployment in the clinic.
Triptolide
Triptolide (Figure 1B) is a diterpenoid triepoxide first purified from the roots of TwHf in 1972 [14], and several synthetic routes have been described [15,16]. Reports document that triptolide has anti-inflammatory, immunosuppressive, anti-fertility and anticancer activities [14]. Triptolide has been tested in clinical trials for the treatment of psoriasis vulgaris [17], diabetic nephropathy [18] and nephritic syndrome [19]. On the other hand, many derivatives of triptolide have been synthesized to improve water solubility and reduce potential side effects. For example, PG490-88 [20] is a water-soluble succinate salt derivative which has entered Phase I clinical trials as an immunosuppressant.
Antitumor Activity of Triptolide
It is well documented that triptolide has a broad-spectrum ability to inhibit proliferation and induce apoptosis of various cancer cell lines in vitro and to prevent tumor growth and metastasis in vivo. Triptolide shows anticancer activity in cells derived from both hematological malignancies and solid tumors, such as HL-60 leukemia and T cell lymphoma (Jurkat) cells, and it inhibits the growth of tumor xenografts, including pancreatic cancer [33] and neuroblastoma [34], in nude mice.
Mechanisms of Action
Molecular mechanisms underlying triptolide's anticancer activity have been extensively investigated, and reports show that triptolide is capable of interfering with a variety of signal pathways, many of which are crucial for the survival of cancer cells ( Figure 2).
Targeting transcription factors and epigenetic modifiers
Since triptolide has epoxide moieties, it is conceivable that this compound could bind to certain cellular proteins via formation of a covalent bond. In 1974, Kupchan et al. [35] suggested that the 14β-hydroxyl along with the 9,11-epoxide might be responsible for the observed antitumor activity. In 2007, McCallum et al. [36] discovered that triptolide could bind specifically and irreversibly, through the epoxide moieties, to a 90 kDa nuclear protein, which may be a transcriptional regulator or somehow involved in the turnover of a critical transcriptional regulator, such that its covalent modification prevents a key step in transcription. Recently, Titov et al. [37] reported that triptolide covalently binds to a human 90 kDa protein, XPB (also known as ERCC3), a subunit of the transcription factor TFIIH, and inhibits its DNA-dependent ATPase activity, leading to the inhibition of RNA polymerase II-mediated transcription and likely nucleotide excision repair. The identification of XPB as the target of triptolide accounts for the majority of the known biological activities of triptolide.
In human gastric and prostatic epithelial cells [24] and HL-60 leukemia cells [38], triptolide-induced proliferation inhibition and apoptosis may be primarily mediated by its modulation of p53, a nuclear phosphoprotein that acts as a tumor suppressor. Nuclear factor κB (NF-κB) is a transcription factor that can promote cell survival, stimulate growth, and reduce susceptibility to apoptosis via upregulation of various target proteins. Inhibition of NF-κB may lead to cell apoptosis [39]. Lee et al. [40] showed that triptolide blocked TNF-induced NF-κB activation via inhibiting p65 transactivation but not its DNA binding affinity, thus promoting TNF-triggered apoptosis. However, other reports suggest that triptolide inhibits the DNA binding ability of NF-κB or cytokine-stimulated NF-κB activity [40][41][42]. In multiple myeloma cells, triptolide decreases histone H3K9 and H3K27 methylation via downregulation of the histone methyltransferases SUV39H1 and EZH2, respectively, and reduces the expression of HDAC8, leading to increased histone H3 and H4 acetylation [43]. Triptolide also inhibits the activity of RNA polymerase, resulting in general transcription inhibition [44].
Inhibiting molecular chaperone and proteasome
Molecular chaperones of the heat shock protein (HSP) family represent a group of highly conserved proteins that protect cells from adverse environmental, physical and chemical stresses, and they have been reported to act as inhibitors of apoptosis [45]. Through a small-molecule screening assay, Westerheide et al. [46] recently demonstrated that triptolide is an inhibitor of human heat shock gene transcription, leading to the enhancement of stress-induced cell death. As a major stress-inducible HSP, HSP70 renders cells highly resistant to several chemotherapeutic drugs. Interestingly, Philips et al. [33] reported that triptolide could inhibit HSP70 at both the mRNA and protein levels, and induce apoptosis in pancreatic cancer cells with overexpressed HSP70.
The ubiquitin/proteasome system is an important cellular pathway for protein degradation. Given that an aromatic ketone carbon can interact with the hydroxyl group at the N-terminal threonine of the β5 subunit of the proteasome, thus inhibiting the proteasomal chymotrypsin-like activity [13,47], triptolide, which forms ketones under oxidizing conditions, might have proteasome inhibitory activity. This possibility was confirmed by a recent study [48] showing that triptolide could inhibit cellular proteasomal chymotrypsin-like activity, resulting in accumulation of proteasomal substrates including IκB, p27 and Bax, and subsequently in apoptosis of both PC-3 and MDA-MB-231 cancer cells.
Suppressing kinases
Gain-of-function mutation of C-KIT, a member of the type III receptor tyrosine kinase family, activates its downstream pathways (Jak-STAT, MAPK, and PI3K) and confers uncontrolled proliferation and survival advantages on cancer cells [49,50]. C-KIT abnormalities are closely associated with acute myeloid leukemia (AML) with t(8;21) [49], the most common chromosomal translocation seen in AML, which generates the AML1-ETO (RUNX1-RUNX1T1) fusion transcript. AML1-ETO may upregulate C-KIT via inactivation of TGFβ [49]. Recently, Zhou et al. [23] showed that triptolide triggered inactivation of C-KIT and a caspase-3-dependent cleavage of AML1-ETO, forming a positive feedback loop to induce programmed cell death of t(8;21) leukemic cells. Jin et al. [51] reported that triptolide inhibited imatinib-resistant mast cells harboring D816V C-KIT. Suppression of other kinases, such as Bcr-Abl, PDGFRα, and Jak2, by triptolide has also been related to its anticancer activity [52][53][54]. In addition, triptolide was shown to inhibit tumor angiogenesis through regulation of the VEGFR-2 and Tie2 angiogenic pathways [55].
Perturbing other molecules
In a cDNA array analysis, Zhao et al. [56] demonstrated that triptolide inhibited the expression of genes involved in cell cycle progression and cell survival, such as cyclins D1, B1, and A1, Cdc25, Bcl-X, and c-Jun. Triptolide reduced the expression of the apoptosis antagonists XIAP, Bcl-2 and Mcl-1 [20]. Triptolide induced caspase-dependent apoptosis of leukemia and cervical cancer cells [28,57], and triggered caspase-independent autophagic cell death in pancreatic cancer cells [30]. Leuenroth et al. [58] identified the calcium (Ca2+) channel polycystin-2 (PC2) as a putative direct target of triptolide in a mouse model of polycystic kidney disease (PKD). Triptolide may thus perturb multiple targets and interfere with multiple signaling pathways, and it potentiates the activities of other antitumor agents such as Apo2/TRAIL, tumor necrosis factor α, and other chemotherapeutic agents.
Celastrol interferes with multiple signal pathways
Plenty of work has been done on the mechanisms of action of celastrol, and the major pathways affected are shown in Figure 3. Celastrol inhibits NF-κB through targeting IκB kinase and TAK1-induced NF-κB activation [91,94], binds to Cdc37 and disrupts the Cdc37-Hsp90N complex, which is critical for stabilizing oncogenic kinases in various cancers [78,95,96], and inactivates the p23 protein, another co-chaperone of HSP90 [97]. Celastrol also inhibits topoisomerase II [98], potassium channels [99], and the AKT/mammalian target of rapamycin pathway [85]. It suppresses cell-extracellular matrix adhesion via targeting β1 integrin [100], and it down-regulates the expression of the VEGF receptor [82] and of cell survival proteins while up-regulating death receptors via the ROS-mediated increase of the CHOP pathway [92]. Celastrol was shown to inhibit proteasome activity and cause accumulation of ubiquitinated proteins in prostate cancer cell lines [13,101]. However, Chapelsky et al. [102] reported that in RAW264.7 cells, celastrol showed slight inhibitory activity against the chymotryptic activity of the 20S proteasome at a high (10 µM) but not a low (3 µM) concentration, and did not inhibit the chymotrypsin-like activity of the 26S proteasome, which is responsible for the degradation of ubiquitylated proteins in intact cells. Celastrol cannot inhibit the cleavage of substrates by the 26S proteasome, making it very different from other proteasome inhibitors such as MG-132 [102]. Celastrol-induced accumulation of ubiquitinated proteins may instead be a result of HSP90 inhibition and the stress response. In contrast to other proteasome inhibitors, celastrol-induced inhibition of IκB-α degradation is due to its suppression of IκB-α phosphorylation [91].
Direct targets of celastrol
Since celastrol possesses a broad range of biological activities, it is crucial to identify its direct targets. Structure-activity studies indicate that the quinone methide functional group of celastrol may be responsible for its cytotoxic activity [13,103]. Computational electron density analysis demonstrates that C2 on the A-ring and C6 on the B-ring of celastrol have a high susceptibility toward nucleophilic attack, suggesting that one or both of these carbons could interact with its target proteins [78,96,103]. Indeed, studies show that celastrol can interact with the nucleophilic thiol groups of cysteine residues and form covalent Michael adducts. The two primary functions of celastrol, inhibition of HSP90 and suppression of the NF-κB pathway, may be attributed to this ability to interact with the thiol groups of cysteine residues of the proteins. The inhibition of NF-κB activation by celastrol could be abolished by dithiothreitol (DTT) and by reduction of the quinone methide of celastrol with NaBH 4 to dihydrocelastrol [94]. When all nine cysteine residues of full-length Cdc37 are blocked with N-ethylmaleimide (NEM), it no longer reacts with celastrol, indicating that the cysteines indeed undergo chemical reactions with celastrol [96]. Additionally, it was reported that the effects of celastrol could be countered by pre-loading thiol-containing agents, and that celastrol and thiol-containing agents can react with each other to form new compounds [74]. Another study further confirms that the quinone methide moiety is crucial to celastrol's effects on melanoma cells: dihydrocelastrol, which lacks this moiety, fails to inhibit melanoma cell viability, whereas pristimerin, the celastrol methyl ester in which the quinone methide functional group is retained intact, is equipotent or even slightly more potent than celastrol against SW1 cells. These findings strongly suggest that celastrol binds to proteins irreversibly or pseudo-irreversibly in melanoma cells, possibly through interaction with cysteinyl residues as well [104]. Taken together, these results indicate that celastrol may affect the functions of a variety of proteins via formation of Michael adducts, which appears to be the major mechanism contributing to its broad anticancer effects.
Prospects
Despite these advances, some questions still need to be addressed. Firstly, as an HSP90 inhibitor, celastrol decreases many HSP90 client proteins including Akt, Cdk4, FLT3, EGFR, BCR-ABL, and the androgen receptor (AR), but the underlying mechanisms remain largely unknown [78,95]. Secondly, it was shown that celastrol could inhibit p23 function by altering its three-dimensional structure, leading to rapid formation of amyloid-like fibrils. This may be triggered by non-covalent binding of celastrol to p23, rather than by irreversible reaction with cysteine residues in p23, even though a covalent interaction does form between them [97]; further evidence is needed to settle this possibility. Moreover, routes for the chemical synthesis of celastrol are desired to reduce reliance on the natural source, which is critical for drug development [1,103]. Finally, celastrol is one of the main ingredients with reproductive toxicity, which may greatly limit its application. Thus, new derivatives and analogues of celastrol with higher pharmacological activities and lower toxicological effects should be designed and synthesized.
Concluding Remarks
As prospective anti-tumor drug candidates, TwHf and its extracts have been studied widely in the past several decades. However, many challenges remain that warrant careful investigation. It is widely accepted that the main active components of TwHf are triptolide and celastrol; however, given the complexity of the chemical ingredients of TwHf, it will be valuable to determine whether there are other, more effective active components for cancer therapy. In addition, the precise molecular mechanisms of triptolide and celastrol remain obscure; elucidating them will be important for understanding the potential mechanisms of toxicity, for designing and synthesizing compounds or derivatives with more selective activity, and for reducing unexpected therapeutic side effects. Finally, it is vital to explore the absorption, distribution, metabolism, excretion, and toxicity (ADMET) characteristics of these monomer compounds extracted from TwHf, laying a sound foundation for future clinical trials in the diseases of interest.
"year": 2011,
"sha1": "8f86c65683f8b615845530344239c51a807701b4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/16/6/5283/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f86c65683f8b615845530344239c51a807701b4",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
A Brief Description on Impersonal Constructions in Uyghur
Impersonal constructions have been a regular topic of investigation in various languages belonging to different language families across the world. Discussion of impersonal constructions constitutes a major contribution to theoretical study. Although impersonal constructions are a main characteristic of Uyghur, they have hardly been noticed by theoretical linguists. With this research, I would like to put forward the idea that Uyghur also has a wide variety of impersonal constructions, the analysis of which would bring an interesting contribution to the typology of impersonality. Scholars conceive of impersonal constructions in different terms: some apply morphological methods, and others adopt syntactic approaches. Whichever methodology they apply, it is unarguable that impersonal constructions are agentless by nature, in that the sentences may not have an overt subject. There are many varieties of types, such as agentless gerunds, agentless passives, existential sentences, etc. Since Uyghur is a pro-drop language, omitting the pronominal subject also helps to form impersonal constructions. This paper provides a brief description of the types of impersonal constructions, explicates several ways of forming agentless sentences, and introduces the main types of impersonal sentences in Uyghur.
Introduction
Impersonal constructions have been a regular topic of investigation in Indo-European studies, and similar constructions have been described in languages spoken in various areas of the world and belonging to different language families. They have been widely discussed in Anna Siewierska [1,2], Andrej Malchukov and Anna Siewierska [3], Andrej Malchukov and Akio Ogawa [4], etc.
Modern Uyghur, spoken in western China, is a direct descendant of Old Uyghur. Even though impersonal constructions are one of its major sentence types, we rarely see dedicated descriptions of them in Uyghur grammars. Some important works such as Hazirqi Zaman Uyghur Tili 'Modern Uyghur Grammar' [5,6] do not even mention them. Since most Uyghur grammarians have focused mainly on morphology, detailed information on sentence structures is scarce. In spite of that, there are two well-argued papers on this topic in Uyghur with Chinese translations, which explain impersonal constructions from different perspectives. General descriptions of impersonal constructions have been published by several linguists; see Xaliq NIYAZ [7], Zäynäp NIYAZ [8], Niyaz TURDI [9], Cheng et al. [10], Hämit TÖMÜR [11], and Litip TOHTI [12]. Xaliq NIYAZ [7] mainly discussed sentence types in modern Uyghur and classified them according to their properties; he called one of these sentence types Igisiz Jümlä 'subjectless sentence', by which he mainly meant the grammatical subject. Hämit TÖMÜR [11] and Cheng et al. [10] provide general information on Uyghur grammar, and only scattered discussions of impersonal constructions can be found in these works. Litip TOHTI [12] thoroughly discussed the syntactic structure of Altaic languages in the framework of Generative Syntax; pages 139-140 include some discussion of impersonal constructions, which he called 'agentless sentences'. Most of these works offer general discussion without details.
General Description
Impersonal construction mainly refers to agentless construction in Uyghur grammar. The notion 'impersonal' in Uyghur grammar books is disparate, because some scholars conceive of it in morphological terms, while others adopt syntactic approaches. Most Uyghur grammarians adopt the syntactic approach. Nevertheless, they have slightly different understandings of the nature of impersonal constructions.
The syntactic characterization of impersonality involves subjecthood. Impersonal constructions are seen to either lack a grammatical subject altogether or, alternatively, feature only a pleonastic (semantically empty) subject, be it an overt one or a covert one [1]. When the gerund is in the dative case and combined with the construction toγra käl- 'have to', it produces a subjectless sentence which indicates the need for the action to be carried out.
Bu iš-ni bügün tügit-iš-kä toγra kel-idu.
this work-ACC today finish-VN-DAT have to-NPAST
'This work must be completed today.'

When linked with the verb bol- 'to be', it indicates the possibility that the action expressed by the adverbial will be carried out. Such sentences are without a subject.
'It is possible to finish this job within two days.'
Main Types of Impersonal Sentences
There are several types of impersonal sentences: agentless sentences, incomplete sentences, existential/dependent constructions, modal sentences, etc.
1) Agentless sentences
The agents of these sentences are hardly identifiable; such a sentence consists only of a core that is equal to the predicate. E.g.:

'One must not retreat in front of difficulty.'

Ay-ni etäk bilän yep-ip bol-ma-s.
moon-ACC elbow with cover-CONV be-NEG-AOR
'It is impossible to cover the moon with the elbow.' (One cannot hide the crystal-clear truth.)

As Janet R. Aiken [13] pointed out, constructions lacking a subject or a verb or both are of a great variety of types, from imperatives such as COME HERE to omitted first-person types like WENT DOWN TOWN TODAY, where it is difficult or impossible to construct a full sentence convincingly.
Such imperative sentences can also be impersonal in Uyghur. It is difficult to construct the agent of the sentence, since it does not agree with the agent-predicate agreement principle. E.g.:

Silär-niŋ yardim-iŋlar-γa köp rähmät.
You.pl-GEN help-POSS.2pl-DAT many thank
'Many thanks for your help.'

Heyt-iŋiz-γa mubaräk bol-sun.
Eid-POSS.2sg-DAT happy to be-IMP.3sg
'Happy Ramazan festival!'

2) Incomplete sentences
These sentences are incomplete by nature, so they can be called non-sentences [13], since it is completely impracticable to supply the missing elements to make complete sentences.
This type exists in many languages, since short sentences agree with the economy principle.

3) Existential sentences
The main part of these sentences is indicated by a nominative phrase, or it can be a short clause. E.g.:

Bazar-da adäm köp-tur.
street-LOC people many-COP
'There are a lot of people on the street.'

Existence or appearance is the main information in these sentences; therefore the place of existence must be stated first, otherwise the sentence becomes non-existential.

4) Modal sentences
English modal verbs correspond to Uyghur modal adjectives. In this type, the agent part is a gerund and the predicate part is a modal adjective such as mümkin 'possible', keräk~lazim 'must, should, have to', or šärt 'should'. E.g.:

Öy-dä adäm yoq bol-iš-i mümkin.
house-LOC man no be-VN-POSS.3sg possible
'Maybe there is no one in the house.'

In this type of sentence, the modal adjective and the gerund constitute a strict agent-predicate construction, which loosens the relationship between possessor and dependent. As a result, the genitive case drops.
a. Silär-niŋ ätä kel-iš-iŋlar-// keräk
You.pl-GEN tomorrow come-VB-2pl.POSS must
b. Silär-// ätä kelišiŋlar keräk
'You must come tomorrow.'

5) Impersonal passives
Langacker and Munro [14] argue persuasively that passive constructions are basically agentless, and that agentive phrases are derived from external sources. In this view, corresponding passive and active sentences are related semantically, but do not have a common conceptual (i.e., underlying) structure. They give evidence from a number of Uto-Aztecan languages and from Mojave, a Yuman language, to show that passives are basically impersonal constructions, derived 'from structures in which a clause with an unspecified subject is embedded as subject complement to the predicate BE'.
Explicit agents do occur with impersonal passives. Furthermore, when agents are not explicitly expressed, they are predictable from the context in a number of cases. Generally, agentless passives are derived in all cases by a transformation of indefinite agent deletion.
Passive constructions in Uyghur are produced by attaching the suffix -n (-n/-in) or -l (-l/-il/-ul/-ül). The passive voice indicates that the grammatical subject of the sentence is actually the logical object of the original action. iii. It is necessary to especially accentuate the logical object.
Düšmän-lär yoqit-il-di.
enemy-PL exterminate-PASS-PST
'The enemies were exterminated.'

iv. In some situations it is necessary to point out the logical subject at the same time as accentuating the logical object. In such cases, if the logical subject is a person, people in general, or some organization, the noun which indicates the logical subject is combined with the postposition täripidin 'by' to form an adverbial modifier.
Paša išan täripidin orunla-n-γan naxša alqiš-qa eriš-ti.
Pasha Ishan by play-PASS-PARTCPL song applause-DAT obtain-PAST
'The song performed by Pasha Ishan was applauded.'

v. If the logical subject is something else, the role of the logical subject is indicated in different ways.
Some Arguments over the Nature of Impersonal Construction
Most scholars agree on the nature of impersonal constructions. However, two authors put forward slightly different opinions.
Based on whether certain agentless constructions can be converted into agentive ones, Niyaz TURDI [9] asserted that impersonal constructions can be divided into two types: absolute impersonal constructions and relative impersonal constructions.

1) Relative impersonal constructions
a) The predicate part of a relative impersonal construction is the combination of a non-personal gerund and a modal adjective, such as lazim 'should' or keräk 'must'. By adding a person marker to the gerund, the sentence can be converted into a covert-subject sentence. E.g.:

Qiyinčiliq ald-i-da täwrä-n-mäs-lik keräk. (impersonal)
difficulty front-POSS.3sg-LOC shake-PASS-NEG-NOML must
'One must be unshakeable in front of difficulty.'

Qiyinčiliq ald-i-da täwrä-n-mäs-lik-imiz keräk.
(personal)
difficulty front-POSS.3sg-LOC shake-PASS-NEG-NOML-POSS.1pl must
'We must be unshakeable in front of difficulty.'

The person marker -imiz added to the gerund is the key factor that converts the impersonal construction into a personal construction.
b) If the first part of a sentence whose predicate is bol- 'to be' or toγra käl- 'have to' is a gerund, it forms an impersonal construction. If the gerund part has a person marker, it forms a covert personal construction. E.g.:

U-niŋ öz-i bilän kör-üš-üš-kä toγra kel-idu.
'We have to meet with him in person.'

c) When the head of the sentence is a dative or genitive noun phrase followed by the combination of a gerund and toγra käl- 'have to', if the case marker drops off, the gerund also loses its possessive marker. As a result, the sentence becomes an agentive construction. E.g.:
Mämät-kä qayt-ip ket-iš-kä toγra käl-di.
'Mämät had to go back.'

Hämmi-miz-niŋ kel-iš-imiz zörür.
All-POSS.1pl-GEN come-VN-POSS.1pl necessary
'It is necessary for all of us to come.'

She argued that these are all agentive sentences: in such sentences, the -š gerunds, as in mundaq rohiŋizdin öginiš and hämmimizniŋ kelišimiz, play the role of the subject, while keräk, lazim, and zörür play the role of predicates. She also argued that sentences of the type [gerund + mümkin], as below, were mistakenly taken as subjectless:

Bu kitab-din paydilin-iš mümkin.
this book-ABL use-VN possible
'It is possible to make use of this book.'

Biz-niŋ u yär-gä bar-mas-liq-imiz mümkin.
we-GEN that place-DAT go-NEG-NOML-POSS.1pl possible
'It is likely that we will not go there.'

In these sentences, bu kitabdin paydiliniš and bizniŋ u yärgä barmasliqimiz are nominative gerund phrases which play the role of agents, and mümkin is a predicate whose copula -dur is omitted.
Thirdly, further sentences of this kind were also mistakenly analyzed as subjectless.
Conclusion
To sum up, Zäynäp NIYAZ [8] and Niyaz TURDI [9] put forward different opinions on the nature of impersonal constructions and narrowed their scope. They held that sentences such as hämmimizniŋ kelišimiz zörür were in fact subjective sentences, treating hämmimizniŋ kelišimiz as a subject. But from the point of view of the subject-predicate agreement rule in Uyghur, in which the predicate must agree with the subject in person (for instance, in män käldim 'I have come', the person marker -m in the predicate agrees with the subject män), we have to reject their suggestion. In män käldim 'I have come', omitting the subject män yields a syntactic pleonasm. E.g.:

män käldim 'I have come'
käldim 'I have come'

In this case, the pronoun män 'I' is grammatically optional; both sentences mean 'I have come'. But in the analysis of Zäynäp NIYAZ [8] and Niyaz TURDI [9], the covert pronoun biz 'we' does not agree with the predicate zörür in person.
Another sentence type, whose agent is difficult to define, is the pro-drop ambiguity sentence. E.g.:
Tülkä käl-gän täräp-kä qari-di.
fox come-PARTCPL direction-DAT look-PAST
'The fox looked in the direction from which it (the fox itself) came.'
'The fox looked in the direction from which he (a human) came.'
'He (a human) looked in the direction from which the fox came.'

The position of the covert subject is the main reason for the ambiguity of this sentence, which was extensively discussed by Muzappar ABDURUSUL [14]. Here I limit my discussion to conventional impersonal constructions.
To sum up, impersonal constructions are one of the main sentence types of the Uyghur language. They not only display universal features but also show language-specific characteristics. In this preliminary study, the author has discussed the types of impersonal constructions and their formation in Uyghur. This paper thus paves the way for further studies of impersonal constructions and provides important language facts for future cross-linguistic study.
"year": 2019,
"sha1": "dd24a0242607c59d17e03f08452aae669dec7f76",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijll.20190704.13.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "887ddcc1d8e110ee2b25057c32e27626dd306ec0",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Theoretical investigation on the interactions of microplastics with a SARS-CoV-2 RNA fragment and their potential impacts on viral transport and exposure
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) caused the coronavirus disease-19 (COVID-19) pandemic that spread across the world and remains difficult to control. Environmental pollution and habitat conditions facilitate SARS-CoV-2 transmission and increase the risk of exposure to SARS-CoV-2. The coexistence of microplastics (MPs) with SARS-CoV-2 affects viral behavior in indoor and outdoor environments, and it is essential to study the interactions between MPs and SARS-CoV-2 because both are ubiquitously present in our environment. To determine the mechanisms underlying the impact of MPs on SARS-CoV-2, we used molecular dynamics simulations to investigate the molecular interactions between five MPs and a SARS-CoV-2 RNA fragment at temperatures ranging from 223 to 310 K in vacuum and in water. We furthermore compared the interactions of the MPs with the SARS-CoV-2 RNA fragment to those with SARS-CoV-1 and Hepatitis B virus (HBV) RNA fragments. The interaction affinity between the MPs and the SARS-CoV-2 RNA fragment was found to be greater than the affinity between the MPs and the SARS-CoV-1 or HBV RNA fragments, independent of the environmental medium, temperature, and type of MP. The mechanisms of the interaction between the MPs and the SARS-CoV-2 RNA fragment involved electrostatic and hydrophobic processes, and the interaction affinity was associated with the inherent structural parameters (i.e., molecular volume, polar surface area, and molecular topological index) of the MP monomers. Although the infectious potential of SARS-CoV-2 RNA is not fully understood, humans are exposed to MPs via their lungs, and the strong interaction with the genetic material of SARS-CoV-2 likely affects human exposure to SARS-CoV-2.
Introduction
The global pandemic of the coronavirus disease-19 (COVID-19) has suddenly made us realize that viruses have become important biological pollutants. The outbreak of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) not only seriously threatens human health (Topol, 2020;Turner et al., 2021), but also greatly increases environmental stress (Adelodun et al., 2021a(Adelodun et al., , 2021bBedrosian et al., 2021). It is thus essential to understand the environmental fate and the behavioral dynamics of the coronavirus. The SARS-CoV-2 can travel in all environmental compartments like water (Navarro et al., 2021;Sala-Comorera et al., 2021), air (Dubey et al., 2021;Razzini et al., 2020), and soil (Anand et al., 2021;Steffan et al., 2020). A nucleic acid material (DNA or RNA) enclosed in a nucleocapsid protein is referred to as the non-enveloped structure of a virus particle (Müller et al., 2019). This is in contrast to the enveloped structure of a virus particle which contains a biological membrane. An envelope increases viral sensitivity to external physical stressors (pH, heat, dryness, etc.) as biological membranes are relatively fragile structures. Consequently, the SARS-CoV-2 as an enveloped virus is more sensitive to environmental factors than non-enveloped viruses (Achak et al., 2021). Thus, it is reasonable to believe that the non-enveloped structural materials of the SARS-CoV-2 could be more resistant to these inactivation factors and are likely to maintain their stability for a long time. Furthermore, studies on the nucleic acid material of SARS-CoV-2 are used for its detection and control in the environment and even for the implementation of personal health prevention measures.
The plethora of sources that can contribute to the release of MPs into air has been summarized in (Catarino et al., 2018; UNEP, 2016). In addition, the main sources of indoor and outdoor plastic debris released into the air and subject to human inhalation are illustrated by Amato-Lourenço et al. (2020). Indoor concentrations ranged between 1.0 and 60.0 fibers/m 3, whereas outdoor concentrations were significantly lower, ranging between 0.3 and 1.5 fibers/m 3 (Dris et al., 2017). This is important to quantify because MPs have been reported as carriers or vectors for concurrent pollutants, e.g., metals and organic pollutants, and they exhibit diverse interactive effects (Bhagat et al., 2021; Kim et al., 2017; Sun et al., 2021). In addition, MPs are becoming a novel ecological habitat termed the plastisphere (Zettler et al., 2013), and could facilitate the survival and dissemination of bacterial and fungal pathogens (Moresco et al., 2021) and antibiotic resistance genes. Importantly, plastic pollution could be a secondary pathway for the transmission of human pathogenic viruses (Moresco et al., 2021) via the respiratory exposure route. We focus here on the MP-SARS-CoV-2 interactions because both the virus and the sources of MPs (such as fibers from clothes, building materials, household objects, and polymer fragments in urban dust) are closely correlated with the presence of humans. It was also reported that SARS-CoV-2 remains more stable on plastic surfaces than on stainless steel, glass, and ceramics (Gidari et al., 2021), which has consequences for the oral and hand contact exposure routes for humans. Amato-Lourenço et al. (2022) found that SARS-CoV-2 aerosols may bind to total suspended particles, such as MPs, and facilitate virus entry into the human body. Moreover, SARS-CoV-2 virus particles have the ability to sorb to the surface of MPs released during washing processes (Belišová et al., 2022). Hence, there is an urgent need to further explore the interactions and mechanisms of MPs and SARS-CoV-2.
Virus stability in the environment is strongly influenced by the size and structure of the virus particle (including the presence or absence of an envelope), the type of genome (DNA or RNA), a transmission route such as faecal-oral and air droplets, the presence of vectors or carriers like the MPs, and the viral concentration of the contamination source. As known, the intrinsic properties such as polymer type of MPs dictate their interaction affinity with other co-contaminants (Fred-Ahmadu et al., 2020;Menéndez-Pedriza and Jaumot, 2020). Besides, many environmental factors can affect the stability of viruses in the environment (Aboubakr et al., 2021;Achak et al., 2021;Paul et al., 2021), in humans (Matson et al., 2020), and on common touch surfaces (Aboubakr et al., 2021). Notably, temperature (Paul et al., 2021) and relative humidity (Zhao et al., 2020) are the two critical factors that determine the fate and transport of coronaviruses given certain environmental conditions. Therefore, searching for some key characteristics that may affect the interaction of the MPs and SARS-CoV-2 is a noteworthy issue.
In silico methods are a promising approach and play a significant role in elucidating the mechanisms of the interactions of pollutants and biomacromolecules (Ge et al., 2011). In particular, molecular simulation methods such as molecular dynamics (MD) simulation are practical in silico tools in environmental applications (Feng et al., 2022; Sun et al., 2013). In addition, molecular simulation has been shown to be an effective tool for exploring interactions with SARS-CoV-2, offering theoretical insights into the adsorption/separation and inactivation of carbon nanoparticles with a SARS-CoV-2 RNA fragment (Zhang et al., 2021b). This way, in silico methods can not only help to minimize the challenge of time-consuming and labor-intensive virus experiments under high risk of infection, but also meet our precautionary demand for options to handle any new versions of the coronavirus that might emerge in the future.
In light of the need to explore the interactions and mechanisms between MPs and SARS-CoV-2, this knowledge gap needs to be addressed. Hence, in this work, for the first time, MPs were studied theoretically by MD simulation to characterize their interactions with the non-enveloped structural materials of SARS-CoV-2, including a nucleocapsid protein and a SARS-CoV-2 RNA fragment, in the water phase and in the vacuum phase (as a reference for the water phase and as an approximation to the gas phase). Two reference viruses, namely SARS-CoV-1 (a homologous coronavirus similar to SARS-CoV-2) and Hepatitis B virus (HBV, a non-coronavirus dissimilar to SARS-CoV-2), were selected to compare their performance in interacting with the MPs. The influence of five different MP types and of temperature as an environmental factor was considered. The objectives of this study were: 1) comparison of the interactions of the MPs with the nucleocapsid proteins and with the viral RNA fragments; 2) elucidation of the interaction mechanisms between the MPs and the viral RNA fragments; and 3) correlation of the interaction affinity with the molecular parameters of the MP monomers.
MD simulation
The selected three-dimensional structure models of the SARS-CoV-2 RNA fragment determined by Zhang et al. (2021c), the SARS-CoV-1 RNA fragment determined by Robertson et al. (2005), and the HBV RNA fragment determined by LeBlanc et al. (2021) were used as model compounds for the simulation of the interactions between MPs and the viral RNA fragments. It should be noted that the SARS-CoV-2 RNA fragment is a model molecule of a frameshift stimulation element (FSE) from the SARS-CoV-2 RNA genome (Zhang et al., 2021c). The FSE plays an important role in the virus replication cycle and has emerged as a major drug target (Lan et al., 2022). The selected three-dimensional structure models of the SARS-CoV-2 nucleocapsid protein determined by Kang et al. (2020), the SARS-CoV-1 nucleocapsid protein determined by Huang et al. (2004), and the HBV nucleocapsid protein determined by Böttcher and Nassal (2018) were used as model compounds for the simulation of the interactions between MPs and the viral nucleocapsid protein. The structures of the RNA fragments [PDB ID: 6XRZ (SARS-CoV-2), 1XJR (SARS-CoV-1), 6VAR (HBV)] and nucleocapsid proteins [(PDB ID: 6M3M (SARS-CoV-2), 1SSK (SARS-CoV-1), 6HU7 (HBV)] were obtained from the RCSB Protein Data Bank (Burley et al., 2019).
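The structure files referenced above can be retrieved programmatically from the RCSB file service. The short Python sketch below illustrates this retrieval step; the download URL pattern is the standard RCSB one, while the output file names and the loop are our own illustrative choices, not part of the original workflow.

```python
# Hedged sketch: fetch the viral structure models used in this study from the
# RCSB PDB file service (https://files.rcsb.org/download/<ID>.pdb).
import urllib.request

STRUCTURES = {
    "6XRZ": "SARS-CoV-2 RNA frameshift stimulation element",
    "1XJR": "SARS-CoV-1 RNA fragment",
    "6VAR": "HBV RNA fragment",
    "6M3M": "SARS-CoV-2 nucleocapsid protein",
    "1SSK": "SARS-CoV-1 nucleocapsid protein",
    "6HU7": "HBV nucleocapsid protein",
}

for pdb_id, label in STRUCTURES.items():
    url = f"https://files.rcsb.org/download/{pdb_id}.pdb"
    urllib.request.urlretrieve(url, f"{pdb_id}.pdb")  # save locally as <ID>.pdb
    print(f"downloaded {pdb_id}: {label}")
```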
The polymer chains derived from five plastic monomers were built as model compounds for the MPs: polybutene (PB), polyethylene (PE), polypropylene (PP), polystyrene (PS), and polyvinyl chloride (PVC). All simulations were carried out in a box with three-dimensional periodic boundary conditions. The dimensions of the simulation boxes were a = b = c = 85 Å, α = β = γ = 90°. The length of the simulation box in each direction was large enough to enable the interactions between the MP polymer chain and the viral materials. The process of building the MP models followed the simulation methods developed by Guo et al. (2019) with slight modifications. The MP polymer chains were built and energy minimized using the smart geometry optimization algorithm, which combines the steepest descent, conjugate gradient, and quasi-Newton geometry optimization algorithms. The optimized polymer chain was then randomly packed in rectangular boxes with three-dimensional periodic boundary conditions by Amorphous Cell Construction. For each box, only one polymer chain was added. The numbers of PB, PE, PP, PS, and PVC monomer units were 200, 600, 500, 200, and 600, respectively. The MP-virus systems included one polymer chain, one RNA fragment or one nucleocapsid protein, and either a vacuum layer (83 Å) or a water layer (83 Å). For the water systems, 1000 water molecules were incorporated in each unit cell. The smart geometry optimization algorithm was used to minimize the energy of the simulation systems. The MD calculations were then performed in the canonical (NVT) ensemble, in which the number of molecules (N), volume (V), and temperature (T) of the system are kept constant, at 223, 263, 273, 298, and 310 K. These temperatures represent the range from a low-temperature environment to the temperature of the human body. The universal force field was adopted in the simulation framework. The van der Waals interaction cut-off was 12.5 Å, and the Ewald method (accuracy 0.001 kcal/mol) was used. Each simulation was run for 100 ps, which allowed the studied system to reach equilibrium, with a time step of 1.0 fs. A Nosé thermostat was adopted. All simulations were performed with the Materials Studio software package (ver. 8.0).
Interaction energy
For the interaction systems, the magnitude of the interaction energy (E_int) is an indication of the magnitude of the driving force towards complexation; a negative value reflects stable adsorption on the plastisphere. E_int was calculated by

E_int = E_MP-virus − (E_MP + E_virus)

where E_MP-virus, E_MP, and E_virus represent the energies of the complex, the isolated MPs, and the viral RNA fragment or nucleocapsid protein, respectively.
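As a minimal illustration of this definition, the Python sketch below computes E_int from the three component energies; the numerical values are hypothetical placeholders, not outputs of the simulations described here.

```python
# Minimal sketch of the interaction energy definition above: E_int is the
# energy of the MP-virus complex minus the energies of the isolated parts.
def interaction_energy(e_complex: float, e_mp: float, e_virus: float) -> float:
    """Return E_int (e.g., in kcal/mol); negative values indicate stable binding."""
    return e_complex - (e_mp + e_virus)

# Hypothetical example: a complex 55 kcal/mol more stable than its parts.
print(interaction_energy(e_complex=-1200.0, e_mp=-700.0, e_virus=-445.0))  # -55.0
```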
Molecular parameters and linear correlation models
The molecular parameters of the MP monomers (Table S1), namely the molecular volume (V_M), polar surface area (PSA), and molecular topological index (MTI), were selected for correlation with E_int so as to develop a quantitative relationship between the inherent properties of the MPs and E_int. The molecular parameters were calculated using the Multiwfn 3.8 software (Lu and Chen, 2012a, 2012b). The correlation of the interaction affinity with the molecular parameters of the MP monomers was described using a polynomial relationship, fitted with linear regression models in SigmaPlot, ver. 14.0 (Systat Software Inc., San Jose, CA).
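This fitting step can be reproduced with standard numerical tools. The Python sketch below fits a second-order polynomial of E_int against a single monomer descriptor (here the molecular volume) and reports the resulting R^2; the descriptor values and energies are invented for illustration and are not the values in Table S1.

```python
# Hedged sketch of the per-parameter correlation step (cf. Table 1): fit a
# quadratic of E_int versus one monomer descriptor and compute R^2.
import numpy as np

v_m = np.array([65.0, 33.0, 49.0, 98.0, 44.0])         # hypothetical monomer volumes
e_int = np.array([-48.0, -30.0, -39.0, -52.0, -35.0])  # hypothetical E_int values

coeffs = np.polyfit(v_m, e_int, deg=2)   # second-order polynomial fit
pred = np.polyval(coeffs, v_m)           # fitted E_int values

ss_res = np.sum((e_int - pred) ** 2)               # residual sum of squares
ss_tot = np.sum((e_int - e_int.mean()) ** 2)       # total sum of squares
print("R^2 =", 1.0 - ss_res / ss_tot)
```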
Statistical analysis
Statistically significant differences between test groups were determined by an independent t-test and by one-way analysis of variance with the Waller-Duncan post hoc test, at a significance level of p < 0.05 (IBM SPSS Statistics for Windows, ver. 23.0, IBM Corp., Armonk, NY). Linear regression analysis at the significance level of p < 0.05 was also carried out using SPSS.
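An equivalent analysis can be run outside SPSS. The hedged Python sketch below uses SciPy for the independent t-test and the one-way ANOVA at p < 0.05; the replicate values are invented placeholders, and the Waller-Duncan post hoc test is omitted because it is not available in SciPy.

```python
# Hedged sketch of the significance testing described above, using SciPy
# in place of SPSS. All replicate values are illustrative placeholders.
from scipy import stats

# Independent t-test: e.g., E_int replicates in vacuum vs. in water.
vacuum = [-20.1, -18.7, -22.3, -19.5]
water = [-48.2, -51.0, -46.9, -49.4]
print(stats.ttest_ind(vacuum, water))  # reject H0 if p < 0.05

# One-way ANOVA: e.g., E_int replicates across three MP types.
pb, pe, pp = [-48, -50, -47], [-30, -29, -32], [-39, -41, -38]
print(stats.f_oneway(pb, pe, pp))      # reject H0 if p < 0.05
```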
Comparison of interactions of MPs with viral nucleocapsid protein and RNA fragments
To fully understand the interactions between the MPs and the non-enveloped structures of the viruses, the interactions of the MPs with the nucleocapsid proteins and with the viral RNA fragments were compared after geometry optimization (Fig. 1). As shown in Fig. 1A, for SARS-CoV-2, the absolute E_int values between the MPs and the nucleocapsid protein were significantly lower (p < 0.05) than those between the MPs and the RNA fragment. In contrast, for HBV (Fig. 1C), the absolute E_int values between the MPs and the nucleocapsid protein were significantly higher (p < 0.05) than those between the MPs and the RNA fragment. For SARS-CoV-1 (Fig. 1B), the absolute E_int values between the MPs and the nucleocapsid protein were higher than the corresponding values between the MPs and the RNA fragment, but the two groups showed no significant difference (p > 0.05). Moreover, there was no significant difference in the absolute E_int values between the interactions of the MPs with the nucleocapsid proteins of SARS-CoV-2 and SARS-CoV-1. However, the absolute E_int values between the MPs and the nucleocapsid protein of HBV were significantly higher than those between the MPs and the nucleocapsid proteins of SARS-CoV-2 or SARS-CoV-1 (p < 0.05). In addition, the absolute E_int values between the MPs and the RNA fragment of SARS-CoV-2 were significantly higher than those between the MPs and the RNA fragments of SARS-CoV-1 or HBV (p < 0.05). Moreover, no significant difference was found in the absolute E_int values between the interactions of the MPs with the RNA fragments of SARS-CoV-1 and HBV.
Generalizing, when comparing the nucleocapsid protein and the RNA fragment, the MPs exhibited a stronger interaction with the RNA fragment for SARS-CoV-2, while for HBV the MPs exhibited a stronger interaction with the nucleocapsid protein. Furthermore, this difference in the interactions was not affected by the type of MP. The plastic types were a bit more discriminative for SARS-CoV-1 and HBV compared to SARS-CoV-2, for which the interaction energies were similar across all types of MPs.
Interaction mechanisms between MPs and viral RNA fragments
To reveal the mechanisms of the interactions of the MPs with the viral RNA fragments, the values of E_int derived from the total energy (E_t), the potential energy (E_p), the van der Waals energy (E_v), and the electrostatic energy (E_e) are summarized in Figs. 2, S1, and S2. As shown in Fig. 2A and B, the E_int values between the MPs and the SARS-CoV-2 RNA fragment were negative over the studied temperature range in vacuum and the full temperature range in water. This indicates that the MPs can form stable complexes with the SARS-CoV-2 RNA fragment. Furthermore, the E_int values derived from E_e between the MPs and the SARS-CoV-2 RNA fragment were generally closer to the E_int values derived from E_t/E_p than were the E_int values derived from E_v, in both vacuum and water. Moreover, there were no significant differences between the E_int values derived from E_e and E_t/E_p (p > 0.05) in vacuum, but there were significant differences between the E_int values derived from E_v and E_t/E_p (p < 0.05). This implies that the electrostatic interaction contributed mainly to the mechanism of interaction between the MPs and the SARS-CoV-2 RNA fragment. The genetic material of SARS-CoV-2 is positive-sense single-stranded RNA (Zhang et al., 2021c), whereas the studied MPs are neutral, and the electrostatic interactions are therefore mainly ion-induced dipole interactions. Moreover, the absolute E_int values derived from E_t, E_p, or E_e for the interactions between the MPs and the SARS-CoV-2 RNA fragment in water (Fig. 2B) were significantly greater than those in vacuum (Fig. 2A) (p < 0.05), implying that the interaction affinity of the MPs with the SARS-CoV-2 RNA fragment was stronger in water than in vacuum. This may be caused by the hydrophobicity of MPs (Ding et al., 2020; Zhang et al., 2020), which can provide stronger interactions with the viral RNA fragment in water. As depicted in Figs. S1 and S2, the absolute E_int values derived from E_t and E_p for the interaction between the MPs and the SARS-CoV-1 RNA or HBV RNA fragments in the vacuum and water phases were significantly lower than those for the interaction between the MPs and the SARS-CoV-2 RNA fragment (p < 0.05). This means that the MPs exhibited stronger interaction with the SARS-CoV-2 RNA fragment than with the SARS-CoV-1 RNA and HBV RNA fragments. Moreover, most of the E_int values for the interaction between the MPs and the SARS-CoV-1 RNA fragment or the HBV RNA fragment tended to be positive, implying that the complexes of the MPs with the SARS-CoV-1 RNA fragment or the HBV RNA fragment were unstable. As a result, it is difficult to analyze the interaction mechanisms of the MPs with the SARS-CoV-1 RNA fragment or the HBV RNA fragment.
Correlation of interaction affinity and temperatures
To test the impact of temperature on the interactions of the MPs with the viral RNA fragments, the variation of the interaction affinity with temperature was plotted (Figs. 3, S3, and S4). In general, for each of the MPs, the E_int values derived from the total energies fluctuated with temperature. In particular, the E_int values between the MPs and the SARS-CoV-2 RNA fragment tended to reach their highest value at 298 K in vacuum (Fig. 3A), implying that the interaction affinity between the MPs and the SARS-CoV-2 RNA fragment was lowest at 298 K. In water, the E_int values between the PS MPs and the SARS-CoV-2 RNA fragment decreased with increasing temperature (Fig. 3B); a similar phenomenon occurs in the interaction between the PS MPs and the SARS-CoV-1 RNA fragment in water (Fig. S3B). Considering the various types of MPs as a whole, the E_int values were not significantly different between the temperatures (Figs. 3, S3, and S4). This also means that temperature was not a determinative factor affecting the interaction affinity between the MPs and the viral RNA fragments in the present simulation study.
Correlation of interaction affinity and molecular parameters of MP monomers
To explore the impact of the inherent properties of the MPs on their interactions with the viral RNA fragments, a correlation was conducted between the interaction affinity and the molecular parameters of the MP monomers (Tables 1, S2, S3, and Fig. 4). As shown in Table 1, the E_int values derived from the total energies for the interaction of each of the MPs with the SARS-CoV-2 RNA fragment in the vacuum and water phases correlated with the molecular parameters V_M, PSA, and MTI of the MP monomers to varying degrees. The degree of correlation tended to be higher in vacuum and at 310 and 298 K in water, except for the PS MPs, which contain aromatic rings. In particular, the E_int values correlated highly (Fig. 4A-C) and significantly (Fig. 4D-F) with the molecular parameters, except for the PS MPs. On the whole, the greater the V_M, PSA, and MTI values, the stronger the interactions between the MPs and the SARS-CoV-2 RNA fragment (Fig. 4).
Generally, the E_int values derived from the total energies for the interaction of the SARS-CoV-1 (Table S2) or HBV (Table S3) RNA fragment with the MPs in the vacuum and water phases correlated moderately or weakly with the molecular parameters V_M, PSA, and MTI of the MP monomers. It can also be seen that there was a higher correlation between the E_int values for the interaction of the SARS-CoV-1 RNA fragment with the MPs and the molecular parameters of the MP monomers at 310 K in vacuum, except for the PS MPs. In addition, for the interaction of the MPs with the SARS-CoV-1 and HBV RNA fragments, no significant correlation was found between the E_int values and the molecular parameters of the MP monomers.
Discussion
Owing to the high prevalence of both enteric and respiratory viruses in the population and the environment, there is significant potential for human viruses to become associated with the plastisphere (Moresco et al., 2021). There are many sources of MPs in the environment and many potential pathways for the interaction, colonisation, and dissemination of viruses. We have studied the interaction between three different viruses and five different MPs in water and in vacuum (as an approximation for air). For these exposure routes we have considered different conditions, namely different temperatures and different coatings of the virus. These coatings have been modelled theoretically: how the genetic material, such as the RNA of a virus, is released into cells after the virus undergoes fusion, where the RNA segments are covered with the nucleocapsid protein, enabling travel to specific organelles such as the ribosome.

[Fig. 3. Variation of the interaction energies derived from the total energies of the five types of MPs with the SARS-CoV-2 RNA fragment in vacuum (A) and in water (B) with the studied temperatures (223, 263, 273, 298, and 310 K). Different letters represent statistically significant differences between the treatments (p < 0.05).]

[Table 1. Correlation coefficients between the E_int values derived from the total energies between the MPs and the SARS-CoV-2 RNA fragment and the molecular parameters of the MP monomers.]
The first pathway described is the respiratory path: MPs can enter the human body through breathing, mainly due to the presence of MP pollution in the air (Amato-Lourenço et al., 2020); indoor dust as well as urban air were shown to be large contributors. Not only are the virus and MP doses higher indoors, the interaction effectivity is also large, making this a large potential exposure route for humans. It has been proven that face masks can release large numbers of MPs, which were detected in the nasal mucus of mask wearers and can be inhaled by human beings (Ma et al., 2021). In a way, face masks prevent the inhalation of virus by uninfected humans, but those infected may even breathe the virus out. It is speculated that the virus can bind to MPs from the mask and that humans inhale them again as an agglomerate.
Second, SARS-CoV-2 is transmitted primarily through respiratory droplets (Stadnytskyi et al., 2020) and/or aerosols (Liu et al., 2020a). Airborne dust is another transmission route linked to infectious diseases (Maestre et al., 2021; Moreno et al., 2021), and more severe weather phenomena such as sandstorms may exacerbate the migration of the virus (Meo et al., 2021). The adsorption of SARS-CoV-2 on these airborne media can contribute to the long-range transport of the virus. Note that the airborne transmission route refers to the presence of particles with diameter < 5 μm, which can remain in the air for long periods (Morawska and Cao, 2020). The particle sizes of MPs are also in this range; thus, MPs dispersed in air can be inhaled by humans (Amato-Lourenço et al., 2020). MPs can be released into the atmospheric air from several sources, e.g., synthetic textiles, tire wear particles (Lee et al., 2020), domestic laundry dryers (O'Brien et al., 2020), etc. Hence, there is a high probability that the MPs and SARS-CoV-2 will meet in the atmospheric environment. It has been reported that SARS-CoV-2 aerosols may bind to MPs and facilitate virus entry into the human body (Amato-Lourenço et al., 2022). Our results show that the MPs stabilized the SARS-CoV-2 RNA fragment in both vacuum and water. This also means that the MPs could act as a carrier capable of carrying the genetic material of SARS-CoV-2 and become a new airborne medium for the transport of the virus.
Third, non-droplet transmission is also possible, as infectious SARS-CoV-2 particles are also present in human excretions (Wiktorczyk-Kapischke et al., 2021). Fragments of the SARS-CoV-2 RNA have been frequently detected in wastewater in various countries (Kumar et al., 2020; La Rosa et al., 2020; Randazzo et al., 2020), particularly hospital effluent (Gonçalves et al., 2021). The transmission of SARS-CoV-2 via the fecal-oral route highlights the presence and persistence of SARS-CoV-2 in the aquatic environment (Arslan et al., 2020). Moreover, SARS-CoV-2 RNA is relatively stable in sewage and non-chlorinated drinking water (Ahmed et al., 2020). The viral RNA was also found to be relatively stable in contrast to the rapid inactivation of infectious SARS-CoV-2 in river and sea water (Sala-Comorera et al., 2021). The COVID-19 pandemic has had a huge impact on plastic waste management in many countries, in large part due to the sudden surge of medical waste, which has led to a potentially significant release of MPs (Khoo et al., 2021). Recent studies indicated that MPs have a significant abundance in sewage. Therefore, the sewage treatment system may be an important site for the interaction between the MPs and the genetic material of SARS-CoV-2. Belišová et al. (2022) also confirmed the ability of SARS-CoV-2 virus particles to sorb to the surface of MPs, specifically microfibers, in wastewater. The present results implied that the MPs stabilized the SARS-CoV-2 RNA fragment in the water phase, regardless of temperature and MP type. Additionally, the persistence of the SARS-CoV-2 RNA fragment when present on the MPs differed from that of the SARS-CoV-1 and HBV RNA fragments: the SARS-CoV-2 RNA fragment preferred to remain on the MPs, which may cause the genetic material of SARS-CoV-2 to be long-lasting on the MPs.
The fourth path is the oral route, via food and water. The results in Fig. 2 also indicated that the interaction affinity of the MPs with the SARS-CoV-2 RNA fragment in water was stronger than the affinity in vacuum by a factor of at least 10. This means that MPs and viral genetic material may be co-present in dairy products we eat. If MPs enter the body through the food chain (Bouwmeester et al., 2015; Mercogliano et al., 2020), they enter cells via endocytosis and are then released into the cytoplasm. In particular, the intestinal tract is the main place where MPs exist and is the channel into the circulatory system (Fournier et al., 2021; Visalli et al., 2021). Meanwhile, it has been confirmed that SARS-CoV-2 can effectively infect intestinal epithelial cells and their precursors (Lamers et al., 2020), revealing that the intestinal tract is a potential infection site of SARS-CoV-2 in humans. Taken together, the intercellular environment provides an opportunity for interaction between the MPs and the viral RNA segments/nucleocapsid protein. In our study, we revealed that the MPs showed stronger interaction with the SARS-CoV-2 RNA fragment than with its nucleocapsid protein. Comparison and analysis of the E_int values also supported the finding that the MPs interacted with the SARS-CoV-2 RNA fragment more strongly than with the SARS-CoV-1 or HBV RNA fragments. This also means that the MPs are more apt to stabilize the genetic material of SARS-CoV-2 in the intercellular environment, and this interaction may limit the transcription and replication of the viral RNA genomes.
The fifth potential route is via inanimate surfaces such as plastic, stainless steel, and glass, on which the persistence of SARS-CoV-2 has been established (Corpet, 2021; Gidari et al., 2021). For instance, Gidari et al. (2021) showed the ability of SARS-CoV-2 to persist on most common materials such as glass, stainless steel, and plastic, with half-lives of 4.2, 4.4, and 5.3 h, respectively; SARS-CoV-2 is thus more stable on plastics than on steel or glass. With the global outbreak and spread of COVID-19, disposable surgical masks, as effective and cheap protective medical equipment, have been widely used by the public. The random disposal of masks may result in new and greater MP pollution, because masks made of polymer materials release MPs after entering the environment. More importantly, potential co-release of the MPs and SARS-CoV-2 into the environment will be ineluctable, as might be expected from the unreasonable disposal of masks, especially masks contaminated with the virus. MPs have been detected in the air and can thus deposit on the surfaces of various materials, so there may be an opportunity for the interaction of MPs and the viral RNA. There is evidence that the SARS-CoV-2 RNA fragment has been detected on frozen food packaging (Liu et al., 2020b), and aquatic products can be a route of transmission of COVID-19; positive detection of COVID-19 nucleic acid in samples of frozen food packaging is still occurring. Our theoretical investigation also indicated that the MPs stabilized the SARS-CoV-2 RNA fragment at very low temperatures, ranging from 273 down to 223 K. The presence of the genetic material of SARS-CoV-2 on surfaces is not the same as the presence of the infectious virus, but it indicates the transit and contact of infected individuals (Casabianca et al., 2022). Therefore, theoretical evidence of interactions between the MPs and the SARS-CoV-2 RNA fragment could support practices (e.g., strict sanitization of medical equipment, supplies, fabrics, environmental surfaces, and air contaminated with pathogens) that reduce the risk of SARS-CoV-2 infection and cut off its transmission route.
The plastisphere is a diverse microbial community of heterotrophs, autotrophs, predators, and symbionts (Zettler et al., 2013). Several studies demonstrated that the genetic material of microorganisms can be extracted from MPs and subsequently identified (Debeljak et al., 2017; Zettler et al., 2013). Regardless of environmental medium and temperature, a stable binding between the MPs and the SARS-CoV-2 RNA fragment was demonstrated theoretically here. After such binding, the SARS-CoV-2 RNA fragment is more difficult to degrade in the natural environment. This also means that entering the plastisphere appears to be an important process that significantly affects the global environmental fate of SARS-CoV-2.
SARS-CoV-2 belongs to the family of enveloped, single-strand RNA viruses (Mei and Tan, 2021). The viral membrane of SARS-CoV-2 surrounds a helical nucleocapsid in which the viral genome is encapsulated by the nucleocapsid protein (Savastano et al., 2020). The biological membrane, known as an envelope, contains lipids and proteins. An envelope may increase the viral sensitivity to physical influencing factors (pH, heat, dryness, etc.) as biological membranes are relatively fragile structures. The nucleocapsid protein of SARS-CoV-2 is produced at high levels within infected cells, enhances the efficiency of viral RNA transcription, and is essential for viral replication (Savastano et al., 2020). It is reported that the SARS-CoV-2 RNA is likely to persist for a long time in untreated wastewater (Ahmed et al., 2020). Consequently, it is essential to elucidate the interactions of the MPs with the nucleocapsid protein and SARS-CoV-2 RNA fragment. Further studies are warranted to evaluate the interaction of the MPs with other structural proteins of SARS-CoV-2, e.g. spike, membrane, and envelope. Furthermore, the interactions as addressed in this study are the first stepping stone to meet our precautionary demand for options to handle any new versions of the coronavirus that might emerge in the future.
It was also found that there are differences in the interaction affinity between the MPs of different compositions and the SARS-CoV-2 RNA fragment (Figs. 1 and 2). Notably, the molecular parameters of the PS monomer performed very differently in affecting the interaction affinity compared with the other MP monomers (Fig. 4). The benzene ring contained in PS allows it to form π-π interactions with the SARS-CoV-2 RNA fragment that might modulate the interaction affinity. The differences in the composition of MPs are most directly reflected in the functional groups contained in their polymeric structural units. The properties of the MP monomer compounds can determine the mechanism of interaction of MPs with organic pollutants, which in turn exhibits a different interaction affinity for organic pollutants (Lee et al., 2014). In addition, changes in environmental conditions such as temperature can modulate the interaction between the MPs and the SARS-CoV-2 RNA fragment (Fig. 3). Other factors such as pH, salinity, and dissolved organic matter, which may result in differences in the interaction, can also not be neglected. Accordingly, the single and combined effects of different environmental factors on the interaction of the MPs and SARS-CoV-2 will need to be considered in subsequent studies.
It is undeniable that in silico methods still have limitations in both space and time scales, which weakens their correlation with experimental observations and available experimental data. Moreover, quality assurance is required to minimize uncertainty in the calculation of toxicological data. In spite of this, in the face of the urgency of the COVID-19 pandemic, in silico methods are a useful tool to investigate the interaction of environmental pollutants such as MPs with the novel coronavirus, particularly the proposed methodologies that rely upon alternatives to biological testing with high risk of infection. Furthermore, in silico methods have the advantages of preliminary screening of high-risk combinations of multiple co-existing pollutants (e.g. SARS-CoV-2 and MPs) in the environment, and it will save valuable research time and efforts (e.g. model validation) as well as prevent infection during experimental testing.
Conclusions
In this work, we carried out MD simulations to investigate the interactions between five MPs and RNA fragments of three viruses, SARS-CoV-2, SARS-CoV-1, and HBV, at temperatures ranging from 223 to 310 K, in the vacuum and water phases. The estimated E_int implied that the interactions of the MPs with the SARS-CoV-2 RNA fragment were stronger than those with the SARS-CoV-1 and HBV RNA fragments, regardless of the environmental medium, temperature, and MP type. Furthermore, electrostatic and hydrophobic processes were the predominant mechanisms for the interactions between the MPs and the SARS-CoV-2 RNA fragment, and the interaction affinity was associated with the inherent structural parameters (i.e., V_M, PSA, and MTI) of the MP monomers. Our theoretical results suggest that MPs are capable of regulating the behavior and fate of the SARS-CoV-2 RNA fragment in the environment. Since MPs are present in air, food, and water, plastic pollution could be a secondary pathway for the transmission of human pathogenic viruses and hence has consequences for human exposure to SARS-CoV-2, both via the respiratory pathway (enhancing potential exposure) and via the touch pathway, where the plastic surface binds the SARS-CoV-2 RNA fragment and thus lowers potential exposure and infection risk for humans. It should be noted that the SARS-CoV-2 RNA fragment can be immobilized by MPs, which are ubiquitous in human environments, and their persistence and circulation would thus prolong the presence of viral RNA in the environment. This in silico work serves to minimize the challenges of conducting time-consuming and labor-intensive virus experiments with a high risk of infection, while meeting our precautionary need for options to deal with any new versions of coronaviruses that may emerge in the future.
Declaration of competing interest
The authors declare no competing financial interest.
"year": 2022,
"sha1": "3ef44e80656f5fdf0ce386d1b616e4c7766f8baa",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.scitotenv.2022.156812",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "24b62cd8ab146db104fbf1c266c17f120bfd7237",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250294668 | pes2o/s2orc | v3-fos-license | Multi-Objective Optimization of Sustainable Concrete Containing Fly Ash Based on Environmental and Mechanical Considerations
: Infrastructure design, construction and development experts are making frantic efforts to overcome the overbearing effects of greenhouse gas emissions resulting from the continued dependence on the utilization of conventional cement as a construction material on our planet. The amount of CO 2 emitted during cement production, transportation to construction sites, and handling during construction activities to produce concrete is alarming. The present research work is focused on proposing intelligent models for fly ash (FA)-based concrete comprising cement, fine and coarse aggregates (FAg and CAg), FA, and water as mix constituents based on environmental impact (P) considerations in an attempt to foster healthier and greener concrete production and aid the environment. FA as a construction material is discharged as a waste material from power plants in large amounts across the world. Its utilization as a supplementary cement ensures a sustainable waste management mechanism and is beneficial for the environment too; hence, this research work is a multi-objective exercise. Intelligent models are proposed for multiple concrete mixes utilizing FA as a replacement for cement to predict 28-day concrete compressive strength and life cycle assessment (LCA) for cement with FA. The data collected show that the concrete mixes with a higher amount of FA had a lesser impact on the environment, while the environmental impact was higher for those mixes with a higher amount of cement. The models which utilized the learning abilities of ANN (-BP, -GRG, and -GA), GP and EPR showed great speed and robustness with R 2 performance indices (SSE) of 0.986 (5.1), 0.983 (5.8), 0.974 (7.0), 0.78 (19.1), and 0.957 (10.1) for Fc, respectively, and 0.994 (2.2), 0.999 (0.8), 0.999 (1.0), 0.999 (0.8), and 1.00 (0.4) for P, respectively. Overall, this shows that ANN-BP outclassed the rest in performance in predicting Fc, while EPR outclassed the others in predicting P. Relative importance analyses conducted on the constituent materials showed that FA had relatively good importance in the concrete mixes. However, closed-form model equations are proposed to optimize the amount of FA and cement that will provide the needed strength levels without jeopardizing the health of the environment.
Introduction
Fly Ash
Substantial research has been conducted to evaluate using various supplementary cementitious materials (SCMs) such as fly ash (FA), slag, metakaolin, rice husk ash (RHA), silica fume (SF), and natural pozzolan for the partial replacement of Portland cement, which releases remarkable quantities of CO 2 [1][2][3][4].Almost 7% of global anthropogenic greenhouse gas emissions originate from cement manufacturing [5][6][7][8].Coal combustion for energy production is the primary industrial process for generating FA.The reduction in cement consumption, as well as eliminating fly ash disposal costs and environmental risk, are the most striking bonuses of FA use in concrete [9].Pozzolanic activity, low water demand, reduced bleeding, and lesser heat evolution are some of the reasons why FA has been widely adopted in the construction industry as a binder replacement [10][11][12].While fly ash continues to be produced in large volumes, especially in large cement-producing countries such as China and India, other countries in Europe and North America have discontinued coal-fired power plants for environmental considerations.In such countries, the use of fly ash continues through beneficiating old fly ash deposits, including in landfills.
The amorphous silica in FA undergoes a chemical reaction with calcium hydroxide during cement hydration and generates additional calcium silicate hydrate, further enhancing mechanical properties and durability [13][14][15][16]. Various studies have demonstrated that the strength increment of FA concrete continues for a longer period of time compared to ordinary concrete, owing to pozzolanic reactions [17]. Hence, FA can improve the long-term compressive strength of different types of concrete [18,19]. From a microstructural point of view, FA concrete specimens after early-age curing exhibit a copious amount of un-hydrated spherical FA particles. Therefore, low compressive strength has been reported during the initial stages of curing. Conversely, un-hydrated FA particles are less present after long-term curing. Hence, the microstructure of concrete incorporating FA becomes denser in the long term [20]. Design codes such as ACI 211 [21] indicate that replacing 15% to 25% of cement with FA in high-strength concrete could be an optimum dosage. The particle size of FA is another key factor that affects the compressive strength of concrete. It has been reported that higher compressive strength was attained in concrete containing FA with a finer particle size distribution, compared to that of ordinary FA [22]. In addition to compressive strength, FA fineness has a significant effect on the shrinkage of concrete. In concrete incorporating coarse FA, much lower drying shrinkage was reported [23]. It has been posited that a 50% substitution of OPC with FA results in a 30% decrease in shrinkage compared to typical concrete [24]. FA also affects the porosity and transport mechanisms in concrete. For instance, Supit and Shaikh [25] showed that the presence of FA in concrete reduced the amount of permeable voids by 6 to 11% compared to that in concrete containing OPC. Ravina and Mehta [26] studied the effects of FA on the properties of concrete made with FA in the range of 35 to 50%. They reported that the mixing water required for a certain slump was reduced by 5 to 10% for concrete incorporating FA. A mercury intrusion porosimetry test confirmed that FA promoted the density of the cementitious matrix. The paste-aggregate interfacial bond in the concrete is also enhanced by FA. Conversely, Mardani-Aghabaglou et al. [27] reported that concrete samples including FA had much higher permeable voids when compared to OPC counterparts. It has been stated that the void content could be increased by increasing the FA cement replacement level in the mixture [28,29].
Sorptivity is also reduced by incorporating FA in concrete.It was found that FA addition decreased the permeability of both cement paste and the transition zone around the aggregates.Several studies have been conducted on the effects of class F FA on the compressive strength of concrete.It has been indicated that compressive strength is reduced with the increment of FA content in concrete, as class F FA contains a small quantity of lime.However, compressive strength is boosted at later phases of curing as a result of pozzolanic activity [30][31][32][33].Chloride ion penetrability in concrete can be notably decreased by class F FA. Wang et al. [34] have expressed that class F fly ash is highly effective at lessening chloride ion penetrability owing to its micro-filler effect and pozzolanic activity.The influence of various replacement levels of FA on the chloride ion penetrability of concrete was investigated by Chindaprasirt et al. [22] using three different test setups.In all test procedures, as the amount of FA in the concrete increased, the chloride ion penetrability in the concrete remarkably decreased.
Moreover, the source and type of FA could be another influential factor that affects the mechanical properties of concrete.For instance, high-early compressive strength could be achieved by high calcium FA compared to low calcium FA [35].As reported by Malhotra [36], compressive strength growth differs with variation in FA content, and this is not consistent with the amorphous silica percentage; FA with high silica content results in a slower strength increase in comparison with its counterparts.
Optimization Method
Kate et al. [37] evaluated and optimized the long-term mechanical characteristics of concrete reinforced with crimped steel fibers.Multiple regression analysis was used to examine the extent of FA influence on the mechanical characteristics of concrete.It was claimed that the Taguchi methodology was an efficient strategy for reducing the total amount of exploratory research.It was also mentioned that the mechanical properties of high-strength, high-volume FA in steel-fiber-reinforced concrete offer an alternative sustainable option for the concrete industry.In another study, the optimization of the mixture proportions of green and sustainable concretes was accomplished [38].In this respect, the model to be optimized took concrete effect into account, as well as the unit cost and environmental impact.As a result, a novel prediction technique dubbed "Marine predator programming" was developed to model and anticipate certain functional features.Three forecasting strategies were used to evaluate the effectiveness of the introduced machine learning model: artificial neural networks, support vector machines, and second polynomial regression.As a result, an innovative sustainable model was developed, and mixture components of sustainable and green concrete types for various compressive strength classes were designed.The findings suggest that marine predator programming is highly capable of estimating a variety of tangible properties.Green mixtures lessened the environmental index by 74.37% and 67.83%.
Environmental Evaluation
In recent decades, a myriad of research has been performed to mitigate the adverse effects of using OPC in concrete.The most utilitarian method to understand the impact of different SCMs on the environment is life cycle assessment (LCA).LCA is remarkable in modeling the complex process of manufacturing concrete based on environmental considerations [39,40].Multifunctional processes are among the most challenging problems in LCA [41].Specifically, the production of electricity and FA from coal combustion cannot be considered independently, and is a rather multifunctional process [42].This distribution of environmental impact is known as 'allocation'.The findings of this allocation may then be engaged in the evaluation of the environmental performance of FA later in its life cycle, such as when it is mixed into concrete.The evaluation of environmental impact corresponding to product stage was investigated by Chen et al. [43].In this research, no allocation procedure was performed for products of waste status.On the other hand, economic ratio allocation approaches were applied for products of by-product status.It is noteworthy that because of the importance of replacing OPC with different SCMs, numerous LCA studies have been conducted on concrete incorporating FA [41][42][43][44][45]. Using FA in concrete makes use of such a waste product and substitutes OPC in concrete, both of which are beneficial to the environment.It is generally known that a reduction in the amount of OPC used in concrete could indeed boost the overall environmental efficiency of mix designs since, by far, OPC is the ingredient which has the greatest environmental impact [46].From a greenhouse gas emission point of view, it has been demonstrated that replacing OPC with FA is environmentally lucrative when the distance between coal and cement plants is notably large [47].Taking these key aspects into account, increased OPC replacement with FA has significant environmentally positive impacts.
Research Significance
With the colossal global growth of greenhouse gas (GHG) emissions, polar ice caps have been melting rapidly in the Antarctic and Arctic, extreme weather events have inflicted economic damage, and climate change effects have been accelerating.The cement and concrete industry contributes substantial GHG emissions [48,49].Demand for concrete continues to grow worldwide.The concrete ingredient that contributes the most to GHG is Portland cement (PC).It is estimated that, by 2050, the consumption of cement will grow from the current 4.2 to 5.2 billion tons [50,51].Using the present techniques of cement production, about 850 kg of CO 2 is released into the atmosphere per each ton of clinker produced [52].Thus, alternative materials are needed for replacing cement to decrease the negative effect of concrete production.Using recycled material or wastes in concrete could alleviate its environmental footprint.Using supplementary cementitious materials (SCMs) is a promising solution to decrease PC consumption while disposing of waste materials from diverse industries [53][54][55].SCMs have been a primary focus for enhancing concrete sustainability [56,57].Agricultural wastes such as palm oil fuel ash (POFA), rice husk ash (RHA), olive pomace ash (OOA), sugarcane bagasse ash (SBA), and industrial wastes such as fly ash (FA) and silica fume (SF) can be used as partial replacements for cement in sustainable concrete [58,59].It is annually estimated that the worldwide production of fly ash surpasses 900 million tons [60], with 580 million tons produced in China [61], 43.5 million tons of contribution by the United States [62], 169.25 million tons in India [63], and 14 million tons in Australia [51].Fly ash has been used for centuries as an ingredient in cement, but its current utilization rate is only about 53.5% [60].Waste management can lead to a number of problems, including pollution, water shortages, and the spread of disease [51].Over the past few years, much investigation has been carried out on the application of concrete with mineral admixtures.The reaction with pozzolan, by which FA improves the microstructure and physical properties of concrete, is usually slow, so the improvements which are given to the material properties of concrete and its microstructure are mainly reflected at later stages.As a result, the early strength of concrete with FA is lower [64].FA is a replacement material for construction which is beneficial because of its chemical features.For instance, by using FA instead of PC in concrete production, you can reduce the amount of both PC and water needed.The result is a more robust concrete that is stronger, more durable, and has higher mechanical performance [65].Additionally, several studies have shown that fly ash enhances the durability and workability of concrete when used in concrete.Using fly ash instead of PC induces more porosity without eroding the average pore size, according to Chindaprasirt et al. [66].Moreover, by increasing the content of fly ash, the volume of the gel pore increases by 5.7 for 10 nm.Adding large amounts of fly ash makes it harder for chloride ions in water to penetrate into the concrete, which helps prevent corrosion [67].A study performed by Hussain et al. 
[68] investigated whether fly ash concrete from high-strength samples had equivalent compressive strengths to plain concrete; fly ash concrete had greater compressive strengths than ordinary concrete.Fly ash (as a partial replacement for cement) decreases the initial strength of concrete, but after 56 to180 days, concrete strength increases considerably (after exposure to high temperatures) and concrete strength is greatly increased by using fly ash [69].Mabibi reported [70] that concrete's resistance and chloride migration coefficient could be improved by replacement with fly ash, as could the alkali-silicon reaction, although carbonization resistance would be reduced.As for fine aggregates and cement, Liu et al. [51] investigated shrinkage in creep and curing, compression strength, and carbon dioxide emissions from concrete containing fly ash or ground granulated blast-furnace slag (GGBS).Their proposed model accurately predicted the creep strain of concrete by including a parameter to take into account the effect of fly ash content.Various waste products were used to partially replace sand and cement in Garg et al.'s study [71].They proposed a model for predicting the compressive strength of concrete made up of fly ash and slag using an adaptive fuzzy logic model.SCC was used in Zhao et al.'s study, which employed FA at five levels (0, 20, 30, 40, and 50 percent) [72].The mechanical properties and water porosity of the FA series SCCs, as well as their transportation properties, were investigated.
In this research work, intelligent models are proposed using three different operative algorithms-ANN (GA, GRG and BP), genetic programming (GP), and evolutionary polynomial regression (EPR)-to test the compressive strength of 28-day-cured concrete containing fly ash (FA) based on environmental impact assessment considerations for minimizing global warming potential (GWP) effects.Figure 1 presents the structural and environmental benefits of adding FA as a supplementary cement in concrete, and shows the primary focus of this research work.
Research Methodology
Data Collection
The extensive literature search conducted for the present research work showed that fly ash (FA) has been studied for its potential as an alternative and/or partial replacement for cement in the fight against global warming resulting from the production, transportation, and utilization of cement.The mission to save our planet from the plaguing greenhouse gas emissions (GHGE) emanating majorly from PC is in top gear, and one of those steps has been to gradually eliminate the use of PC in concrete production and construction activities entirely.Results from recent research works on the utilization of fly ash in concrete production [73][74][75][76][77][78][79][80][81][82][83][84][85] were gathered, and multiple data were collected, tabulated, and utilized to propose intelligent predictions for a design, production, and performance evaluation of FA-based concrete.
Collected Database and Statistical Analysis
At the end of the literature search, 112 records were collected from experimental tested concrete mixtures with different component ratios.Each record contained the following data: water content (W) kg/m 3 , cement content (C) kg/m 3 , fly ash content (FA) kg/m 3 , fine aggregate (sand) content (FAg) kg/m 3 , coarse aggregate content (CAg) kg/m 3 , 28-day cylinder compressive strength of the concrete (Fc28) MPa, and environmental impact factor (P).The collected records were divided into a training set (90 records) and a validation set (22 records).Table 1 includes the complete dataset, while Tables 2 and 3 summarize their statistical characteristics and the Pearson correlation matrix.Table 2 presents the minimum (Min) and maximum (Max) values of the studied data against the parameters of the 28-day-cured FA-based concrete.The average (Avg), the standard deviation (SD), and variance (Var) of the data are also presented in Table 2.These are presented for the training sets and validation sets.Table 3 shows how consistently correlated the input parameters are with the concrete strength (Fc) and environmental impact factors (P).Cement also showed a higher and more consistent correlation with the outputs (Fc and P) than any other parameter in the study.However, intelligent models are being proposed to optimize the consistency and relationship between FA and the outputs (Fc and P).Achieving this would achieve the optimal utilization of FA in the place of PC (C) to attain a safer environment and minimize the impact of concrete production and construction activities on the environment (P).Finally, Figure 2 shows the histograms for both inputs and outputs.While the studied input parameters showed a unimodal distribution of data, FAg and CAg showed a bimodal unsymmetrical distribution.Meanwhile, the output variables Fc and P showed unimodal unsymmetrical data distribution.In the Fc28 data distribution, it appears that the concrete strength in the 40MPa and 50MPa bin had the highest frequency, while in the P data distribution, the environmental impact % of 7 to 9 had the highest frequency.Meanwhile, Fc28 seems to partially skew towards the left.
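For readers who wish to reproduce the bookkeeping behind Tables 2 and 3, the sketch below shows one plausible way to derive the per-variable summaries, the Pearson correlations with Fc28 and P, and the 90/22 training/validation split. It is an assumption-laden illustration, not the authors' script: the file name, column labels, and random seed are all hypothetical.

```python
# Illustrative sketch (assumed file and column names) of the dataset statistics.
import pandas as pd

df = pd.read_csv("fa_concrete_mixes.csv")  # hypothetical file with the 112 records
cols = ["W", "C", "FA", "FAg", "CAg", "Fc28", "P"]

summary = df[cols].agg(["min", "max", "mean", "std", "var"]).T  # cf. Table 2
corr = df[cols].corr(method="pearson")[["Fc28", "P"]]           # cf. Table 3

train = df.sample(n=90, random_state=1)   # 90 training records
valid = df.drop(train.index)              # 22 validation records
print(summary, corr, f"train={len(train)}, valid={len(valid)}", sep="\n")
```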
Research Program and Modelling Plan
Five different artificial intelligence (AI) techniques were used with the gathered dataset to predict both the 28-day compressive strength (Fc28, MPa) and the environmental impact factor (P) of the FA-based concrete. The implemented techniques were "artificial neural network (ANN-BP, ANN-GRG and ANN-GA)", "genetic programming (GP)" and "evolutionary polynomial regression (EPR)". The inputs were water content (W, kg/m 3), cement content (C, kg/m 3), fly ash content (FA, kg/m 3), fine aggregate (sand) content (FAg, kg/m 3) and coarse aggregate content (CAg, kg/m 3).
Each implemented technique was based on a different approach: mimicking the human brain for ANN, the optimization of mathematical regression for EPR, and simulating the evolution of natural creatures for GP.However, for all techniques, their accuracies were evaluated in terms of the "sum of squared errors (SSE)", "root mean squared errors (RMSE)", and the "determination coefficient (R 2 )".
Genetic Algorithm (GA)
The GA is a mathematical technique which simulates the evolution process of biological creatures [60]. It depends on one simple rule: "The most fitting creature will survive". To apply this principle to optimization, there must be a pool of solutions for the considered problem, a fitting criterion, and a procedure to generate new solutions by mixing the existing ones [60]. Biological creatures transfer their data to the next generation in an arranged series of genes called "chromosomes"; similarly, the GA presents each solution (chromosome) as an arranged list of steps (genes) [60]. This allows the GA to apply genetic operations (such as crossover and mutation) to the solutions [60]. Crossover is a mixing procedure used to generate two new solutions from two existing ones by swapping the heads and tails of the two existing solutions [60]. Mutation represents the random change in genetic data due to radiation, chemicals, and copying errors; it is applied by randomly changing a step of the considered solution. The algorithm cycle begins with generating a set of random solutions for the considered problem (the population), evaluating the fitness of each solution using the fitting criterion, selecting the best-fitting solutions and deleting the rest, and finally restoring the original population size by mixing the surviving solutions (using crossover and mutation procedures) to generate new ones, and then the cycle starts again [60]. Cycle after cycle, the fitness of the solutions increases until the accepted level is reached.
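The cycle just described can be condensed into a few lines. The following is a toy sketch only, not the configuration used in this study: the bit-string encoding, the fitness function, and all population sizes are arbitrary stand-ins chosen to make the select-crossover-mutate loop explicit.

```python
# Toy GA: random population -> select fittest -> crossover + mutation -> repeat.
import random

def fitness(sol):            # toy criterion: count of 1-bits
    return sum(sol)

def crossover(a, b):         # swap heads and tails at a random cut point
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(sol, rate=0.02):  # random "copying errors"
    return [g ^ 1 if random.random() < rate else g for g in sol]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):                         # one cycle per generation
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                    # keep the fittest, delete the rest
    children = []
    while len(survivors) + len(children) < 30:  # restore the population size
        a, b = random.sample(survivors, 2)
        c1, c2 = crossover(a, b)
        children += [mutate(c1), mutate(c2)]
    pop = survivors + children[:20]
print("best fitness:", fitness(max(pop, key=fitness)))
```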
Genetic Programming (GP)
GP is an application of the previously mentioned GA technique [60].It depends on using the GA as a "multi-variable and structure-free regression technique", where the population is a set of randomly generated mathematical formulas and the fitting criteria is the "Sum of Squared Errors (SSE)" between the predicted values and the correct values of the training dataset [60].In order to apply genetic operations, each solution (formula) must be presented in genetic form (as chromosome).Instead of the steps list of the GA, the chromosome consists of two parts; the first is a list of mathematical operators (=, +, −, *, /, . . . ) and the second one is a list of variables [60].Crossover and mutation procedures are applied to both formula operators and variables separately to generate new formulas (solutions).Cycle after cycle, the SSE decreases, and the accuracy of the solutions (formulas) increases.Finally, the accuracy of the developed formula is tested using a new validation dataset.
Evolutionary Polynomial Regression (EPR)
EPR is another application of the GA, and it depends on optimizing the number of terms of "Traditional Polynomial Regression (TPR)" [60]. TPR is a well-known mathematical regression technique that uses the "Least Squared Error" principle to find the optimum coefficient values of a certain polynomial function to fit a certain dataset [60]. The considered polynomial may be single- or multi-variable depending on the considered problem configuration (dataset) [60]. The chosen polynomial degree (its highest power) depends on the complexity of the considered problem; first-degree polynomials (linear) may be used for simple problems, while for more complicated ones, second-degree (quadratic), third-degree (cubic) or higher degrees may be required [60]. The number of polynomial terms dramatically increases with increasing variable numbers and polynomial degree; for example, a two-variable second-degree polynomial has only 6 terms (X 2 + Y 2 + XY + X + Y + C), while a three-variable third-degree polynomial has 20 terms, a four-variable fourth-degree polynomial has 70 terms, and so on [60] (the short sketch below reproduces these counts). As the number of polynomial terms increases, the polynomial becomes more difficult to apply in practice. Hence, the EPR technique aims to optimize the TPR by eliminating the less important terms and keeping only the most effective ones using the GA technique [60]. Thus, the population (solutions) consists of a set of polynomials, the fitting criterion is the "Sum of Squared Errors (SSE)", the chromosome consists of a list of polynomial terms, and the length of the chromosome is the chosen number of terms. Cycle after cycle, the most important terms accumulate in the surviving chromosomes, and the less important ones are deleted.
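The term counts quoted above follow the standard combinatorial identity that a full polynomial of degree d in n variables has C(n + d, d) terms, constant included. The quick illustration below (not the EPR implementation itself) enumerates the terms and reproduces the 6, 20, 70, and 56 counts; the five-variable labels simply mirror the mix constituents used later.

```python
# Enumerate all monomials up to a given degree and count them.
from itertools import combinations_with_replacement
from math import comb

def poly_terms(variables, degree):
    terms = ["1"]  # constant term
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement(variables, d):
            terms.append("*".join(combo))
    return terms

print(len(poly_terms(["X", "Y"], 2)))                      # 6
print(len(poly_terms(["X", "Y", "Z"], 3)))                 # 20
print(len(poly_terms(["X", "Y", "Z", "U"], 4)))            # 70
print(len(poly_terms(["W", "C", "FA", "FAg", "CAg"], 3)))  # 56 = 35 + 15 + 5 + 1
print(comb(5 + 3, 3))                                      # 56, the closed form
```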
Artificial Neural Network (ANN)
The ANN is an umbrella term for a wide range of AI techniques that depend on mimicking the behavior of biological neurons [60]. They all consist of nodes (cells or neurons) and links connecting the nodes, but they have different neuron arrangements and connection patterns [60]. The "Multi-Layer Perceptron (MLP)" is one of the earliest and most common ANN types, and it is the type commonly used for regression problems [60]. It consists of a number of nodes arranged in layers; the first layer is called the "Input layer" and is used to receive the input values, while the last layer is called the "Output layer" and is used to deliver the output values [60]. Between the input and the output layers, there are a number of intermediate layers called "Hidden layers" which are responsible for predicting the outputs from the inputs. An MLP must have at least one hidden layer. Each node in a certain layer is connected to all the nodes in the previous and the next layers by links, but the nodes within a layer are not connected to each other [60]. Each link has an importance factor called a "Weight", and each node has a triggering formula called an "Activation Function"; this could be any nonlinear function, but the most popular ones are the sigmoid, the hyper-tan, and the ramp functions, which are responsible for the nonlinear capability of the ANN [60]. Due to the variation in the ranges of input values, all inputs must be scaled to a unified range; this process is called "Standardization" if each input is centered and divided by its standard deviation (SD), "Normalization" if the inputs are scaled between 0 and 1, and "Hyper-normalization" if the inputs are scaled between -1 and 1. The scaled inputs propagate from the input layer to the output layer through the hidden layers. The output of a certain node is the result of applying its activation function to the summation of the node inputs multiplied by the corresponding link weights [60]. After the output layer, the outputs must be de-scaled to their original range. Any ANN model must be trained using a given dataset; during the training process, the weight values of the model's links are adjusted to predict the correct outputs from the inputs [60]. There are many training techniques that can be used to find the optimum values for the links' weights, such as "Back Propagation (BP)", the "Gradually Reduced Gradient (GRG)" and the "Genetic Algorithm (GA)".
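To make the propagation just described concrete, here is a minimal forward pass for the 5:7:2 layout used in this study, with hyper-tan activations and inputs hyper-normalized to [-1, 1]. The weights and scaling bounds are random or assumed stand-ins, not the trained matrices reported in Tables 4-6.

```python
# Minimal 5:7:2 MLP forward pass with tanh activation and [-1, 1] input scaling.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(7, 5)), rng.normal(size=7)   # input -> hidden
W2, b2 = rng.normal(size=(2, 7)), rng.normal(size=2)   # hidden -> output

def hyper_normalize(x, lo, hi):
    return 2 * (x - lo) / (hi - lo) - 1                # scale to [-1, 1]

x = np.array([180.0, 300.0, 120.0, 700.0, 1100.0])     # W, C, FA, FAg, CAg (kg/m3)
lo = np.array([120.0, 130.0, 0.0, 500.0, 800.0])       # assumed dataset bounds
hi = np.array([250.0, 500.0, 250.0, 900.0, 1300.0])

h = np.tanh(W1 @ hyper_normalize(x, lo, hi) + b1)      # hidden layer
fc28_scaled, p_scaled = np.tanh(W2 @ h + b2)           # two outputs, still scaled
# In the real model these would be de-scaled back to MPa and % respectively.
print(fc28_scaled, p_scaled)
```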
ANN Using "Back Propagation (ANN-BP)"
BP is the earliest ANN training technique and has become the default training technique in most commercial ANN software [60]. During the training process, the data propagate forward from the input layer to the output layer to calculate the output values; then, the calculated values are compared to the correct ones from the training dataset, and the errors are back-propagated from the output layer to the input layer, which is why the technique is called "Back Propagation" [60]. During the back propagation, the error values are divided among the links according to their weights. Each updated weight equals the original weight minus its share of the error. BP is a sequential training technique where the ANN weights are updated record by record [60]. This iterative process is slow, but it requires limited computational capability.
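A toy record-by-record BP update for a tanh network of the same shape is sketched below. The learning rate, the training record, and its targets are arbitrary illustrations; the point is only the forward pass, the backward error split, and the per-record weight update.

```python
# One sequential BP step per call: forward pass, back-propagated error, update.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(7, 5)), np.zeros(7)
W2, b2 = rng.normal(size=(2, 7)), np.zeros(2)
lr = 0.01

def bp_step(x, y):
    global W1, b1, W2, b2
    h = np.tanh(W1 @ x + b1)                 # forward pass
    out = np.tanh(W2 @ h + b2)
    err_out = (out - y) * (1 - out**2)       # output-layer error (tanh derivative)
    err_hid = (W2.T @ err_out) * (1 - h**2)  # error shared backwards via weights
    W2 -= lr * np.outer(err_out, h); b2 -= lr * err_out
    W1 -= lr * np.outer(err_hid, x); b1 -= lr * err_hid
    return float(((out - y) ** 2).sum())     # squared error for this record

x = rng.uniform(-1, 1, size=5)               # one scaled training record
y = np.array([0.3, -0.2])                    # its scaled targets
print([round(bp_step(x, y), 4) for _ in range(5)])  # error shrinks per pass
```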
ANN Using "Gradually Reduced Gradient (ANN-GRG)"
GRG is a well-known mathematical regression technique [60].It is used to optimize the coefficients of a certain formula to fit a certain dataset by minimizing the SSE between the predicted and the correct values of the database [60].The technique begins with assuming random values for the coefficients of the formula, and then continues by gradually changing the values of the coefficients one by one while monitoring the SSE value [60].If changing the value of a certain coefficient decreases the SSE value, the process continues, and if it increases the SSE value, the change is applied with the opposite sign.This cycle continues until the minimum SSE value is achieved.This technique is used to train the ANN models by considering the whole ANN as one huge and extremely complicated formula, and the links' weights are its coefficients.The GRG is used to gradually adjust the values of the weights (coefficients) to minimize the SSE of the ANN [60].This training technique is classified as the "Batch technique" because it deals with the error of the whole dataset at once, unlike sequential procedures such as BP.Hence, it is faster than BP but requires much more computational capability to deal with the whole dataset together.
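The coefficient-by-coefficient search just described behaves like the toy coordinate search below, fitted here to a small synthetic linear problem rather than to an ANN. The step size, data, and decay schedule are illustrative assumptions only.

```python
# Toy GRG-style search: perturb one coefficient at a time, keep the change
# (or flip its sign) only if the SSE drops; gradually reduce the step.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(size=(40, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3          # target coefficients + bias

def sse(c):
    return float(((X @ c[:3] + c[3] - y) ** 2).sum())

c = rng.normal(size=4)                             # random starting coefficients
step = 0.1
for _ in range(200):                               # cycles
    for i in range(4):                             # one coefficient at a time
        for delta in (step, -step):                # try the opposite sign too
            trial = c.copy(); trial[i] += delta
            if sse(trial) < sse(c):
                c = trial
                break
    step *= 0.99                                   # gradually reduce the step
print(np.round(c, 2), round(sse(c), 4))            # approaches [2, -1, 0.5, 0.3]
```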
ANN Using "Genetic Algorithm (ANN-GA)"
Similar to GRG, the GA technique deals with the ANN as one huge formula that needs optimization [60]. The GA training technique begins with generating a random set of solutions; each solution is a list of ANN weights [60]. Next, the fitness of each solution (SSE) is evaluated, and the most fitting solutions are selected and used to generate new solutions (lists of weights) by applying crossover and mutation [60]. In this technique, crossover is applied at multiple points along the chromosome, not just at the middle as in the original GA technique [60]. Cycle after cycle, the model converges to the optimum list of weights. The accuracy of this technique is not as sharp as that of BP and GRG because the initial weight values of the randomly generated solutions are not changed during training; only the combinations of the weights change [60]. For example, if the initial random values of a certain weight are 0.214, 0.558 and 0.331 and the correct value is 0.472, then 0.558 will appear in the final model [60]. Although this may not be the optimum value, it is the closest available one to the optimum. However, this error could be insignificant if the random population is large enough; for example, the error of a randomly generated population of 1000 records with a uniform probability density function is 1/1000 of the weight range, which is insignificant [60]. As an AI technique, the GA is less efficient than the GRG for a problem with a limited number of variables, but as the number of variables increases, the GRG becomes very complicated and requires a lot of time and computational resources, and hence the GA presents a much faster and less resource-consuming technique.
Model Performance Assessment
The models were evaluated using performance indices including the coefficient of determination (R 2), the sum of squared errors (SSE), and the root mean squared error (RMSE), which is embedded in the Taylor diagram. The R 2 shows how well the developed models fit the measured data; for example, an R 2 value of 0.85 shows that 85% of the variation in the studied data is explained by the model. Generally, R 2 ranges between 0 and 1, where a value of 1 indicates a perfect fit and a value of 0 indicates no fit. It is statistically computed as R 2 = SS regression / SS total, where SS regression is the sum of the squares due to regression and SS total is the total sum of the squares. Additionally, the SSE is a measure of the discrepancy between the measured data and the estimated models; this is a commonly used statistical error measurement in recent research works, as sketched below. The next section presents the results of each technique and their accuracy metrics.
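The three indices can be computed in a few lines, as in the sketch below; the measured and predicted arrays are invented for illustration, not the paper's data. Note that R 2 = 1 − SSE/SS total coincides with SS regression/SS total for a least-squares fit with an intercept.

```python
# SSE, RMSE, and R2 for a set of measured vs. predicted values (toy numbers).
import numpy as np

measured = np.array([42.5, 33.6, 11.9, 28.4, 51.0])
predicted = np.array([41.8, 34.1, 12.6, 27.5, 49.7])

residuals = measured - predicted
sse = float((residuals ** 2).sum())
rmse = float(np.sqrt((residuals ** 2).mean()))
ss_total = float(((measured - measured.mean()) ** 2).sum())
r2 = 1 - sse / ss_total  # = SS_regression / SS_total for a least-squares fit

print(f"SSE={sse:.2f}  RMSE={rmse:.2f}  R2={r2:.3f}")
```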
Behavior of the Concrete Mixes and Environmental Impact (EI)
It is conventional wisdom that increasing the amount of Portland cement used in concrete improves its mechanical properties, including compressive strength (Fc). From the collected data in Table 1, it can be observed that our results are consistent with this expectation that more cement yields more strength. However, the focus of this work is on underscoring what effect the addition or replacement of cement with FA has on concrete strength and, of course, the environmental impact of this mixing process. For example, a mix that recorded a relatively high compressive strength of 42.5 MPa contained 459.46 kg/m 3 of cement and 68.92 kg/m 3 of FA, with an environmental impact potential (EIP or P) of 9.5%. Meanwhile, another concrete mix recorded 33.6 MPa with a cement content of 283.79 kg/m 3 and an FA content of 167.24 kg/m 3; however, it had an environmental impact potential of 8.3%. This shows a reduced impact on the environment with a higher amount of FA compared to the first mix, although the P was still high. The mix with the lowest EIP of all the concrete mixes recorded 3.8%; the amount of cement needed to achieve this was 131.47 kg/m 3 with a fly ash content of 68.17 kg/m 3, which resulted in a compressive strength of 11.9 MPa. Thus, there is an urgent need to keep the amount of cement as low as possible in order to maintain a reduced impact (P) on the environment while still developing a relatively high-strength concrete that meets the minimum requirements for constructed infrastructure. Certainly, from the results of the literature search [13][14][15][16], it has been observed that increased FA improves the mechanical properties of concrete due to the amorphous silica in FA, which undergoes a chemical reaction with Ca(OH) 2 to generate the strength-giving compound calcium silicate hydrate (CSH). However, the early-stage strength of the concrete is compromised due to the delay in strength gain resulting from FA inclusion. The developed intelligent models were used to determine the optimized combinations, i.e., the minimum amount of cement and the maximum amount of FA needed to develop an FA-based concrete with the required and/or minimum strength for constructed infrastructure with the lowest environmental impact.
Model (1)-Using the GP Technique
The developed GP model had six levels of complexity. The population size, survivor size, and number of generations were 250,000, 50,000, and 500, respectively. Equations (2) and (3) present the output formulas for Fc28 and P. The average errors in % of the total dataset were 19.1% and 0.8%, while the coefficient of determination (R 2) values were 0.788 and 0.999, respectively.
Models (2, 3, and 4)-Using ANN Techniques
Three models were developed using the ANN technique. All the models had the same layout (5:7:2), normalization method (−1.0 to 1.0), and activation function (hyper-tan). However, each model utilized a different training algorithm, as follows: model (2) used the traditional "Back Propagation (BP)" algorithm; model (3) used the well-known mathematical algorithm "Gradually Reduced Gradient (GRG)"; and model (4) used the well-known AI optimization technique of the "Genetic Algorithm (GA)". These three developed models were used to predict the Fc28 and P values. The network layout used is illustrated in Figure 3, while the weight matrices of each model are shown in Tables 4-6. The average errors in % of the total dataset were (5.1%, 2.2%), (5.8%, 0.8%) and (7.0%, 1.0%), and the (R 2) values were (0.986, 0.994), (0.983, 0.999) and (0.974, 0.999), respectively. The relative importance values for each input parameter are illustrated in Figure 4, which indicates that cement content (C) was the most important factor, followed by aggregate content (FAg and CAg). Fly ash and water content came last in the importance ranking.
Model (5)-Using the EPR Technique
Finally, the developed EPR model was limited to the cubic level for Fc28 and the linear level for P. For 5 inputs, there were 56 possible terms (35 + 15 + 5 + 1 = 56) for Fc28 and only 6 terms for P. The GA technique was applied to these polynomials to select the most effective 28 terms for predicting Fc28 and 3 terms for predicting P. The outputs are illustrated in Equations (5) and (6).
The average error in % and R 2 values were 10.1% (0.957) and 0.4% (1.000) for Fc28 and P, respectively. The results of all the developed models are summarized in Table 7. Figures 5-7 graphically compare the accuracies of the developed models, and the relations between the calculated and predicted values are shown in Figures 8 and 9. Khursheed et al. [73] investigated predictions for the compressive strength of fly ash concrete by adopting minimax probability machine regression (MPMR), a relevance vector machine (RVM), genetic programming (GP), an emotional neural network (ENN) and an extreme learning machine (ELM). In that research into the 28-day-cured compressive strength of concrete, MPMR, with a performance index of 0.992, was judged the decisive model for forecasting concrete strength. Meanwhile, in comparison with the model decision of Khursheed et al. [73], the present research used the learning abilities of ANN (BP, GRG, and GA), GP and EPR, and it has been shown that ANN-BP, with a performance index of 0.986, outclassed the other techniques and was adjudged the decisive technique. Compared to the previous work [73], ANN-BP achieved 98.6% efficiency with minimal errors. Clear statistical parameters, distribution figures, best-fit diagrams, and Taylor diagrams were used to judge the accuracy of these models. A further step has been taken in this research work to predict the environmental impact effect of the concrete materials (P). The same intelligent techniques used in predicting Fc28 were applied, and the outcome showed that EPR, with a perfect coefficient of determination, outclassed the other techniques in the following order: GP (0.999), ANN-GRG (0.999), and ANN-BP (0.994). This time, EPR was adjudged the decisive technique for predicting the life cycle assessment and the environmental impact potential of utilizing the concrete constituents. Finally, FA was adjudged to have a 62% degree of importance which, being a good degree above average, means it could replace cement.
Conclusions
This research presents models developed using five AI techniques (GP, ANN-BP, ANN-GRG, ANN-GA, and EPR) to predict both the 28-day compressive strength (Fc28) and the environmental impact factor (P) of FA-based concrete using water content (W), cement content (C), fly ash content (FA), fine aggregate (sand) content (FAg) and coarse aggregate content (CAg) as the independent variables of the concrete mixes. First, it can be remarked that the concrete mixes with a higher amount of cement than FA showed higher compressive strength and environmental impact, while those with a higher amount of FA showed relatively lower strength and environmental impact. Meanwhile, the closed-form equations and the proposed models provide optimization tools with which the optimal amount of FA needed to achieve the required strength with minimal environmental impact can be determined prior to the design and production of concrete. This agrees with previous research works on the utilization of high-volume fly ash in concrete [75][76][77][78][79][80][81]. The results of comparing the accuracies of the developed models can be summarized in the following points:
• Regarding Fc28, the GP model was the simplest and the least accurate one (80.9%); then came EPR with an accuracy of 89.9%; and finally the three ANN models had almost the same accuracy of ≈94.0%;
• Regarding P, all five models had almost the same accuracy (99.0%);
• The prediction accuracy of the EPR model was lower than that of the ANN models, but its outputs were closed-form equations that can be used manually or in software, unlike the ANN output, which cannot be used manually;
• The results indicate that the accuracy of the ANN model was slightly affected by the training algorithm: back propagation (BP) showed the best accuracy (94.9% and 97.8%), gradually reduced gradient (GRG) came second with accuracies of 94.2% and 99.2%, and the genetic algorithm (GA) showed the lowest accuracy, with 93.0% and 99.0% for Fc28 and P, respectively;
• The summation of the absolute weights of each neuron in the input layer of the developed ANN model indicated that, for both Fc28 and P, cement content (C) was the most important factor, followed by aggregate content (FAg and CAg); fly ash and water content came last in the importance ranking;
• Both the GP and EPR models indicated that the environmental impact factor (P) depended only on the cementitious materials (C and FA);
• The GA technique successfully reduced the 56 and 6 terms of the conventional polynomial regression formulas to only 28 and 3 terms for Fc28 and P, respectively, without a significant impact on accuracy;
• As with any other regression technique, the generated formulas are valid within the considered range of parameter values; beyond this range, the prediction accuracy should be verified.
Figure 1. The multiple structural and environmental benefits of FA in sustainable concrete.
Figure 2. Distribution histograms for inputs (in blue) and outputs (in green).
Figure 3. Architecture layout for the developed ANN models.
Figure 4. Relative importance of input parameters.
Figure 5. Comparison between developed models using the Taylor diagram for the compressive strength (Fc28) and the life cycle assessment impact point (P).
Figure 6. Comparison between developed models for Fc28 using variance diagrams.
Figure 7. Comparison between developed models for P using variance diagrams.
Table 2. Statistical analysis of collected FA-concrete database.
Table 3. Pearson correlation matrix of the FA-concrete parameters.
Table 4. Weights matrix for the developed ANN-BP model.
Table 5. Weights matrix for the developed ANN-GRG model.
Table 6. Weights matrix for the developed ANN-GA model.
Table 7. Performance and accuracies of the developed models. | 2022-07-06T15:13:03.592Z | 2022-07-04T00:00:00.000 | {
"year": 2022,
"sha1": "1c74cd0e7d0aaef1a85871105523738bab6cf889",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-5309/12/7/948/pdf?version=1656923098",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "da16776ed970fd0e59f22541ab5a9b875970de76",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
11846763 | pes2o/s2orc | v3-fos-license | Chemistry of Secondary Polyphenols Produced during Processing of Tea and Selected Foods
This review will discuss recent progress in the chemistry of secondary polyphenols produced during food processing. The production mechanism of the secondary polyphenols in black tea, whisky, cinnamon, and persimmon fruits will be introduced. In the process of black tea production, tea leaf catechins are enzymatically oxidized to yield a complex mixture of oxidation products, including theaflavins and thearubigins. Despite the importance of the beverage, most of the chemical constituents have not yet been confirmed due to the complexity of the mixture. However, the reaction mechanisms at the initial stages of catechin oxidation are explained by simple quinone–phenol coupling reactions. In vitro model experiments indicated the presence of interesting regio- and stereoselective reactions. Recent results on the reaction mechanisms will be introduced. During the aging of whisky in oak wood barrels, ellagitannins originating from oak wood are oxidized and react with ethanol to give characteristic secondary ellagitannins. The major part of the cinnamon procyanidins is polymerized by copolymerization with cinnamaldehyde. In addition, anthocyanidin structural units are generated in the polymer molecules by oxidation which accounts for the reddish coloration of the cinnamon extract. This reaction is related to the insolubilization of proanthocyanidins in persimmon fruits by condensation with acetaldehyde. In addition to oxidation, the reaction of polyphenols with aldehydes may be important in food processing.
Introduction
Recent studies have revealed various health benefits of plant polyphenols, and their importance in foods, beverages and natural medicine [1][2][3][4][5][6][7]. The polyphenols include different types of chemical compounds including catechins, proanthocyanidins, anthocyanins, gallotannins, ellagitannins, flavonol glycosides, hydroxycinnamoyl esters, lignoids, and stilbenoids. These subclasses of polyphenols show different chemical reactivities. The polyphenols are stable as long as they are accumulated in living plant cells. However, when the tissues undergo physiological changes, such as fruit ripening and wounding of the tissue by herbivores, some of the polyphenols are chemically converted to secondary polyphenols by enzymatic and non-enzymatic reactions. A typical and economically important example of secondary polyphenols is black tea polyphenols. During black tea production, fresh tea leaves are cut and kneaded. At this stage catechins and oxidation enzymes, which are stored in different tissues in the leaf, are mixed together and the catechins are converted into a complex mixture of oxidation products. Among plant secondary metabolites, polyphenols are most susceptible to oxidation and their reactivity is closely related to plant defense systems against oxidative stress. Therefore, post harvest chemical change of polyphenols occurs ubiquitously in vegetables and fruits to a greater or lesser extent. In most cases, the reactions involving the production of secondary polyphenols are complex, and many of the reaction products still remain to be chemically identified. However, the reaction mechanisms are interesting and attractive from the viewpoint of natural product chemists. This review describes the production mechanisms of black tea polyphenols from simple tea catechins. In addition, some reactions of polyphenols with coexisting compounds such as various aldehydes are also introduced.
Oxidation of (+)-catechin and (−)-epicatechin
Enzymatic or non-enzymatic oxidation is the most common reaction involved in the production of secondary polyphenols. Polyphenols having ortho-diphenol (catechol) aromatic rings are widely distributed in nature and susceptible to oxidation. The structure of dehydrodicatechin A (Figure 1) was first reported in 1969 [8], and generation of related catechin dimers and trimers by in vitro experiments followed [9,10]. These compounds are produced by nucleophilic addition of electron-rich phloroglucinol A-rings to the electron-deficient ortho-quinone of B-rings. In the case of dehydrodicatechin A, further oxidation of the catechol ring and addition of hydroxyl groups to a double bond and a ketone occurs (Scheme 1). These types of oxidation products are partly responsible for the browning of fruits and beer [11]. The in vitro oxidation of (+)-catechin via an oxidation enzyme usually yields a complex mixture, including oligomers, because each catechin molecule has three reaction sites: the C-6 and C-8 of the A-ring and the C-6 of the B-ring [12]. Catechin oxidation products related to these compounds have also been isolated from Quercus ilex [13] and some crude drugs [14,15].
Black Tea Polyphenols
The polyphenols from fresh tea leaves are quite unique. Their concentrations in the leaf are very high (13-25% of dry weight [16]) and most of the polyphenols are composed of only four monomeric catechins, that is, (−)-epicatechin (EC), (−)-epicatechin-3-O-gallate (ECg), (−)-epigallocatechin (EGC), and (−)-epigallocatechin-3-O-gallate (EGCg). Commonly, monomeric catechins coexist with proanthocyanidins in many other plants; however, the concentration of proanthocyanidins in the tea leaf is low. The coexistence of pyrogallol (3,4,5-trihydroxyphenyl)- and catechol (3,4-dihydroxyphenyl)-type catechins is also characteristic of tea leaf polyphenols. In addition, over 50% of the catechins are esterified with gallic acid, and EGC and EGCg together account for over 70% of the total tea catechins [16]. The polyphenol composition of commercial green tea consumed mainly in East Asia is similar to that of fresh tea leaves because steaming or roasting at the initial stage of green tea production inactivates the enzymes involved in the oxidation and hydrolysis of the chemical constituents of the leaves. In contrast, these enzymes play important roles in black tea manufacturing. The complex enzymatic reactions produce the color and flavor characteristic of each black tea brand. As for the polyphenols, when fresh green tea leaves are crushed at the initial stage, the four major catechins are enzymatically oxidized.
Oxidation of a mixture of these four catechins in the tea leaf proceeds in different ways from that observed for (+)-catechin alone. Coupling products between A- and B-rings, as observed in the oxidation of (+)-catechin, have not been found so far in black tea polyphenols. The most important catechin oxidation products in black tea are theaflavin and its mono- and digallates [17][18][19]. Theaflavin possesses a characteristic benzotropolone moiety, which is produced by condensation between a catechol-type B-ring of EC and a pyrogallol-type B-ring of EGC. The reaction mechanism was presumed to be as shown in Scheme 2 [20]. In vitro oxidation of a mixture of EC and EGC with polyphenol oxidase suggested that the enzymes preferentially oxidize EC to EC-quinone, and the electron-deficient EC-quinone reacted with the electron-rich EGC B-ring. Subsequent oxidation and decarboxylation afforded theaflavin. During the reaction, generation of a bicyclo[3.2.1]octane-type intermediate, as shown in Scheme 2, was presumed [17,20].
Scheme 2. Production of theaflavins from epicatechin and epigallocatechin [17,20].
Theaflavins are not final products and are thus further oxidized (Scheme 3). EGC was consumed faster than EC because, as described below, oxidative coupling between two molecules of EGC occurs. When EGC is exhausted in the reaction mixture, the EC-quinone begins to oxidize theaflavin. At that time, the brilliant reddish-orange color of the reaction mixture dramatically changes to dark green [10] (Figure 2). This is probably caused by stacking of the EC-quinone on the benzotropolone ring of the theaflavins. The oxidation of theaflavins proceeds via the electron-withdrawing action of the EC-quinone on the benzotropolone ring. Several oxidation products of the theaflavins are known. The major product is theanaphthoquinone [21,22]. However, this product has not been isolated from commercial black tea to date. In actual black tea fermentation, coexisting substances may react with theaflavin quinones or theanaphthoquinone. Dehydrotheaflavin and bistheaflavins A and B were also only isolated as in vitro oxidation products of EC and EGC [23,24]. Compared with B-rings, the reactivity of the galloyl esters is low. However, oxidative coupling between EC-quinone and the galloyl groups of the theaflavin gallates has been observed (Figure 3) [25][26][27]. The reaction elongates the molecules of the oxidation products and may be related to the formation of polymeric oxidation products. In tea fermentation, ECg is less reactive compared with the other catechins and its concentration decreases slowly. However, enzymatic oxidation of ECg alone yielded condensation products with a benzotropolone ring produced by coupling between galloyl groups and the catechol B-rings [28][29][30]. The coupling products between the A- and B-rings, as observed in the oxidation of (+)-catechin or EC (Figure 1), were not isolated. Pyrogallol-type B-rings of EGC and EGCg have the lowest redox potential among the aromatic rings of tea catechins [31] and are susceptible to oxidation. In addition to direct oxidation with enzymes, the pyrogallol rings are oxidized by EC-quinone. The importance of the EC-quinone and related catechol-quinones, such as chlorogenic acid quinone, as oxidizing agents in pyrogallol oxidation is supported by the oxidation of myricitrin [32]. Enzymatic oxidation of myricitrin alone is very slow. However, myricitrin was oxidized rapidly in the presence of (+)-catechin or chlorogenic acid.
The oxidations of EGC and EGCg are important because these two catechins account for over 70% of the total catechins in tea leaves. The following in vitro experiments demonstrate that the production of unstable quinone dimers named dehydrotheasinensins is the most important reaction in the oxidation of these catechins [33,34]. After fresh tea leaves were crushed and kneaded, theaflavins were produced in the leaves; however, theasinensins, which are also major black tea polyphenols, were not detected [35,36]. Production of theasinensins was observed only after the leaves were heated at 80 °C (Figure 4). These results indicated that the theasinensins are produced by degradation of heat-susceptible intermediates. The presence of the intermediates was first confirmed by trapping them as phenazine derivatives through condensation with o-phenylenediamine. The structures of the derivatives indicated that the unstable intermediates are quinone dimers of EGC and EGCg. The total concentration of the intermediates in the leaves was estimated to be comparable to that of the theaflavins. One of the intermediates was synthesized and purified by in vitro enzymatic oxidation of EGCg and named dehydrotheasinensin A [33]. This compound has a hydrated triketone structure. There are two biphenyl isomers of the EGCg dimer: theasinensin A with an (R)-biphenyl bond and theasinensin D with an (S)-biphenyl bond [36]. However, reduction of dehydrotheasinensin A with thiol compounds, such as mercaptoethanol, gave only theasinensin A, indicating that the oxidative coupling between two EGCg molecules proceeds stereoselectively. In contrast, dehydrotheasinensin A slowly decomposed in neutral aqueous solution to give a mixture of theasinensins A and D, galloyl oolongtheanin [36], and another oxidation product bearing a carboxylic acid. The theasinensins are reduction products and the other two are oxidation products; therefore, the production of theasinensins from dehydrotheasinensin A proceeds via a redox dismutation (Scheme 4). However, the concentration of theasinensin A in black tea is higher than those of theasinensin D and galloyl oolongtheanin, suggesting that dehydrotheasinensins are reduced by reducing substances, such as ascorbic acid, present in the tea leaf. It should be noted that dehydrotheasinensin A is not produced by enzymatic oxidation of theasinensin A [37]. The major oxidation product of EGC is dehydrotheasinensin C, the desgalloyl form of dehydrotheasinensin A, which decomposes to give theasinensins C and E and desgalloyl oolongtheanin, the desgalloyl analogs of the products generated from dehydrotheasinensin A. Oxidation of EGC is partly different from that of EGCg, and a characteristic oxidation product named proepitheaflagallin was isolated (Scheme 5) [38]. In addition to the usual 2D NMR spectroscopic analysis, the structure of proepitheaflagallin was determined by analysis of its condensation products with o-phenylenediamine. The structures of the quinoxaline derivatives indicated that proepitheaflagallin exists as a mixture of several tautomers. Occurrence of decarboxylation was also noted. In the production of proepitheaflagallin, the free hydroxyl group at C-3 of the flavan-3-ol skeleton plays an important role by forming an acetal ring with the B-ring carbonyl carbons. This product is unstable and degrades to give the known black tea pigment epitheaflagallin (Scheme 6) [39]. In addition to epitheaflagallin, hydroxytheaflavin was also produced by the degradation.
This product has not been identified in commercial black tea. Recently, production of proepitheaflagallin B, which has a bicyclo[3.2.1]octane-type structure, was demonstrated by in vitro oxidation of EGC [40]. This unstable compound decomposes to afford proepitheaflagallin. Interestingly, the structure of this intermediate is related to that presumed in theaflavin synthesis (Scheme 2). Formation of a hemiacetal ring between the C-3 hydroxyl and carbonyl groups in proepitheaflagallin B stabilizes the molecule under conventional chromatographic conditions. Isolation of this product provides the first substantial evidence that a bicyclo[3.2.1]octane-type intermediate is produced during catechin oxidation.
Interestingly, the oxidations of EGC, with a 2(R),3(R)-cis configuration, and of (+)-gallocatechin (GC), with a 2(R),3(S)-trans configuration, proceed differently [41]; the enzymes preferentially oxidize EGC over GC. Oxidation of EGC proceeds rapidly and, as mentioned above, the major oxidation product is dehydrotheasinensin C, while proepitheaflagallin is the minor product. In contrast, oxidation of (+)-GC is slow and the major product is a 2,3-trans analog of proepitheaflagallin (Scheme 7). Only a small amount of the dehydrotheasinensin-type product was detected, even though a substantial amount of GC quinone was present in the reaction mixture, probably because the C-3 hydroxyl group hinders the intermolecular interaction between GC and GC quinone. The stereochemistry of the C-ring thus also affects the regioselectivity of the coupling. In the oxidation of EGC, the hydrophobic interaction between the two sets of A- and C-rings may accelerate the coupling reaction. Oxidation of (−)-GC was similar to that of (+)-GC, suggesting that the enzyme only oxidizes the B-ring to the o-quinone and that the subsequent dimerization is non-enzymatic. Recently, we found that the black tea pigment theacitrin C [42] decomposes to give a monomeric pigment named theacitrinin A together with 2,3,5,7-tetrahydroxychroman-3-O-gallate [33,43]. This reaction mechanism is related to that of the production of epitheaflagallin from proepitheaflagallin (Scheme 8), and the presence of the tetrahydroxychroman gallate in commercial black tea was confirmed [33]. Some oxidation products related to theasinensins were produced by in vitro enzymatic oxidation of EGC and EGCg (Figure 5). Dehydrotheasinensin AQ [44] and dehydrotheasinensin E [10] are yellow pigments deduced to be produced by isomerization of dehydrotheasinensins A and C, respectively. Dehydrotheasinensin AQ was detected in commercial black tea. EGCg dimer A was first isolated as an in vitro oxidation product of EGCg and later isolated from commercial black tea [23]. EGCg dimer B was isolated from a mixture obtained after treatment of EGCg oxidation products with mercaptoethanol as a quinone-reducing agent [44]. A trimer of EGCg was also produced by oxidative coupling between the EGCg B-ring and the galloyl group of theasinensin A [44]. The coupling proceeded via a dehydrotheasinensin-type intermediate; however, it was not stereoselective [37]. So far, EGCg trimers have not been found in black tea. In vitro experiments indicated that the reactivity of galloyl groups is much lower than that of pyrogallol- and catechol-type B-rings.
When considering the health benefits of black tea polyphenols as antioxidant [45], anticancer [46,47] and anti-inflammatory [48,49] agents, it should be noted that their direct absorption in the digestive tract is expected to be lower than that of tea catechins [50], because black tea polyphenols, represented by the theaflavins and theasinensins, have larger molecular weights than monomeric tea catechins. In contrast, the inhibition of digestive enzymes may be of considerable importance, since polyphenols with high molecular weights have an inherent ability to interact with proteins by forming hydrophobic and hydrogen bonds [51,52], which usually results in enzyme inhibition [53]. Amylase and lipase are digestive enzymes that hydrolyze starch and triglycerides, respectively, and inhibition of these enzymes has been linked to a decreased incidence of common diseases caused by diets rich in carbohydrates and fat. The inhibitory activities of theaflavins against these enzymes have been reported [54-56]; however, the concentrations of theaflavins in black tea infusions are much lower than those of other water-soluble polyphenols [20,57].
Activity-guided separation of black tea extract indicated that polymer-like polyphenols, in addition to theaflavins, have strong inhibitory activities toward lipase [58]. The uncharacterized polyphenols are detected as a broad hump on the HPLC baseline and are probably identical to the thearubigins (Figure 6). Thearubigins are the major contributors to the color of black tea infusions [59,60]; they are known to be heterogeneous mixtures of catechin oxidation products and have not yet been chemically characterized [57]. The 13C-NMR spectrum of the polymeric substance showed signals attributable to flavan-3-ol A- and C-rings and galloyl groups. The absence of B-ring carbon signals suggested that polymerization occurred at the B-rings. Recently, conjugation of catechin quinones with proteins was suggested [61]. However, elemental analysis indicated that the nitrogen content of the polymeric substance is much lower (less than 0.3%) than would be expected for protein conjugation. The mechanism of polymerization thus remains unclear.
Many plants have the ability to oxidize catechins even though they do not themselves contain catechins [10]. When pure tea catechins are mixed with homogenates of Japanese pear or loquat fruits, theaflavins, theasinensins and thearubigins are produced [10,33,37,44]. Oxidation of polyphenols is accompanied by reduction of molecular oxygen to generate the superoxide anion and hydrogen peroxide, which are known to show antimicrobial activity [62-65]. In plant defense systems, generation of these reactive oxygen species plays important roles, as do the resulting polyphenol oxidation products, which precipitate proteins and inhibit enzymes in the damaged tissue. Production of hydrogen peroxide in the tea leaf during enzymatic oxidation may also play an important role in antimicrobial activity [66,67]. Production of dehydrotheasinensin A was also observed during autoxidation of EGCg [33].
During black tea production, enzymatic oxidative dimerization of pyrogallol-type catechins is important because of their high susceptibility to oxidation and their abundance in the tea leaf. Production of unstable intermediates, such as dehydrotheasinensins and proepitheaflagallin, has been demonstrated. However, the degradation of these intermediates remains to be clarified, as the major degradation products, such as the theasinensins, account for only about half of the total. The unknown degradation products may hold the key to understanding thearubigin formation in black tea chemistry.
Whisky
In whisky production, distilled spirits are aged for several years in oak barrels, and constituents of the wood dissolve into the spirit to determine its color, flavor and taste. The wood of oak species, such as Quercus robur, Q. petraea, and Q. alba, contains significant amounts of ellagitannins [68]. During barrel production, the oak wood passes through several processing stages, including seasoning and toasting, and the ellagitannin composition of the wood undergoes various chemical changes [69]. Therefore, the solutes in whisky differ from the original oak wood polyphenols. The major ellagitannins are the C-glycosidic ellagitannins castalagin and vescalagin and their dimers and oligomers [68]. These tannins decompose during the toasting or charring process. In addition, during aging, oxygen molecules penetrate into the spirit through the barrel wood and oxidize the solutes. Therefore, the polyphenols in whisky are a mixture of products generated through a complex chemical process. Recently, oxidation products of castalagin named whiskytannins A and B were isolated from commercially bottled Japanese whisky, along with carboxyl ellagic acid, gallic acid, ellagic acid, brevifolin carboxylic acid, 6-O- and 2,3-di-O-galloyl glucoses, 2,3-(S)-hexahydroxydiphenoylglucose, and castacrenin B [70]. In this experiment, castalagin and vescalagin were not detected. The structures of the whiskytannins suggest that they are generated by regioselective oxidation of the pyrogallol ring attached to glucose C-1 of castalagin, followed by addition of ethanol and a benzylic acid-type rearrangement (Scheme 9) [70]. It was reported that vescalagin, the C-1 epimer of castalagin, is converted to β-1-O-ethylvescalagin in ethanol solution [71]; however, this ethanol adduct was not isolated from Japanese whisky. To mimic the decomposition occurring during the charring process of barrel making, pyrolysis of ellagitannins was examined. Pyrolysis of castalagin yielded ellagic acid, dehydrocastalagin, castacrenin F, and a phenolcarboxylic acid trislactone having an isocoumarin structure (Figure 7) [70,72]. Interestingly, pyrolysis of vescalagin afforded the deoxy product instead of the oxidation product [72].
Oolong Tea and Black Tea
During the tea fermentation process, coupling of the catechin A-ring with coexisting carbonyl compounds occurs. 8-C-Ascorbyl-(−)-epigallocatechin-3-O-gallate (Scheme 10), a coupling product between EGCg and dehydroascorbic acid, was first isolated from oolong tea, a semi-fermented tea [73].
EGCg dimers produced by reaction with formaldehyde were also isolated from oolong tea and named oolonghomobisflavans [73]. Usually, the C-C bond between the aldehyde and A-rings is unstable because nucleophilic substitution at the methine or methylene carbon between the two phenyl groups easily occurs. The oolonghomobisflavans (Figure 8) are relatively stable compared with the analogous EGCg-acetaldehyde coupling products [74,75]. Aldehydes are also produced from amino acids by Strecker degradation in the presence of carbonyl compounds. Tea contains a characteristic amino acid named L-theanine (5-N-ethylglutamine), which accounts for over 50% of the total amino acids of the tea leaves. The concentration of theanine is known to decrease during tea fermentation [76,77]. Catechin quinones possibly react with amino acids and generate Strecker aldehydes [78,79]. Evidence for the production of the theanine Strecker aldehyde during tea fermentation was obtained from commercial black tea in the form of a coupling product with theasinensin A [80]. The structure was confirmed by semisynthesis of the coupling product: 1-ethyl-5-hydroxy-2-pyrrolidinone, a cyclic form of the Strecker aldehyde, was prepared and subsequently coupled with theasinensin A (Scheme 11). 1-Ethyl-5-hydroxy-2-pyrrolidinone and related adducts of monomeric catechins were not detected in the black tea. Indeed, attempts to produce a similar adduct with EGCg under the same reaction conditions failed. In the case of theasinensin A, the C-C bond between the pyrrolidinone and the phloroglucinol A-ring was probably stabilized by the presence of another EGCg unit.
Scheme 11. Reaction of theasinensin A with theanine Strecker aldehyde [80].
Insolubilization of Proanthocyanidins in Persimmon Fruits
Some plants use animals to disperse their seeds in exchange for delicious and nutritious fruit flesh. In the case of persimmon fruits, until the seeds acquire germinating ability, the fruits are protected by the bitter and astringent taste of proanthocyanidins. After the seeds acquire germinating ability, the astringency decreases and the color of the fruits changes to reddish orange. At this stage, acetaldehyde is secreted from the seeds and penetrates into the tannin cells [81]. The acetaldehyde concentration can also be increased artificially by treating the astringent persimmon fruit with ethanol or carbon dioxide under anaerobic conditions [75,82]. Acetaldehyde reacts with C-8 or C-6 of the proanthocyanidin A-rings and connects two proanthocyanidin molecules, resulting in their insolubilization and a decrease in astringency (Scheme 12). The covalent bonding of acetaldehyde in insolubilized proanthocyanidins was supported by thiol degradation experiments, which degrade proanthocyanidins into their component flavan-3-ol units [74]. Thiol degradation of the extract of the astringent fruits with mercaptoethanol-HCl afforded the thioethers of flavan-3-ols. However, after the persimmon fruits were treated with ethanol under anaerobic conditions, the extract of the fruits did not yield the thioethers. Direct treatment of the plant debris remaining on the filter paper with mercaptoethanol-HCl afforded the bisthioethers of the flavan-3-ol acetaldehyde adducts in addition to the usual thioethers [74].
Cinnamon Bark
Reaction of proanthocyanidins with aldehydes has also been observed when plant tissues are wounded. When the fresh bark of Japanese cinnamon (C. sieboldii) was peeled from a branch, the color of the wood surface immediately changed from white to reddish brown (Figure 9). Since this dramatic color change was not observed when the branch was heated in advance, the reaction is apparently catalyzed by enzymes.
We found that a similar color change occurred when a mixture of (+)-catechin and cinnamaldehyde, the dominant essential oil component of cinnamon bark, was heated at 100 °C [83]. Even at room temperature, the color changed to red very slowly, giving a complex mixture of condensation products, including monomeric (A) and dimeric (B) products (Scheme 13). The monomeric product A is susceptible to autoxidation to give a red pigment containing benzopyrylium ion moieties, which are related to anthocyanidin pigments. On the surface of cinnamon wood (Figure 9), the oxidation of the procyanidin-cinnamaldehyde conjugates is probably catalyzed by oxidative enzymes.
Scheme 13. Reaction of (+)-catechin and cinnamaldehyde and generation of red pigment [83].
The production of dimeric product B indicated that dimerization and oligomerization of procyanidins occurs. This was supported by MALDI-TOF MS of the reaction products of procyanidin B1 [(−)-epicatechin-(4β→8)-(+)-catechin] and cinnamaldehyde (Figure 10). Production of the benzopyrylium ion was suggested by the appearance of the corresponding ion peaks and the color of the reaction mixture. The condensation of proanthocyanidins with cinnamaldehyde in the cinnamon extract was confirmed by the 13C-NMR spectrum of polymeric proanthocyanidins obtained by size-exclusion chromatography [84], which showed signals arising from the phenyl groups of the cinnamaldehyde units.
Application of Catechin-aldehyde Conjugation
It is well known that tea catechins show strong radical scavenging activities [85-88]. However, they are hydrophilic compounds and do not dissolve in a lipid layer. Some efforts have been made to synthesize lipid-soluble derivatives of tea catechins by reacting EGCg with formaldehyde or alkyl aldehydes and then subjecting the products to nucleophilic substitution with thiol compounds [89,90] (Figure 11). Recently, the reaction of conjugated aldehydes with the C-8(6) carbon and C-7(5) hydroxyl group of flavan-3-ols was applied to prepare lipid-soluble catechin derivatives [91]. In addition to naturally occurring conjugated aldehydes (trans-2-hexenal and citral), a non-conjugated unsaturated aldehyde (citronellal) and allyl alcohols (geraniol and phytol) also afforded products, as shown in Figure 12. Although most of the reactions gave complex mixtures, the triglyceride-soluble fractions of the reaction mixtures of EGCg with citronellal, geraniol and phytol showed strong radical scavenging activities, ten times stronger than that of the triglyceride fraction of EGCg.
Reaction of Polyphenols with Aldehyde in Other Foods
Polyphenols in red wine undergo complex reactions with coexisting substances. Dimerization and polymerization of catechins, procyanidins, and anthocyanins in the presence of various aldehydes have been demonstrated in wine-like model solutions [92]. Condensation of malvidin 3-O-glucoside and acetaldehyde affords dimeric pigments [93] (Figure 13). Reaction of catechin with glyoxylic acid yields characteristic pigments with a xanthylium chromophore [94] and 8-formylcatechin [95]. In addition, several pigments named oaklins are generated by the reaction between catechin and coniferyl aldehyde or sinapyl aldehyde extracted from oak wood [96]. These aldehydes are produced by degradation of oak wood lignin and are extracted into wine during aging in the barrel. One of the main oaklins, 11-guaiacylcatechinpyrylium, was also detected in a commercial table red wine aged in oak barrels. Reactions with furfural have also been reported [97-99].
Cacao and Coffee
Cocoa and coffee beans contain proanthocyanidins and caffeoyl esters, respectively. Cocoa beans (the seeds of Theobroma cacao) are processed for chocolate manufacturing via fermentation and roasting [100]. Cocoa beans contain catechins and dimeric to oligomeric procyanidins with 4→8 or 4→6 interflavan linkages. In addition, the presence of A-type procyanidins with both 4→8 and 2→O→7 linkages and their glycosides has been reported [101]. From the cocoa liquor produced from fermented and roasted cocoa beans, a catechin C-glycoside and A-type procyanidin glycosides were isolated [102] (Figure 14). During processing, the proanthocyanidins decrease and are perhaps insolubilized; however, the chemical mechanism is not clear.
Conclusions
The reason why plants accumulate polyphenols is thought to be related to the plant defense system [104-106], and the functions of the polyphenols depend on their chemical reactivity and physicochemical properties. The structural diversity of plant polyphenols in nature suggests that polyphenols have many different and wide-ranging functions. Some polyphenols, such as catechins and proanthocyanidins, are susceptible to enzymatic and non-enzymatic oxidation, depending on the plant. Polyphenol oxidation in plant tissues, as observed in black tea production, proceeds along with the reduction of oxygen molecules or polyphenol quinones. The reactivity of these quinones with proteins and other coexisting compounds [61] also plays a significant role during the post-harvest period. The secondary polyphenols produced in plants after physical damage to the tissue are probably related to the plant defense system, though many of the products have not been characterized chemically. Artificial processing, including drying, fermentation and roasting, differs from the normal reactions occurring in living plants, such as insolubilization and polymerization, and thus produces different compounds. Recent scientific studies have convincingly indicated that polyphenols in foods have various health benefits, and thus it remains important to elucidate the mechanisms of their production and their chemical structures.
"year": 2009,
"sha1": "9752ca6fe9bf6c0273cdb13eaf4f4ecd2094780d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/11/1/14/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d6275b344bdf5e831af3a5d63c0c9bbe2cc1c0c9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
The remodeling roles of lipid metabolism in colorectal cancer cells and immune microenvironment
Lipids are key components of the plasma membrane and play an important role in regulating various cell biological behaviors, including cell proliferation, growth, differentiation and intracellular signal transduction. Studies have shown that abnormal lipid metabolism is involved in many malignant processes, including colorectal cancer (CRC). Lipid metabolism in CRC cells can be regulated not only by intracellular signals, but also by various components of the tumor microenvironment, including various cells, cytokines, DNA, RNA, and nutrients including lipids. In turn, abnormal lipid metabolism provides energy and nutritional support for the abnormal malignant growth and distant metastasis of CRC cells. In this review, we highlight the remodeling roles of the lipid metabolism crosstalk between CRC cells and the components of the tumor microenvironment.
Introduction
Colorectal cancer (CRC) is the most common malignant tumor of the digestive tract in the world. Its incidence ranks third among all cancers, and it is the second leading cause of cancer death [1,2]. It has been reported that the incidence of CRC is positively correlated with socioeconomic development. By 2030, the number of CRC cases in developed countries is projected to increase to 2.2 million, with 1.1 million deaths [3]. In recent years, although some progress has been made in the early diagnosis and systemic treatment of CRC, the five-year survival rate of patients with CRC is still only about 50%. This is mainly because CRC often has no obvious symptoms in the early stage, and most CRC patients are diagnosed at a late stage, often with metastases. However, the mechanisms of the occurrence, development and invasion of CRC are still not completely clear. Therefore, searching for detection methods and tumor markers for the early diagnosis of CRC, and further exploring the exact molecular mechanisms of its occurrence, development and invasion, are hot topics in current research. Epidemiological studies have found that diet, obesity and diabetes are risk factors for CRC [4]. A large number of studies have shown that abnormal cellular lipid metabolism is involved in the occurrence and development of colorectal cancer and is significantly related to clinical treatment response and prognosis [5].
Lipids are hydrophobic macromolecules that are divided into several groups according to their ketoacyl and isoprene structures: FAs (fatty acids), phospholipids, TGs (triglycerides), sphingolipids, cholesterol, and cholesteryl esters [6]. Lipids are essential nutrients for cells, serving as structural components of cell membranes and participating in material transport, energy supply, signaling, apoptosis and other processes [7]. Abnormal lipid metabolism refers to abnormal anabolism and catabolism of lipids in the body, resulting in too much or too little lipid in each tissue and thus affecting body function [8]. Although normal cells regulate anabolic and catabolic pathways to adapt to changes in nutrient supply, tumor cells can exhibit uncontrolled proliferation even under nutrient deficiency. The tumor microenvironment is hypoxic, acidic, and nutrient deficient, resulting in metabolic reprogramming of tumor cells and adjacent stromal cells to promote tumor cell survival, proliferation, and metastasis [9]. Many studies have shown that abnormal cellular lipid metabolism is significantly related to the induction and metastasis of cancer. At the same time, clinical studies have shown that abnormal lipid metabolism is closely related to poor prognosis in CRC patients [10]. Both malignant progression and accelerated proliferation of CRC cells require more energy, which induces changes in lipid metabolism that allow CRC cells to survive. Abnormal lipid metabolism causes changes in various genes and proteins, as well as dysregulation of cytokines and signaling pathways [11]. Notably, lipid metabolism in CRC cells can be regulated not only by intracellular signals, but also by various components of the tumor microenvironment, including various cells, cytokines, DNA, RNA, and nutrients including lipids [12].
In this review, we discuss the changes in lipid metabolism in colorectal cancer and their roles in its genesis and development, as well as the remodeling of lipid metabolism pathways in CRC cells and the tumor microenvironment. This review provides a summary for better understanding lipid metabolism, targeting it therapeutically, and improving prognosis.
Nutrient Sources for Lipid Metabolism in CRC
Glucose is the main carbon source for lipid synthesis: it is first converted into pyruvate through glycolysis and then forms citrate in mitochondria, which is released into the cytoplasm as the precursor for FA and cholesterol synthesis [13] (Fig. 1). Studies have shown that the expression of several glucose transporters (GLUTs) and related enzymes regulating glycolysis and lipid synthesis is significantly upregulated in CRC cells. As an important member of the GLUT family, GLUT1 is significantly upregulated in CRC and is significantly related to the occurrence, development and prognosis of colorectal cancer [14]. Further research found that GLUT1 regulates TGF-β/PI3K/AKT/mTOR signaling, affecting many biological behaviors of CRC cells [15]. Moreover, GLUT3, a homologous family member of GLUT1, is highly expressed in CRC and negatively linked to CRC patient prognosis [16].
Besides de novo synthesis, another major way for cells to obtain FAs is to take up lipids from the external environment (Fig. 1). CD36 transports FAs into cells and plays essential roles in cell growth, metastasis, angiogenesis, immune response, adhesion, and epithelial-mesenchymal transition (EMT) in cancers [17]. For instance, a previous study revealed that CD36 promotes GPC4 ubiquitination via the ubiquitin-proteasome pathway, suppressing β-catenin/c-Myc signaling and ultimately downregulating c-Myc-mediated aerobic glycolysis [18]. In addition, inhibition of FASN (fatty acid synthase) can increase the expression of CD36 in cells, thereby increasing the proliferation of CRC cells; this suggests that combining FASN inhibition with CD36 blockade could further increase the anticancer effect [19].
Lipid Metabolism in CRC Cells
Disordered lipid metabolism is a typical feature of CRC, and its most significant change is increased de novo lipid synthesis [20]. First, citrate is converted to saturated fatty acids (SAFAs) through the sequential action of ACLY (ATP-citrate lyase), ACC (acetyl-CoA carboxylase) and FASN. Then, under the action of FADS (fatty acid desaturases), SCD (stearoyl-CoA desaturase) and ELOVL (fatty acid elongases), MUFAs (mono-unsaturated fatty acids) and PUFAs (polyunsaturated fatty acids) are generated, which are incorporated into saturated or mono-unsaturated phospholipids [21] (Fig. 1).
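The textbook stoichiometry of these committed steps makes the flux concrete. The equations below are standard biochemistry for ACLY, ACC and the overall FASN reaction to palmitate, given here for orientation rather than as CRC-specific findings:

$$
\begin{aligned}
\text{ACLY:}&\ \ \text{citrate} + \text{ATP} + \text{CoA} \longrightarrow \text{acetyl-CoA} + \text{oxaloacetate} + \text{ADP} + \text{P}_{i}\\
\text{ACC:}&\ \ \text{acetyl-CoA} + \text{HCO}_{3}^{-} + \text{ATP} \longrightarrow \text{malonyl-CoA} + \text{ADP} + \text{P}_{i}\\
\text{FASN:}&\ \ \text{acetyl-CoA} + 7\,\text{malonyl-CoA} + 14\,\text{NADPH} + 14\,\text{H}^{+} \longrightarrow\\
&\ \ \text{palmitate} + 7\,\text{CO}_{2} + 8\,\text{CoA} + 14\,\text{NADP}^{+} + 6\,\text{H}_{2}\text{O}
\end{aligned}
$$

Palmitate, a SAFA, is then the substrate for the SCD, FADS and ELOVL steps described above.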
It is well known that phospholipids are among the main components of cell membranes and play a key role in cell growth and proliferation [22]. On the one hand, they maintain the homeostasis of the cell membrane; on the other, they mediate information transmission at the membrane. Phosphatidylcholine (PC) is a member of the glycerophospholipid family, serves as a precursor of other phospholipids such as phosphatidylserine and sphingomyelin, and is one of the main components of cell membranes [23] (Fig. 1). Studies have shown that in CRC tissue the content of PC is significantly increased, which promotes the growth of CRC cells and regulates intracellular signaling pathways [24]. In addition, phospholipase A2 is the key enzyme in the hydrolysis of PC to lysophosphatidylcholine (LPC) [25]. Studies have shown that LPC can activate macrophages, keep them in the M1 state, and induce production of proinflammatory factors such as IL-12, IL-1β, TNF-α and IL-6, enhancing the inflammatory response and contributing to the development of CRC [26,27]. Zhao et al. reported that plasma LPC levels may represent potential biomarkers for CRC [28].
In the process of FA synthesis, the key regulatory factors of FA production, including SREBPs (sterol regulatory element-binding protein transcription factors), ACLY, ACC, FASN and SCD1, are significantly upregulated in CRC [29,30]. SREBPs are important transcription factors regulating lipid balance; they regulate the synthesis of cholesterol and FAs by promoting the transcription of downstream genes such as FASN, ACC, ACLY and SCD1 [31]. Wen et al. showed that knockdown of SREBPs could significantly reduce the proliferation of CRC cells and inhibit the growth of CRC xenografts [32]. FASN is a key downstream factor of SREBP-1-regulated de novo FA synthesis, and its expression is significantly elevated in primary CRC and liver-metastatic CRC tissues [33]. FASN can enhance cellular respiration and maintain intracellular energy homeostasis, and it can promote the invasion and metastasis of CRC through the Wnt signaling pathway, thereby shortening patient survival [34]. Interestingly, inhibition of SREBPs by genetic approaches or targeted inhibitors can effectively inhibit tumor growth by inducing CRC cell apoptosis, which makes SREBPs a potential target for tumor-targeted therapy [35]. The expression of ACLY is significantly upregulated in CRC, and both genetic downregulation and pharmacological inhibition significantly suppress cancer cell growth [36]. HOXA13 (Homeobox A13), a member of the HOX (Homeobox) family, facilitates CRC metastasis by transactivating ACLY; knockdown of ACLY inhibited HOXA13-mediated CRC metastasis, whereas ectopic overexpression of ACLY rescued the decreased CRC metastasis induced by HOXA13 knockdown [37]. ACC expression is also upregulated in CRC, and inhibition of ACCs can significantly reduce FA synthesis and inhibit tumor growth in xenograft models. The ACC inhibitors TOFA (5-tetradecyloxy-2-furoic acid), soraphen A, and ND646 showed significant anti-cancer effects in transplanted tumor models [38,39].
FAs are mainly divided into three classes: SAFAs, MUFAs and PUFAs. Owing to their different structures, FAs play different roles in the occurrence and development of tumors [40]. Studies have shown that oleic acid, palmitic acid, and linoleic acid can reduce the risk of CRC, while arachidonic acid (AA) and octadecanoic acid significantly increase the risk of CRC [41]. Free AA can be converted into PGE2, prostaglandin D2 and thromboxane. Among these, PGE2 is abundant in colorectal tumors, where it upregulates β-catenin and activates the PI3K/AKT and RAS-mitogen-activated protein kinase pathways, promoting the occurrence of CRC [42].
In addition, newly synthesized FAs are activated by acyl-CoA synthetase (ACS) and then converted into TGs through the sequential action of GPAT (glycerol-3-phosphate acyltransferase), AGPAT (1-acylglycerol-3-phosphate O-acyltransferase), PAP (phosphatidic acid phosphatase) and DGAT (diacylglycerol O-acyltransferase) [43]. TG is one of the most abundant lipids in the human body. Studies have shown that the serum level of TG is positively correlated with the incidence of colorectal adenoma [44]. Excess TG is stored in cells in the form of lipid droplets, providing an energy supply and a material basis for membrane synthesis. Xiao et al. showed that the accumulation of lipid droplets is significantly increased in human CRC tissues. However, the mechanistic link between TG and CRC pathogenesis needs further study [45].
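For orientation, the intermediates of this glycerol-phosphate (Kennedy) pathway can be sketched as the standard textbook sequence; each acylation step consumes one fatty acyl-CoA supplied by ACS. This is general biochemistry rather than a CRC-specific result:

$$
\text{glycerol-3-phosphate} \xrightarrow{\text{GPAT}} \text{LPA} \xrightarrow{\text{AGPAT}} \text{PA} \xrightarrow{\text{PAP}} \text{DAG} \xrightarrow{\text{DGAT}} \text{TG}
$$

where LPA is lysophosphatidic acid, PA is phosphatidic acid, and DAG is diacylglycerol.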
At the same time, acetyl-CoA can produce cholesterol through the mevalonate pathway, and the accumulation of cholesterol in cells can stimulate macrophages and other immune cells, thereby causing an inflammatory reaction; the serum cholesterol level is also positively related to the occurrence of CRC [46]. p53 is commonly mutated in tumors, and Freed-Pastor et al. showed that mutant p53 can stimulate SREBP-dependent transcription, accelerating cholesterol synthesis and stimulating tumor cell growth [47]. SOAT1 (sterol O-acyltransferase 1), also known as ACAT1 (acyl-CoA cholesterol acyltransferase 1), converts cholesterol into cholesteryl esters (Fig. 1). Studies have shown that SOAT1 is highly expressed in CRC and is inversely related to the survival of colorectal cancer patients [48]. Knockout of SOAT1 or the use of its inhibitors can significantly inhibit tumor growth in animal models [49]. In addition, intracellular TGs and cholesteryl esters are stored in lipid droplets (Fig. 1), which have been observed in many tumors, including CRC. Many studies have confirmed that obvious lipid metabolism abnormalities and reprogramming occur in CRC [50].
Signaling Pathways Activated by Abnormal Lipid Metabolism in CRC
Recent studies have shown that lipid metabolites can regulate a variety of signaling pathways [51] involved in malignant transformation, cell proliferation, migration, EMT and tumor angiogenesis. Oncogenic signaling pathways can also directly or indirectly regulate the metabolites and transcription factors involved in lipid metabolism [52].
PI3K-AKT signaling pathway
Studies have shown that abnormal activation of the PI3K/AKT signaling pathway not only participates in the genesis and development of many tumors, but also plays an important role in the reprogramming of tumor lipid metabolism [53]. Activation of AKT drives de novo lipid synthesis through two basic processes: first, shuttling metabolic intermediates to provide carbon sources for lipid synthesis; second, directly or indirectly supplying essential cofactors of lipid metabolism by activating transcription factors or related kinases [54,55]. AKT activation stimulates the mammalian target of rapamycin (mTOR), leading to SREBP processing and activation [56]. SREBP induces the transcription of target genes, which include many factors involved in lipid metabolism and fatty acid uptake. In addition, this process can promote the metabolic reprogramming of glycolysis, which fuels de novo lipogenesis [57]. Zhang et al. showed that hepatocyte growth factor (HGF) affects the SREBP-dependent cholesterol biosynthesis pathway by regulating the c-Met/PI3K/AKT/mTOR axis in CRC cells [58]. The sphingolipid-metabolism-related protein LASP1 (LIM and SH3 protein 1) acts on mitochondrial membrane sites through PI3K/AKT/mTOR, thereby promoting the occurrence and progression of CRC [59]. Glycogen synthase kinase 3 (GSK3) acts downstream of AKT; AKT can inhibit GSK3 activity, thereby reducing GSK3-mediated phosphorylation and degradation of SREBP [60]. Feng et al. showed that AKT/GSK3 signaling can trigger EMT in CRC cells via upregulation of Snail [61]. Insulin plays an important role in lipid synthesis: it can activate the expression of SREBP-1 through the PI3K/AKT/mTORC1/S6K1 signaling pathway, accelerating the de novo synthesis of FAs [62]. Liem et al. showed that insulin can induce continuous cell proliferation by activating the PI3K/AKT signaling pathway [63]. In addition, as a member of the PDGF superfamily, VEGF-A can induce angiogenesis under both physiological and pathological conditions [64], and its expression is significantly related to the prognosis of colorectal cancer patients [65]. Upon receptor binding, VEGF-A activates PI3K, which phosphorylates PIP2 to PIP3, an important second messenger involved in AKT recruitment; AKT then activates mTOR, ultimately driving cell growth and proliferation [66,67]. PTEN is a negative regulator of PI3K/AKT signaling whose expression is lost or downregulated in about 60%-70% of colon cancer patients; it restrains lipid synthesis through the PI3K-AKT-mTORC-SREBP pathway by dephosphorylating PIP3 to PIP2 [68]. As a member of the PTPs (protein tyrosine phosphatases), PTPRO (protein tyrosine phosphatase receptor type O) is notably downregulated in CRC liver metastases compared with the primary cancer, and such downregulation is associated with poor prognosis in patients with CRC. PTPRO silencing activates the AKT/mTOR signaling axis, thus promoting de novo lipogenesis by enhancing the expression of SREBP1 and its target lipogenic enzyme ACC1 [69].
PPAR signaling pathway
PPARs (peroxisome proliferator-activated receptors) are ligand-activated transcription factors belonging to the nuclear hormone receptor superfamily and include PPARα, PPARβ/δ and PPARγ [70]. PPARs act as major lipid sensors and regulators of lipid metabolism, playing key physiological roles [70,71]. PPARα is mainly involved in fatty acid metabolism, while PPARγ is mainly involved in the regulation of adipogenesis, energy balance and lipid biosynthesis [72,73]. PPARβ/δ is involved in fatty acid oxidation in skeletal muscle and myocardium and can also participate in the regulation of cholesterol levels [74]. Studies have shown that blocking the PPAR pathway can induce apoptosis and inhibit the growth of CRC organoids in vitro [75].
As a member of the PTPs, PTPRO has attracted increasing attention for its role in cell signal transduction. It has been shown that PTPRO attenuation suppresses PPARα and its downstream enzyme, peroxisomal acyl-coenzyme A oxidase 1 (ACOX1), thereby reducing the FA oxidation rate [69]. High expression of PPARβ/δ is closely related to the occurrence and development of colorectal cancer. In this process, arachidonic acid stimulates PPARβ/δ, leading to upregulation of cyclooxygenase (COX)-2 and excessive production of prostaglandin E2 (PGE2), an activator of colon cancer cells [76].
AMPK signaling pathway
As a metabolic sensor, AMP-activated protein kinase (AMPK) is activated when the cellular ATP level is low. Recent studies have confirmed that AMPK can regulate intracellular fatty acid oxidation, lipid synthesis and lipolysis through substrate phosphorylation [77]. AMPK is a heterotrimeric protein composed of three subunits, each of which has multiple phosphorylation sites. AMPK participates in the synthesis and decomposition of lipids by influencing gene transcription and protein phosphorylation, thereby affecting cellular metabolic processes [78]. The SREBP1 targets ACC1, FASN and SCD1 are lipogenic enzymes that promote the de novo synthesis of cytoplasmic FAs [79]. ACC1 catalyzes the carboxylation of acetyl-CoA to malonyl-CoA, which is the main substrate of FASN. FASN catalyzes the de novo synthesis of long-chain FAs in the cytoplasm through the condensation of acetyl-CoA and malonyl-CoA [80]. FASN activity increases the synthesis of saturated fatty acids, which are then converted into monounsaturated fatty acids by SCD1 [81]. Accumulated data show that ACC is phosphorylated by purified AMPK at multiple sites, including Ser79, Ser80, Ser219, Ser1200, Ser1215 and Ser1216 [82]. Li et al. showed that SREBP1 is directly phosphorylated by AMPK at Ser372, suppressing the proteolytic cleavage of precursor SREBP1 into mature SREBP1 and thereby suppressing hepatic steatosis in diet-induced insulin-resistant mice [83]. TIGAR (TP53-induced glycolysis and apoptosis regulator), a downstream target gene of p53, plays a prominent role in tumorigenesis at multiple levels [84]. Liu et al. showed that depletion of TIGAR in CRC cells promotes lipid peroxidation by decreasing SCD1 expression via an AMPK-dependent phosphorylation pathway [85].
The anti-tumor effect of the potent FASN inhibitor TVB-3664 is associated with changes in lipid composition, including a significant reduction in FAs and phospholipids and an increase in lactosylceramide and sphingomyelin, in patient-derived xenografts (PDXs) from CRC patients sensitive to FASN inhibition; these changes are regulated by the AMPK signaling pathway [86].
Some studies demonstrate that metformin shows potential cytotoxicity toward butyrate-resistant (BR) CRC cells; mechanistically, AMPK phosphorylation is significantly upregulated whereas ACC is downregulated, leading to caspase activation and apoptosis [87].
Abnormal Lipid Metabolism and the Immune Microenvironment in CRC
A large number of studies have shown that tumor cells reprogram their metabolism to meet their proliferation needs within the severely nutrient-deficient tumor microenvironment (TME) [88]. This includes not only the Warburg effect, the first such reprogramming to be discovered, but also extensive reprogramming of lipid metabolism, which plays an important role in tumorigenesis and development [89]. The changes in tumor cell lipid metabolism are not only driven by the cells' own needs but are also regulated by other cells, and they in turn affect the function and metabolism of surrounding cells. The TME comprises a non-immune microenvironment dominated by tumor cells and cancer-associated fibroblasts (CAFs) and an immune microenvironment dominated by immune cells [90]. The TME contains many types of immune cell subsets, such as CD4+ T cells, CD8+ T cells, B cells, dendritic cells (DCs), macrophages, and natural killer (NK) cells. Among them, DCs, CD4+ and CD8+ effector T cells, and NK cells are activated to inhibit tumors and prevent immune escape and disease progression [91]. Other immune cells, such as tolerogenic DCs, regulatory T cells (Tregs), and tumor-associated macrophages (TAMs), inhibit the anti-tumor immune response, thereby promoting tumor proliferation, invasion, metastasis and angiogenesis [92]. In conclusion, the communication between CRC cells and their surrounding microenvironment is shaped to promote the growth and progression of CRC cells and to enable escape from immune surveillance through various mechanisms [93].
Metabolic changes in T cells
Tumor-infiltrating T lymphocytes play a key role in anti-tumor immunity. T cells are divided into CD4+ T cells and CD8+ T cells. CD4+ T cells are further divided into anti-tumor, pro-inflammatory T helper type 1 (Th1) cells, immunosuppressive Th2 cells, Th17 cells, and Tregs; Tregs secrete immunosuppressive cytokines such as IL-10 and transforming growth factor β (TGF-β) that regulate the functions of other T cells [94]. The excessive proliferation of tumor cells leads to a lack of nutrients and oxygen in the TME, which reshapes T-cell metabolism from reliance on glycolysis for energy toward reliance on FA oxidation and oxidative phosphorylation to maintain effector function [95]. At the same time, lipid uptake increases, leading to abnormal intracellular lipid accumulation.
However, the accumulated lipid damages mitochondrial function in T cells, thus affecting energy supply and immune function. At present, CD4+ T cells, CD8+ T cells and Tregs are the most frequently studied cell subsets with respect to the effects of lipid metabolism on T cells [96].
In the TME, effector molecules secreted by tumor-infiltrating CD8+ T cells, including interferon γ (IFN-γ) and tumor necrosis factor α (TNF-α), are decreased, while exhaustion markers such as T cell immunoglobulin and mucin domain-containing 3 (TIM-3) and programmed death 1 (PD-1) are increased, indicating that infiltrating CD8+ T cells are in a state of exhaustion and cannot exert a normal anti-tumor effect [97]. Studies have shown that the increased expression of genes related to FA metabolism, oxidative stress and ATP production in CRC CD8+ TILs of diabetic patients may inhibit CD8+ T-cell function [98]. In addition, the responses of CRC CD8+ TILs from diabetic patients to cytokine signaling, lipids and glucose are reduced, which in turn compromises their protective function in the TME and favors tumorigenesis and immunosuppression [99]. Acyl-CoA dehydrogenase short-chain (ACADS) is a crucial mitochondrial enzyme in the FA metabolism pathway, and its expression levels are positively related to B cells, CD4+ T cells, CD8+ T cells, and Tregs in CRC tissues [100]. Song et al. showed that marine ω-3 PUFAs, including eicosapentaenoic acid, docosahexaenoic acid, and docosapentaenoic acid, possess potent immunomodulatory activity and can protect against cancer development [101]. High marine ω-3 PUFA intake was associated with a lower risk of CRC with high-level, but not low-level, FOXP3+ T-cell density, suggesting a potential role of ω-3 PUFAs in cancer immunoprevention through modulation of Tregs [102]. In addition, studies showed that the multikinase inhibitor H89 could promote naïve CD4+ T-cell differentiation into Th1 cells, with a decrease in Treg differentiation and an increase in CD8+ T-cell activation and cytotoxicity [103]. H89 also induces overexpression of genes involved in the anti-tumor immune response (such as IL-15RA), and depletion of IL-15RA counteracts the anti-tumor effect of H89. At the same time, H89 regulates the Akt/PP2A pathway axis and participates in TCR and IL-15 signal transduction. These results suggest that H89 is a potential strategy for immune system activation in the prevention and treatment of CRC [103].
FAs in the TME support the proliferation and differentiation of Th17 cells. To explore the role of FA synthesis in Th17 cells, Cluxton et al. added ACC or FASN inhibitors to CD4+ T-cell culture medium; the proliferation of Th17 cells was inhibited, demonstrating that Th17 cell proliferation depends on FA synthesis [104]. Similarly, Berod et al. found that the development of Th17 cells depends on ACC-mediated de novo FA synthesis and the glycolytic-lipogenic metabolic pathway [105]. Blocking ACC inhibits the formation of Th17 cells and promotes the differentiation of Tregs.
Metabolic changes in TAMs
TAMs are among the main immune cells in the TME, and different TAM subsets can induce or inhibit anti-tumor immunity [106]. TAMs are divided into tumor-inhibiting M1 and tumor-promoting M2 types. In the TME, the differentiation of macrophages into the M1 type depends on aerobic glycolysis [107]. TAMs tend toward an M2-like phenotype, which not only inhibits the anti-tumor immune response by regulating the activation and apoptosis of T cells and NK cells, but also promotes tumor cell proliferation, metastasis, angiogenesis and immunosuppression [108]. For example, silencing of 1-acylglycerol-3-phosphate O-acyltransferase 4 (Agpat4) induced CRC cells to release LPA, which polarized macrophages to the M1-like phenotype through LPA receptors 1 and 3. This M1 activation is characterized by an increase in p38/p65 signal transduction and proinflammatory cytokines, which promotes the infiltration and activation of CD4+ and CD8+ T cells in the tumor microenvironment [109]. Lipid accumulation and increased FA oxidation in TAMs are necessary for their immunosuppressive activity; in turn, lipid accumulation and increased FA oxidation induce TAMs to polarize toward the M2 phenotype and promote tumor development [110].
In the TME, abnormal lipid metabolism regulates the M2 immunosuppressive phenotype of TAMs in a variety of ways. First, abhydrolase domain containing 5 (ABHD5) is a coactivator of TG lipase and participates in TG lipolysis [111]. Studies of CRC have shown that TAMs contain a large number of lipid droplets and upregulate ABHD5 [112]. ABHD5 inhibits the production of reactive oxygen species in TAMs by inhibiting the NLR family pyrin domain containing 3 (NLRP3) inflammatory pathway, thereby affecting their phagocytic and killing functions [113]. Second, PGE2 is known to play an important role in the polarization of macrophages. The enzymatic degradation of PGE2 involves NAD+-dependent 15-hydroxyprostaglandin dehydrogenase (15-PGDH) [114]. Eruslanov et al. reported that the overexpression of 15-PGDH in mouse CRC cells transformed M2-oriented TAMs into M1-oriented macrophages, indicating that PGE2 can shift the macrophage phenotype from anti-tumor M1 to tumor-promoting M2 [115]. Finally, extracellular vesicles (EVs) are membrane-bound vesicles containing different biomolecules that participate in intercellular signal transmission. Increasing evidence shows that cancer-derived EVs are taken up by macrophages and regulate their phenotype and cytokine profile. Ineta et al. found that EVs derived from CRC cell lines increased CXCL10, TNF-α and IL-23 secretion in monocytes and promoted macrophage polarization [116]. Interestingly, elevated serum CXCL10 and TNF-α levels are associated with poor survival in CRC patients [117]. In addition, Tregs and Th17 cells are among the target cells of IL-23; therefore, CRC EVs may affect cancer progression through Treg and Th17 cell activation.
Metabolic changes in DCs
DCs are the main antigen-presenting cells; they process antigenic polypeptides and present them on major histocompatibility complex (MHC) molecules for T cells to recognize, inducing antigen-specific immune responses [118]. In the TME, high fatty acid concentrations, upregulation of FASN, activation of TLR signaling and other signals can promote FA synthesis and lipid accumulation in DCs [119]. FA metabolism in the TME affects the development, maturation and function of DCs. Loss of free fatty acid receptor 2 (FFAR2), a receptor for short-chain FAs, has been shown to promote colon tumorigenesis in mice by promoting exhaustion of CD8+ T cells and overactivating DCs, leading to their death [120]. Additionally, lipid accumulation in DCs impairs antigen cross-presentation: lipid bodies accumulating in tumor-infiltrating DCs can covalently bind heat shock protein 70 (HSP70), preventing peptide-MHC complexes from trafficking to the cell surface and causing their accumulation in late endosomes/lysosomes [121]. O'Toole et al. found that DCs from CRC patients secreted low levels of IL-12p70 and failed to upregulate maturation markers in response to LPS, suggesting that lipid accumulation in DCs can reduce antigen-processing capacity and weaken the ability to stimulate T-cell responses, leading to DC dysfunction [122].
Metabolic changes in cancer-associated fibroblasts (CAFs)
Cancer-associated fibroblasts (CAFs) are among the main stromal components surrounding cancer cells. Recent studies have shown that CAFs secrete chemokines, growth factors, extracellular matrix and matrix metalloproteinases to regulate the occurrence, development and metastasis of tumors [123]. In addition, recent studies have shown that the metabolic interaction between CAFs and tumor cells affects tumor metastasis [89]. In the TME, CAFs require metabolic reprogramming to adapt to the severe lack of nutrients and oxygen and to the increased energy demand necessary to sustain the high proliferation of CRC. At present, research on the lipid metabolism reprogramming of CAFs mainly focuses on the de novo synthesis and catabolism of FAs [123,124]. Gong et al. reported that, compared with normal fibroblasts, FAs, diglycerides (DGs), phosphatidic acid (PA), phosphatidylinositol (PI), LPC and phosphatidylethanolamine (PE) were significantly upregulated in CAFs, accompanied by higher levels of FA and phospholipid excretion [123]. Metabolic reprogramming of CAFs leads to increased FASN expression, while increased FA uptake by CRC cells leads to metastasis. Peng et al. used lipidomics to show that CAFs increase the fluidity of the cell membrane by upregulating unsaturated acyl chains in cellular PC, thereby increasing the migration of CRC cells. In addition, untargeted metabolomics has shown that CRC cells can absorb lipids and lipid metabolites from CAFs to compensate for their low expression of SCD [125]. Conversely, CRC cells can also secrete FA metabolites that affect the migratory function of CAFs. For example, CRC cell-derived 12(S)-HETE, a proinflammatory AA metabolite, triggers a signal transduced by PLC, IP3, free intracellular Ca2+, Ca2+-calmodulin kinase II, RHO/ROCK and MYLK, leading to the activation of myosin light chain 2 and the subsequent motility of CAFs [126].
Conclusion
CRC is a complex disease involving multiple genes, steps and stages, marked by genetic changes, signaling pathway disorders and metabolic remodeling. Although the mortality rate of CRC has declined over the past 10 years, it still threatens human health, especially as the incidence among young patients increases. Disordered lipid metabolism has become a recognized feature of several cancers, owing mainly to technological advances in this field. Specifically, high-throughput methods such as lipidomics, chemical imaging and functional genomics make lipid identification and characterization possible, and several studies have used these methods to describe the lipid metabolism characteristics of various cancers, including CRC [127].
In this review, we summarize the characteristics of abnormal lipid metabolism in CRC and describe the signaling pathways activated by abnormal lipid metabolism that lead to the occurrence, development, and metastasis of CRC. In addition, abnormal lipid metabolic interactions between cancer cells and the TME during CRC progression are discussed. Remodeling of the lipid metabolism of immune cells, or competition with cancer cells for FAs, can lead to tumor immunosuppression and immune escape. At the same time, other components of the TME, such as CAFs, can support the metabolic needs of CRC cells by secreting FAs, thereby contributing to tumor invasion and metastasis (Fig. 2).
A growing number of studies have confirmed that lipid metabolism is involved in multiple aspects of CRC cell biology [128]. The biosynthesis, uptake and modification of lipids affect not only the proliferation and survival of CRC cells but also tumor cell migration, invasion and tumor angiogenesis, through complex signaling pathways and remodeling of the TME. Some inhibitors of FA-metabolizing enzymes and transcription factors have already entered preclinical and clinical anti-tumor research [129]. Screening natural active compounds that target abnormal tumor lipid metabolic pathways and related enzymes, together with the development of new drugs, will open up new avenues for the treatment of CRC.

Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: Jiateng Zhong, Huifang Zhu; data collection: Jingyu Guo, Xinyu Zhang, Shuang Feng, Wenyu Di; analysis and interpretation of results: Jiateng Zhong, Yanling Wang, Huifang Zhu; draft manuscript preparation: Huifang Zhu. All authors reviewed the results and approved the final version of the manuscript.
Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study. | 2023-02-05T16:05:50.821Z | 2023-02-03T00:00:00.000 | {
"year": 2023,
"sha1": "58d2f5f1bd1dde0aa597ccb2e3bb5f19feb2de1a",
"oa_license": "CCBY",
"oa_url": "https://file.techscience.com/files/or/2022/TSP_OR-30-5/OncolRes-30-05-27900/OncolRes-30-27900.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4dc75b4f7904e796d52c19ba9d36033552205b8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
258677013 | pes2o/s2orc | v3-fos-license | Evaluation of Serial Testing After Exposure to COVID-19 in Early Care and Education Facilities, Illinois, March–May 2022
Objective: To understand SARS-CoV-2 transmission in early care and education (ECE) settings, we implemented a Test to Stay (TTS) strategy, which allowed children and staff who were close contacts to COVID-19 to remain in person if they agreed to test twice after exposure. We describe SARS-CoV-2 transmission, testing preferences, and the number of in-person days saved among participating ECE facilities. Methods: From March 21 through May 27, 2022, 32 ECE facilities in Illinois implemented TTS. Unvaccinated children and staff who were not up to date with COVID-19 vaccination could participate if exposed to COVID-19. Participants received 2 tests within 7 days after exposure and were given the option to test at home or at the ECE facility. Results: During the study period, 331 TTS participants were exposed to index cases (defined as people attending the ECE facility with a positive SARS-CoV-2 test result during the infectious period); 14 participants tested positive, resulting in a secondary attack rate of 4.2%. No tertiary cases (defined as a person with a positive SARS-CoV-2 test result within 10 days after exposure to a secondary case) occurred in the ECE facilities. Most participants (366 of 383; 95.6%) chose to test at home. Remaining in-person after an exposure to COVID-19 saved approximately 1915 in-person days among children and staff and approximately 1870 parent workdays. Conclusion: SARS-CoV-2 transmission rates were low in ECE facilities during the study period. Serial testing after COVID-19 exposure among children and staff at ECE facilities is a valuable strategy to allow children to remain in person and parents to avoid missing workdays.
Early care and education (ECE) programs (eg, Head Start, center-based childcare, other preschool learning centers) play a vital role in the lives of young children and their families by providing early learning and development opportunities and by allowing parents to work. 1,2 The COVID-19 pandemic has caused increased absences and staffing shortages in ECE facilities throughout the United States due to isolation and home quarantine requirements, which have led to negative effects, including loss of wages for parents who are unable to attend work and diminished educational and social opportunities for children.
ECE facilities face complex challenges when preventing and controlling the spread of SARS-CoV-2, the virus that causes COVID-19, among staff and young children. Most attendees are aged <5 years, making them ineligible for COVID-19 vaccinations during the first 2 years of the pandemic, when most people were required to quarantine at home for at least 5 days after an exposure to COVID-19. 3 Centers for Disease Control and Prevention (CDC) guidance for COVID-19 prevention in ECE settings emphasized using multiple prevention measures, including staff vaccination and nonpharmaceutical interventions, such as wearing face masks and social distancing, which can be difficult for young children to adhere to consistently. 4,5 Test to Stay (TTS) is a modified quarantine strategy adopted by kindergarten through grade 12 (K-12) schools during the COVID-19 pandemic that allowed asymptomatic staff and students who were not up to date with COVID-19 vaccination at the time of exposure to continue to attend in person if they adhered to serial testing. TTS implementation among K-12 schools allowed SARS-CoV-2 transmission to remain low and benefited students by preserving in-person learning days. [6][7][8][9] To our knowledge, no studies have evaluated TTS and the implementation of serial testing after an exposure to COVID-19 in ECE settings; therefore, we conducted an evaluation of the implementation of TTS in 32 ECE facilities in Illinois. We described secondary and tertiary transmission rates of SARS-CoV-2, identified testing preferences among TTS participants, and estimated the number of in-person attendance days and parent workdays saved due to children and staff remaining in person after exposure to COVID-19.
Facility Enrollment
The Illinois Department of Public Health (IDPH) identified 2 local health departments (LHDs), the Chicago Department of Public Health and the Lake County Health Department, to implement TTS in ECE facilities from March 21 through May 27, 2022. The 2 LHDs identified 32 ECE facilities (23 and 9, respectively), which included private ECE centers, faith-based ECE programs, and Head Start programs located across geographic locations representing 19 zip codes. The 2 LHDs identified participating ECE facilities based on the ability of facilities to implement TTS and to report contact tracing and case investigation data to the LHD during the study period. Each ECE facility completed an enrollment form, which recorded age, sex, race, ethnicity, and vaccination status of staff and children, plus the COVID-19 prevention strategies implemented at the facility. This activity was reviewed by CDC and did not require institutional review board approval. Data collection was consistent with applicable federal law and CDC policy.
TTS Protocol
We defined an ECE index case as a person attending the ECE facility with a positive SARS-CoV-2 test result (by polymerase chain reaction [PCR] or rapid antigen test) during the infectious period (from 48 hours before to 10 days after symptom onset or, if asymptomatic, 10 days after a positive test result) if not first identified as a close contact at the ECE facility during their incubation period (10 days before symptom onset or, if asymptomatic, 10 days before a positive test result). We defined a community index case as a person outside the ECE facility (eg, household member, teammate) with a positive SARS-CoV-2 test result who exposed a staff member or child participating in TTS during their infectious period. We defined a secondary case as a person with a positive SARS-CoV-2 test result within 10 days after exposure to any index case. We defined a tertiary case as a person with a positive SARS-CoV-2 test result within 10 days after exposure to a secondary case. We included close contacts in the dataset more than once if they were exposed to COVID-19 and participated in TTS more than once during the study period.
We considered a person to be a close contact if they came within 6 feet of a person with COVID-19 for a cumulative total of 15 minutes or more during a 24-hour period, regardless of face mask use. 3 Close contacts aged <18 years were not required to quarantine if they had completed a primary COVID-19 vaccine series. ECE staff close contacts were not required to quarantine if they were up to date with COVID-19 vaccination, defined as having received a primary series and all boosters recommended when eligible. Close contacts with a positive COVID-19 test in the past 90 days who remained asymptomatic were not required to quarantine.
Close contacts were eligible to participate in TTS if they were unvaccinated, partially vaccinated, or not up to date with COVID-19 vaccination; aged ≥2 years at the time of exposure; remained asymptomatic while enrolled in TTS; and gave personal consent (staff) or parental consent (children). Children aged <2 years were ineligible for TTS because at-home antigen tests had not been authorized for this age group and they were unable to wear face masks. Eligible close contacts could participate in TTS if their exposure occurred either while attending the ECE or outside the ECE setting. Staff TTS participants were required to wear face masks, whereas child participants were encouraged to wear face masks to the best of their ability. TTS participants were tested twice within 7 days of last exposure to COVID-19, with the second test occurring between days 5 and 7 after exposure.
Children identified as close contacts who did not meet TTS eligibility criteria or opted out of TTS were required to quarantine at home for at least 5 days, depending on their respective ECE facility's protocol. If the child returned to the ECE facility before day 10, wearing a face mask was required to the best of the child's ability. Staff who were identified as close contacts but did not meet TTS eligibility criteria or opted out of TTS were required to quarantine at home for at least 5 days and then wear face masks consistently after return to the ECE through day 10 after exposure.
For this analysis, we excluded data from facilities when substantive deviations occurred from the original TTS protocol developed by IDPH, such as increasing the testing cadence to once daily, requiring staff who were up to date with COVID-19 vaccination to be tested after an exposure, or not conducting contact tracing.
COVID-19 Testing
ECE facilities implemented COVID-19 testing for TTS participants on Mondays and Thursdays of each week. Participants had 3 testing options to choose from: (1) the SHIELD Illinois saliva-based PCR test collected at home by parents and brought to the ECE facility for pickup and testing, (2) the SHIELD Illinois saliva-based PCR test collected at the ECE facility by SHIELD Illinois staff, or (3) a rapid antigen test collected at home by parents, which was provided by the ECE facility or purchased privately. Turnaround time for PCR test results ranged from 24 to 48 hours. Turnaround time for antigen test results collected at home varied; parents or staff provided either verbal confirmation of the test result or a picture of the test result to the ECE facility.
Case Investigation and Contact Tracing
ECE facilities collected data from March 21 through May 27, 2022. ECE staff conducted contact tracing and case investigation with assistance from their respective LHDs and entered data on demographic characteristics, exposure dates, type of quarantine, and test results into an electronic spreadsheet. ECE staff then submitted the electronic spreadsheets to the LHD, which conducted data verification and cleaning before entering data into REDCap, a secure web-based data collection database hosted by IDPH. Contact tracers then reviewed data in REDCap to ensure completeness and conducted 10-day follow-up telephone calls with cases and close contacts.
Contact tracers attempted to reach cases and close contacts by telephone at least twice; if a case or close contact was aged <18 years, the parents were interviewed. Contact tracers called index and secondary cases at least 10 days after their positive COVID-19 test date to inquire about symptom onset and whether additional SARS-CoV-2 transmission occurred outside the ECE setting (eg, among household members). Contact tracers called TTS-eligible contacts at least 10 days after their last date of exposure to COVID-19 to verify test results, ask about symptoms, and confirm whether they participated in TTS or quarantined at home.
In-Person Days Saved
Parent workdays saved may include days parents would have missed work to provide care for their child or would have needed to arrange for non-ECE childcare to continue to work outside the home. We calculated the number of in-person days saved by assuming an average of 5 days of quarantine averted for each TTS participant. We assumed that delays in the reporting of positive test results may have created a lag in notification of exposures, resulting in shorter quarantine periods or that, in some instances, parent preference or ECE facility protocol resulted in a longer home quarantine of 7, 10, or 14 days.
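As a minimal sketch of this arithmetic (in Python; the 5-day average quarantine averted is the study's stated assumption, and the participant counts are those reported in the Results):

```python
AVG_DAYS_AVERTED = 5   # the study's assumed average quarantine length averted

def days_saved(n_participants, avg_days=AVG_DAYS_AVERTED):
    """In-person (or work) days saved by avoiding home quarantine."""
    return n_participants * avg_days

n_staff, n_children = 9, 374             # TTS participants (person-exposure events)
print(days_saved(n_staff + n_children))  # 383 * 5 = 1915 in-person ECE days
print(days_saved(n_staff))               # 45 staff workdays
print(days_saved(n_children))            # 1870 parent workdays
```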
Statistical Analysis
We calculated counts and proportions of demographic characteristics of TTS participants, the type of COVID-19 test, and the reported testing location. We computed Clopper-Pearson exact 95% CIs for secondary and tertiary attack rates. We used SAS version 9.4 (SAS Institute, Inc) for all analyses.

Results

A total of 331 children and staff participated in TTS after exposure to index cases (Figure). The index cases also generated 90 close contacts who were exempt from quarantine due to up-to-date vaccination status (65 staff and 25 children aged ≥5 years) and 59 TTS-ineligible close contacts (32 children aged <2 years and 27 children with an unspecified reason who quarantined at home). Of the 331 ECE children and staff who participated in TTS, 14 were identified as secondary cases, of whom 7 were symptomatic. The 14 secondary cases in turn generated 54 TTS-eligible close contacts, of whom 52 (96.3%) participated in TTS and 2 (3.7%) quarantined at home; 8 additional close contacts (5 staff members and 3 children aged ≥5 years) were exempt from quarantine due to up-to-date vaccination status. The 52 close contacts who participated in TTS did not generate any tertiary cases in the ECE facility; however, 3 tertiary cases were identified among household members of the 14 secondary cases. Two household tertiary cases were up to date with COVID-19 vaccination, and 1 was unvaccinated.
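As a minimal sketch of the Clopper-Pearson exact interval named under Statistical Analysis, applied to the 14 secondary cases among 331 TTS participants (the study used SAS; the scipy-based version here is an assumed equivalent, not the authors' code):

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion k/n."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

k, n = 14, 331                  # secondary cases among TTS participants
print(round(100 * k / n, 1))    # 4.2 -> the reported secondary attack rate (%)
print(clopper_pearson(k, n))    # exact 95% CI for the proportion
```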
Thirteen of the 29 TTS-eligible close contacts who elected to quarantine at home reported COVID-19 test results; 1 had a positive test result. Of the 70 staff who were up to date with COVID-19 vaccination and, thus, not required to quarantine, 30 reported COVID-19 test results; 1 had a positive test result. Twenty of 28 children aged ≥5 years who were fully vaccinated (and, thus, not required to quarantine) reported COVID-19 test results; none had a positive test result.
Case Investigation and Contact Tracing
Contact tracers completed 10-day follow-up telephone calls for 11 of 35 (31.4%) index cases, 5 of 14 (35.7%) secondary cases, and 151 of 400 (37.8%) TTS participants with negative test results or who were eligible for TTS but participated in home quarantine.
In-Person Days Saved
Assuming an average quarantine of 5 days averted among 383 staff and children who participated in TTS, we estimated that 1915 in-person ECE days were saved. Of the 9 TTS staff participants, we estimated that 45 staff workdays were saved; among the families of the 374 children who participated in TTS, we estimated that 1870 parent workdays were saved.
Discussion
This study evaluated transmission of SARS-CoV-2 in 32 ECE facilities in Illinois that implemented serial testing of close contacts to COVID-19 from March 21 through May 27, 2022. Although the participating ECE facilities had varied testing and reporting strategies, our results indicate that the secondary transmission rate was low among participants who tested after an exposure to COVID-19, and no tertiary transmission occurred in the ECE facilities. These findings are consistent with low SARS-CoV-2 transmission rates reported among K-12 schools implementing TTS. 6,7,9 In addition to implementing TTS, the ECE facilities that participated in our study followed IDPH recommendations to limit SARS-CoV-2 transmission, including optimizing ventilation and requiring symptomatic children to stay home, which likely also kept transmission low. During the study period, both Chicago and Lake County experienced a substantial increase in community COVID-19 incidence (7-day rolling average number of cases per 100 000 people); incidence rates increased 368% (from 62.9 to 294.3) in Chicago and 462% (from 60.6 to 340.3) in Lake County. 10 The reported increase in community incidence was not reflected among participating ECE facilities, thus supporting previous findings that SARS-CoV-2 transmission in ECE settings remains low during periods of high community incidence. 11 Our findings showed that most eligible close contacts chose to remain in-person after an exposure to COVID-19. As a result, a substantial number of ECE in-person attendance days and workdays were saved among participants and their parents. Staff participants and parents of TTS participants in our study chose the type of test and the location of specimen collection, with most choosing to test at home. When COVID-19 exposures occur in ECE settings, offering home testing to monitor transmission may be beneficial for ECE staff, because it will decrease the time it takes to collect specimens and track results. ECE facilities included in our study designated Mondays and Thursdays for testing, which eliminated the time it would have taken for staff and parents to track the unique testing schedule of each participant based on the date of their last exposure, while also ensuring that participants received 2 tests during the 7-day period after exposure, with the second test occurring 5-7 days after exposure.
Our study was conducted when quarantine was recommended for people not up to date with COVID-19 vaccinations after an exposure and when children aged <5 years were ineligible for COVID-19 vaccination and were required to quarantine at home after an exposure. Despite COVID-19 vaccine being approved for children aged 6 months to 4 years in June 2022, vaccine uptake has remained low among children; in July 2022, fewer than one-third (29.8%) of children aged 5-11 years were fully vaccinated. 10 In August 2022, CDC updated guidelines to no longer recommend quarantine after an exposure to COVID-19, instead recommending the use of well-fitted face masks through day 10 after exposure and a COVID-19 test on day 6 or later if symptoms have not developed, regardless of vaccination status. 12 While a TTS model such as the one we describe here may not be required to allow children to remain in person after an exposure to COVID-19, testing will remain crucial in reducing transmission of SARS-CoV-2 and is especially important in settings such as ECE facilities, where close contacts may not be able to use face masks consistently.
Limitations
Our study had some limitations. First, the 10-day follow-up call response rate was only 35.8%, which may have resulted in missing information, such as household tertiary cases. Second, we did not identify tertiary transmission in the ECE facilities during the study period; however, COVID-19 incubation and infectious periods vary and overlap, making it difficult to explicitly identify tertiary SARS-CoV-2 transmission when several exposures occur among a group of classmates. Third, when recruiting ECE facilities for the study, some facilities declined participation because testing support or available funding was not guaranteed after the study ended. This factor may have resulted in a selection bias toward ECE facilities with more resources compared with facilities that chose not to participate. Finally, our findings demonstrate the implementation of TTS in a real-world setting. True transmission rates of SARS-CoV-2 among ECE facilities may differ depending on local guidance, ability to conduct contact tracing, quality of home specimen collection, validation of self-reporting results, and sensitivity of home tests.
Conclusions
To our knowledge, our study is the first to examine the use of serial testing of students and staff after an exposure to COVID-19 in the ECE setting. The SARS-CoV-2 transmission rate was low among ECE facilities during the study period. Both children and parents benefit from children attending ECE programs, and low transmission rates allow children to remain in person for childcare and education and enable parents to work outside the home without having to arrange alternative childcare. In addition to implementing proven infection prevention strategies to support safe in-person learning, ECE facilities might consider implementing testing programs that offer home testing for close contacts of COVID-19-positive people to monitor transmission of SARS-CoV-2 and allow staff and young children to remain safely in person. | 2023-05-15T06:16:17.311Z | 2023-05-13T00:00:00.000 | {
"year": 2023,
"sha1": "e64382fda3a52bdd04c50207e065c929e4db0786",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10185474",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "200a9daa0e7d99097dde1f5dfd089d86c63b05db",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4665210 | pes2o/s2orc | v3-fos-license | Biological potency and characterization of antibacterial substances produced by Lactobacillus pentosus isolated from Hentak, a fermented fish product of North-East India
Lactic acid bacteria (LAB) isolated from various foods are important due to their potential to inhibit microorganisms, including drug-resistant bacteria. The objectives of this investigation were to isolate and identify antibacterial substance-producing LAB from Hentak, a traditional fermented fish product of Manipur (North-East India), and to optimize the production of antagonistic substances present in cell free neutralized supernatant (CFNS) against enteric bacterial pathogens using the 'one factor at a time' (OFAT) method. Out of 10 LAB, the most potent bacterium producing antibacterial substances was isolated and identified as Lactobacillus pentosus strain LAP1 based upon morphological, biochemical and molecular characterization. MRS (de Man, Rogosa and Sharpe) medium provided better bactericidal activity (AU/ml) than the other tested media against the indicator enteric bacteria, including Staphylococcus epidermidis MTCC 3615, Micrococcus luteus MTCC 106, Shigella flexneri MTCC 1457, Yersinia enterocolitica MTCC 840 and Proteus vulgaris MTCC 1771. The culture conditions (pH: 5, temperature: 30 °C and inoculum volume: 1 %) and medium components (carbon source: lactose and nitrogen source: ammonium chloride) were observed to be the most influential parameters for significant antagonistic activity of the CFNS against the enteric pathogens. MRS medium supplemented with Tween20 effectively stimulated the yield of antibacterial substances. The CFNS of strain LAP1 was sensitive to proteolytic enzyme (pepsin) treatment and to heat treatment (60 °C for 60 min, 100 °C for 30 min and 121 °C for 15 min), losing its inhibitory properties. The CFNS was active at acidic (pH 3.0) to neutral pH (pH 7.0) but lost its antagonistic properties at alkaline pH. The CFNS obtained from strain LAP1 scavenged DPPH (1,1-diphenyl-2-picrylhydrazyl) significantly in a concentration-dependent manner, within the range of 8.8 ± 0.12–57.35 ± 0.1 %. The OFAT-based approach provides a baseline for statistical optimization, scale-up and efficient production of CFNS by L. pentosus strain LAP1, which could be used as a potential antibacterial and free radical scavenging agent.
Background

Fermented foods are among the essential constituents of the human diet. Fermented food products are considered a good source of industrially important microorganisms (Rejiniemon et al. 2015; Jagadeesh 2015; Ilavenil et al. 2015). Similar to other states in North-East India, Manipur has a rich tradition of food processing and preservation technologies. Fermented foods of aquatic origin are still widely prepared and consumed in Manipur. Hentak is a highly consumed fermented fish product in Manipur and is mainly prepared at the household level in a cost-effective manner. However, there is a lack of knowledge regarding the bacteria involved in the increased shelf life of these products and the health benefits of those bacteria in humans. Therefore, an attempt was made to isolate bacteria that produce antibacterial substances from Hentak and to characterize their inhibitory activity against human enteric pathogens.
Probiotics are non-pathogenic microorganisms known to compete with pathogens for available space, to secrete lytic enzymes, organic acids and bacteriocins, and to inhibit the growth of pathogens by disrupting their virulence gene expression, attachment and cell-to-cell communication; the term "probiotic", although widely adopted, is not accepted by the European Food Safety Authority because it embeds a health claim that is not measurable (Pena et al. 2007; Ravi et al. 2007; Verschuere et al. 2000). Lactic acid bacteria (LAB) or probiotics from fermented foods are major resources for antimicrobial biosynthesis. Gram-positive, non-sporulating bacteria play a prominent role in the production of growth inhibitory substances. LAB are safe and play an important role in food fermentation and preservation. The genus Lactobacillus belongs to the lactic acid bacteria and comprises rod-shaped, Gram-positive, non-spore-forming organisms. Lactobacillus pentosus is a lactic acid bacterium commonly used as a starter culture for fermentation processes (Ruiz-Barba et al. 1994). Certain strains of L. pentosus exert probiotic properties, improve mucosal immunity and create resistance towards bacterial infections (Kotani et al. 2010; Izumo et al. 2011).
Environmental factors such as pH, temperature and medium composition can influence the production of antagonistic substances by lactic acid bacteria. Several reports have discussed antibacterial components, especially bacteriocin production by LAB, and their optimization by altering physical factors and medium composition (Parente et al. 1994; Moortvedt-Abildgaard et al. 1995; De Vuyst and Vandamme 1992). Lactic acid bacteria and their specific components could be an eco-friendly antibacterial substitute for synthetic antibiotics. Researchers worldwide are making continuous efforts to optimize culture conditions and other parameters for the efficient production of antibacterial components from LAB that mitigate the growth of human pathogens. Therefore, in light of the growing demand for antibacterial substances for therapeutic applications, the present study was undertaken to investigate the influence of various culture conditions and medium components on the production of CFNS by a 'one factor at a time' (OFAT)-based approach using lactic acid bacteria isolated from Hentak.
Sample preparation
Fresh water small fish, Ngasang (Esomus danricus), were smoked and sun dried until they crumbled. The petioles of an aroid plant, Khonagu (Alocasia macrorhiza) were cut into small pieces, washed with water and sun dried for an hour. The crumbled fish powder was crushed with plant material in a 1:1 ratio using a stone mortar and pestle to make a paste. The mixture was kneaded with clean hands to produce ball-shaped pieces, and fermentation was allowed by keeping the mixture at room temperature for 5-6 days in an earthen pot containing a thin layer of banana leaves. The ball-shaped pieces were taken out from the pot and mixed with onions and mustard oil. The mixture was kneaded again using a stone mortar and pestle and made into a ball shape. The ball-shaped pieces were kept again inside the earthen pot containing banana leaves for 2-3 days. The fermented non-salted fish product, Hentak, was brought to the laboratory for the bacterial isolation process. The involvement of fish in the experiments was approved by the Government of India Ethical Committee (IAEC-LC 05/13).
Isolation of lactic acid bacteria
One gram of Hentak was ground with sterilized distilled water using a mortar and pestle cleaned with ethanol (95 % w/v). The mixture was centrifuged at 8000×g for 15 min in order to remove heavy particles, and the supernatant was collected. The supernatant was serially diluted (10 −1 -10 −5 ) for bacterial enumeration, and 1 ml of the suspension was poured onto sterilized MRS agar (g/l-proteose peptone 10.0, beef extract 10.0, yeast extract 5.0, dextrose 20.0, polysorbate 80 1.0, ammonium citrate 2.0, sodium acetate 5.0, magnesium sulfate 0.1, manganese sulfate 0.05, dipotassium phosphate 2.0, pH 6.5, Agar 18) plates. After spreading the suspension, the plates were incubated at 30 °C for 48 h. The total number of viable colonies was counted and expressed as colony forming units (CFU/ml). Based upon morphology, various colonies were selected for the isolation of pure bacterial cultures on MRS agar slants.
Assay for antibacterial substance production
The isolated lactic acid bacteria were screened individually for the production of antagonistic substances. The lactic acid bacteria were inoculated individually into sterilized MRS broth and incubated for 48 h at 30 °C. The indicator microorganisms were inoculated into Nutrient broth and Mueller-Hinton broth for 24 h at 37 °C and swabbed onto Mueller-Hinton agar (MHA) plates. Agar plates were punched using a sterilized, flamed and alcohol-dipped cork borer, and 5 mm wells were created. The lactic acid bacteria were centrifuged at 8000×g for 10 min, and the culture supernatant was subjected to membrane filtration (0.22 µm). The sterilized cell free supernatant was neutralized (pH 7.0) using 1 N NaOH in order to exclude the antibacterial effect of organic acids in the medium. The cell free neutralized supernatant (CFNS) was treated individually with catalase (Sigma, India; 1 mg/ml) and incubated at 37 °C for 2 h in order to eliminate the inhibitory effect of hydrogen peroxide. After catalase treatment, the CFNS obtained from lactic acid bacteria was then assayed for antibacterial assay against indicator bacteria using the agar well diffusion method. The growth inhibitory activity was expressed in arbitrary units (AU/ml). One AU was defined as the reciprocal of the highest level of dilution resulting in a clear zone of growth inhibition (Bhaskar et al. 2007).
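The AU definition above can be turned into a small calculation; as a sketch, assuming a twofold serial dilution series and a 50-µl aliquot per well (both illustrative choices, not stated in the paper):

```python
def au_per_ml(inhibition, aliquot_ul=50.0):
    """AU/ml from a twofold dilution series.

    inhibition[i] is True if the 2**-i dilution still gives a clear
    inhibition zone; AU is the reciprocal of the highest such dilution,
    scaled to a per-ml basis by the aliquot volume.
    """
    highest = -1
    for i, inhibited in enumerate(inhibition):
        if inhibited:
            highest = i                       # most dilute step with inhibition
    if highest < 0:
        return 0.0
    return (2 ** highest) * (1000.0 / aliquot_ul)

print(au_per_ml([True, True, True, False]))   # 2**2 * 20 = 80.0 AU/ml
```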
Identification and molecular characterization of the isolate
The potent bacterium was identified using morphological and biochemical tests and further characterized using molecular tools. The genomic DNA of the potential isolate was isolated and purified using a QIAquick ® kit (Qiagen Ltd., Crawley, UK). The amplicon sequencing was performed using universal primers 27F (5′ AGA GTT TGA TCG TGG CTC AG 3′) and 1492R (3′ GCT TAC CTT GTT ACG ACT T 5′). The 16S rRNA sequence of the isolate was subjected to BLAST, NCBI. Then, the sequence of the isolate was deposited into NCBI Genebank, and an accession number was assigned. The potential isolate was used for further experiments.
Media optimization
Lactobacillus pentosus strain LAP1 was inoculated individually into 250 ml conical flasks containing sterile production medium (50 ml) such as Nutrient broth, Mueller-Hinton broth, Luria-Bertani broth, MRS broth and Peptone broth to compare the production of antibacterial substances. The flasks were incubated at 30 °C for 48 h in an orbital shaker (120 rpm). The CFNS was obtained, and the arbitrary units (AU/ml) were estimated as described above against the indicator bacteria.
Optimization of culture conditions and medium components using the OFAT method
The most suitable production medium was then optimized for various culture conditions (pH, temperature and inoculum volume) and medium components (carbon sources and nitrogen sources) using the OFAT method, in which the fermentation conditions and medium components were varied one at a time while all other factors were kept constant. The production of antibacterial substances by strain LAP1 was examined by adjusting the pH (4, 5, 6, 7 and 8) of the production medium using 1 N HCl and 1 N NaOH. Similarly, the production of antagonistic substances by strain LAP1 was optimized by varying the incubation temperature (20-70 °C) and inoculum volume (0.5-2 %) at the optimized pH. Likewise, various carbon sources (maltose, fructose, sucrose, lactose and xylose, each at 1.0 % w/v) and nitrogen sources (ammonium acetate, ammonium chloride, ammonium nitrate, ammonium sulphate and sodium nitrate, each at 0.5 % w/v) were substituted in the production medium in order to achieve maximum production of antibacterial substances. An appropriate control medium was also maintained. All of the flasks were aseptically inoculated with the isolate and kept in an orbital shaker (120 rpm) for 48 h. The CFNS was collected after centrifugation at 8000×g for 10 min, followed by membrane (0.22 μm) filtration of the supernatant and neutralization. The antibacterial activity (AU/ml) of the CFNS was examined against the most susceptible indicator bacteria as described above.
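A minimal sketch of the OFAT design just described, using the factor levels listed above; assay() is a hypothetical stand-in for the AU/ml agar-well diffusion measurement, and the 1 % inoculum baseline is an assumption (the other baseline values follow the paper's standard MRS conditions):

```python
baseline = {"pH": 6.5, "temp_C": 30, "inoculum_pct": 1.0,
            "carbon": "dextrose", "nitrogen": "ammonium citrate"}
levels = {
    "pH": [4, 5, 6, 7, 8],
    "temp_C": [20, 30, 40, 50, 60, 70],
    "inoculum_pct": [0.5, 1.0, 2.0],
    "carbon": ["maltose", "fructose", "sucrose", "lactose", "xylose"],
    "nitrogen": ["ammonium acetate", "ammonium chloride", "ammonium nitrate",
                 "ammonium sulphate", "sodium nitrate"],
}

def assay(conditions):
    """Hypothetical placeholder for the AU/ml agar-well diffusion assay."""
    return 0.0

best_level = {}
for factor, values in levels.items():
    # vary one factor at a time; every other factor stays at baseline
    scores = {v: assay({**baseline, factor: v}) for v in values}
    best_level[factor] = max(scores, key=scores.get)
```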
Effect of supplements on antibacterial substance production
The production of antibacterial substances by L. pentosus strain LAP1 was assessed under optimized culture conditions in the suitable production medium supplemented with Tween20 (1 % v/v), Tween40 (1 % v/v) Tween80 (1 % v/v) and glycerol (1 % v/v). An appropriate control medium was also maintained, and the antagonistic activity of CFNS was determined as described above using the most susceptible indicator organisms.
Characterization of CFNS
The CFNS from strain LAP1 was characterized with respect to pH, heat treatment and proteolytic enzymes. The stability of CFNS at different pH values (pH 3, 5, 7, 8 and 10) was tested by adjusting the pH of the supernatant with either 1 N HCl or 1 N NaOH. The adjusted supernatants were incubated for 4 h at room temperature, and the activity was calculated using indicator bacteria. The CFNS of the isolate was subjected to heat treatment at temperatures of 60 °C for 60 min, 100 °C for 30 min, and autoclaving (121 °C/15 min). CFNS and H 2 O 2 -eliminated CFS (cell-free supernatant) without any heat treatment served as a control. Aliquots of each treatment were taken after the required incubation period, and the activity of heat treated CFNS was determined against indicator bacteria as described earlier using the agar well diffusion method. Similarly, the sensitivity of inhibitory substances produced by the isolate to proteolytic enzyme such as pepsin (1 mg/ml) was determined. The reaction mixtures were then incubated at 37 °C for 1 h, and the antagonistic activity of the supernatant was determined as described above.
Determination of DPPH free radical scavenging activity
The DPPH (2,2-diphenyl-1-picrylhydrazyl) assay is one of the most commonly used methods to detect free radical scavenging activity. The DPPH scavenging assay for the CFNS of strain LAP1 was performed by the method of Chen et al. (2005) with some modifications. Various volumes (100-1000 µl) of CFNS were mixed with 1 ml of 0.05 mM DPPH solution. The reaction was incubated in the dark at room temperature for 30 min. DPPH solution was used as the control, and a combination of CFNS and methanol was used as the blank. The DPPH scavenging capacity of the CFNS of the isolate was determined by measuring the decrease in absorbance at 517 nm compared to the control, calculated as:

DPPH scavenging activity (%) = [(Acontrol − Asample)/Acontrol] × 100,

where Acontrol is the absorbance of the DPPH control and Asample is the absorbance of the reaction mixture at 517 nm.
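A one-line sketch of this calculation (the absorbance values are illustrative, not measured data):

```python
def dpph_scavenging_pct(a_control, a_sample):
    """Percent decrease in DPPH absorbance at 517 nm relative to the control."""
    return (a_control - a_sample) / a_control * 100.0

print(dpph_scavenging_pct(0.80, 0.45))   # illustrative absorbances -> 43.75
```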
Statistical analysis
All of the experiments were performed in triplicate, and the data were calculated as the Mean ± SD with MS-Excel.
Isolation of lactic acid bacteria
Countable colonies of lactic acid bacteria were observed from dilutions of 10 −3 -10 −5 . The lactic acid bacteria from Hentak ranged from 209.0 ± 5.03 to 85.0 ± 4.0 CFU/mL at 10 −3 -10 −5 dilutions, respectively. In the present study, 10 lactic acid bacteria were isolated on MRS agar plates based upon distinct morphologies (data not shown).
Screening for antibacterial substance production
Screening for potential antagonistic activity of all the isolates against the indicator bacteria was performed using the agar well diffusion assay. Twenty percent of the isolates were found to be effective against most of the indicator bacteria. Based upon the diameter of the zone of inhibition shown by the catalase-treated CFNS of the most potent isolate, susceptible bacteria, including Staphylococcus epidermidis, Micrococcus luteus, Shigella flexneri, Yersinia enterocolitica and Proteus vulgaris, were selected for further optimization (data not shown).
Identification and molecular characterization of the isolate
The most potent bacterium underwent morphological identification, biochemical property characterization and molecular characterization using 16S rRNA sequencing (data not shown). An amplicon of 1519 bp was observed using PCR amplification and sequencing. The sequence was subjected to a multiple sequence alignment using the BLAST analysis of NCBI. The 16S rRNA sequence showed a homology of 100 % with L. pentosus. The sequence was deposited in GenBank, maintained by NCBI, USA (Accession No: KU945826), and the organism was identified as L. pentosus strain LAP1.
Media optimization
Strain LAP1 was cultured in various media in order to ensure the maximum production of antibacterial substances. MRS broth was found to be the most favourable medium for the maximal production of antagonistic substances (98.7 ± 1.1-163.3 ± 2.13 AU/ ml). The other production media resulted in minimal yield of antibacterial constituents compared to MRS medium (Fig. 1). The antibacterial substance yield of various media was as follows: MRS broth > Mueller-Hinton broth > Nutrient broth > Luria-Bertani broth > Peptone broth. Peptone broth was found to be the least effective medium for antibacterial substance production from strain LAP1, ranging from 30.4 ± 2.33 to 12.1 ± 2.31 AU/ml against the most susceptible indicator bacteria.
Optimization of culture conditions and medium components
Subsequent investigation was carried out to optimize the production of antibacterial substances (AU/ml) from strain LAP1 using the OFAT method. The culture conditions, such as pH and temperature, were optimized for maximum production of growth inhibitory substances. The production of antibacterial components was enhanced by adjusting the pH of the MRS broth. Among the tested pH, the maximum production in terms of antagonistic activity was recorded at pH 5.0 and ranged from 166.6 ± 1.65 to 240.5 ± 3.18 AU/ml. However, a further decrease or increase of pH was found to mitigate the production of antibacterial substances significantly. The minimum production was recorded at pH 8.0 and ranged from 84.1 ± 2.08 to 121.4 ± 2.17 AU/ml against the control range (pH 7.0) of 96.7 ± 1.67 to 164.3 ± 3.08 AU/ml (Fig. 2). Figure 3 shows the effect of incubation temperature on antibacterial substance production from strain LAP1. The maximum production of 175.6 ± 2.34 to 245.5 ± 2.41 AU/ml was recorded at 30 °C, and a temperature lower or higher than 30 °C markedly decreased the production of antibacterial substances. The minimum yield was within the range of 18.3 ± 2.08 to 22.1 ± 2.17 AU/ml at 70 °C over the control range.
Different inoculums of strain LAP1 did not show any significant effect on the antagonistic activity of CFNS obtained against the indicator enteric bacteria (Fig. 4). The antibacterial substance production was higher (168.4 ± 2.41 to 305.4 ± 2.43 AU/ml) at the 1 % inoculum level. However, no further increase in production was observed at lower (0.5 %) or higher volumes of inoculum (2 %).
Strain LAP1 produced growth inhibitory components at a higher level (178.3 ± 2.41-310.4 ± 2.43 AU/ml) when the carbon source of MRS medium was substituted with lactose. On the other hand, the minimum antagonistic activity (41.3 ± 1.67-54.2 ± 3.08 AU/ml) was observed in xylose-supplemented medium, versus a control MRS medium range of 174.6 ± 1.23 to 244.5 ± 2.43 AU/ml (Fig. 5). Similar to the carbon source, the nitrogen source also favoured the optimal production of antagonistic substances from strain LAP1 (Fig. 6). The production of antibacterial substances from the isolate was higher (164.3 ± 1.65-302.3 ± 3.18 AU/ml) in the presence of ammonium chloride. However, the minimum production (98.3 ± 2.34-162.3 ± 2.41 AU/ml) was obtained in ammonium nitrate-supplemented medium, versus the control range (175.6 ± 1.1-240.5 ± 2.13 AU/ml).

[Fig. 2 caption: Effect of pH on the production of antibacterial substances by strain LAP1. An acidic pH (pH 5) favours the increased production of antibacterial substances from the isolate. Each point represents the mean ± standard error of three independent experiments.]
Effect of supplements
MRS medium supplemented with Tween20, Tween40, Tween80 and glycerol markedly affected the production of antibacterial substances by the candidate bacterium. The largest amount of antibacterial components (272.2 ± 1.65-472.3 ± 3.18 AU/ml) was produced in the MRS medium supplemented with Tween20 compared to the other tested supplements. Incorporation of Tween40, Tween80 and glycerol decreased the antagonistic activity of CFNS compared to the control range (Fig. 7).
Characterization of the CFNS of strain LAP1
The stability of the catalase-treated CFNS of strain LAP1 at different pH and temperatures and in the presence of proteolytic enzymes is presented in Table 1. The antibacterial substances showed activity (AU/ml) at pH 3, 5 and 7 (control). However, elevating the pH toward alkaline conditions diminishes the antagonistic activity of the CFNS against the indicator bacteria. The CFNS of the isolated strain did not show any antagonistic activity against the indicator bacteria at pH >7.0. Heating the CFNS of strain LAP1 at 60 °C for 60 min, 100 °C for 30 min, and 121 °C for 15 min completely abolished the antagonistic activity of the bacteriocin against all of the indicator bacteria tested. Likewise, all of the potential proteinaceous components present in the CFNS of strain LAP1 Fig. 6 Effect of various nitrogen sources (% w/v) on the production of antibacterial substances by strain LAP1. Ammonium chloride addition increased antibacterial substance production when substituted for ammonium citrate (control). Each point represents the mean ± standard error of three independent experiments MRS medium supplemented with Tween20 resulted in enhanced antibacterial substance production compared to control (MRS medium without any supplements). Each point represents the mean ± standard error of three independent experiments were completely inactivated by the pepsin, resulting in the disappearance of the zone of inhibition on the agar plates inoculated with indicator bacteria. Figure 8 shows the antioxidant activity of the CFNS of the isolate using DPPH free radicals. The scavenging potential of the cell free neutralized supernatant of the isolate increased significantly in a dose dependent manner (100-1000 µl). The antibacterial substance showed DPPH scavenging activity in the range of 8.8 ± 0.12-57.35 ± 0.1 % when compared to ascorbic acid (60.2 ± 0.11-92.1 ± 0.8 %).
Discussion
LAB produce a variety of antibacterial substances, including bacteriocins and bacteriocin-like components, that inhibit the growth of pathogenic bacteria (Yasmeen et al. 2015; Ekhay et al. 2013). The isolation and screening of bacteria from natural sources is a successful way to obtain strains with valuable medical applications (Yang et al. 2012). All of the isolates in the preliminary study in the present context revealed a varying degree of antagonistic activity against indicator organisms by secreting different types of antibacterial substances. The growth inhibition of indicator bacteria by catalase-treated CFNS provided evidence that the antagonistic activity might be due to the production of antibacterial components (Yasmeen et al. 2015). Previous studies have reported extensively on the dominance of LAB in fermented foods such as meat, fish, fruits, vegetables and dairy products (Grosu-Tudor et al. 2014; Hwanhlem et al. 2011). Media play a very important role in the successful isolation of lactic acid bacteria and in maximizing the production of antibacterial substances from LAB. In the present context, strain LAP1 produced the maximum amount of inhibitory components in MRS medium. Our study supports earlier reports suggesting that MRS medium is a better medium for the growth of probiotic bacteria and the production of antibacterial substances (Yang et al. 2012; Ten Brink et al. 1994). The low production of antibacterial substances recorded in other media suggests that a high yield of growth inhibitory components from the isolate depends upon the specific nutrients supplied in the medium for biomass production.
In the present investigation, the production of antibacterial substances from strain LAP1 was enhanced by optimizing the pH of the medium (pH 5.0). Our study strongly favours the findings of Iyapparaj et al. (2013), who demonstrated the maximum production of bacteriocin from lactic acid bacteria at pH 5.0. On the other hand, our results showed partial agreement with the findings of Zamfir et al. (2000), Aasen et al. (2000), Yang and Ray (1994) and Todorov and Dicks (2005), who observed maximum bacteriocin production in the range of pH 4.5-6.0. Maximum bacteriocin production was observed at an initial pH of 5.8, while a further increase in pH decreased the antagonistic activity (Verellen et al. 1998). In another similar study, a change in pH from lower to higher decreased the production of antibacterial substances in LAB (Cheigh et al. 2002). The variation in the production of growth inhibitory components with the change in the pH of the production medium might be due to changes in the biomass of the bacteria, post-translational modification or modification of the genes responsible for antagonistic characteristics (Liu and Chung 2005). In general, the production of antibacterial substances from strain LAP1 was stimulated at pH 5.0. Likewise, the effect of incubation temperature is a very critical parameter for the production of antibacterial substances such as bacteriocin (Delgado et al. 2007;Leaes 2011). Growth temperature and antagonistic substance production from lactic acid bacteria are often correlated, as indicated in the present report. Our study favours the findings of Iyapparaj et al. (2013) and Moonchai et al. (2005), who reported that the production of antibacterial substances from LAB was maximal at 30 °C. The present investigation was in complete agreement with the finding of Ekhay et al. (2013), who demonstrated that maximal antibacterial substance production by the bacterium correlates with the optimal cell growth temperature. However, the maximum production of growth inhibitory proteinaceous components was achieved at a temperature which was far from the incubation temperature required for cell growth (Messens and De Vuyst 2002). The inoculum volume (1 %) of strain LAP1 showed improved antagonistic activity of the CFNS against the indicator bacteria, but the rate of antibacterial substance production was not much influenced. This clearly indicated that the synthesis of growth inhibitory components from strain LAP1 was correlated with the specific cell biomass. Further extensive investigation is required to evaluate culture parameters to correlate the production of antibacterial substances and cell growth for specific strains.
In the present context, lactose was the most effective sole substrate for enhancing antibacterial component production by strain LAP1 against the indicator bacteria. In agreement with our study, Iyapparaj et al. (2013), Abo-Amer (2011) and Moreno et al. (2003) showed maximum bacteriocin yield by LAB in the presence of lactose as the carbon source in the production medium. On the other hand, the antagonistic activity of bacteriocin has been reported to increase when glucose was added to the medium (Ekhay et al. 2013; Todorov 2008). Previous reports and the present investigation clearly indicate that a specific substrate can induce or inhibit the antagonistic activity of the CFNS in a strain-dependent manner. According to the results obtained in our study, the rate of antibacterial substance production by strain LAP1 was affected by the addition of different nitrogen sources, although the antagonistic activity was not greatly influenced by the addition of ammonium chloride to the production medium. This finding agrees with Ekhay et al. (2013), who demonstrated that the incorporation of inorganic nitrogen into the medium had no effect on increased bacteriocin production. However, the present study does not agree with the finding of Iyapparaj et al. (2013), who reported that an increase in antagonistic activity was attributable to an inorganic nitrogen source, such as ammonium acetate. The production of antibacterial substances, such as bacteriocin, has also been found to be inhibited by higher concentrations of nitrogen incorporated into the medium.
MRS medium supplemented with Tween20 has been reported to induce the synthesis of antagonistic substances (Castro et al. 2011), as was also shown in the present study. The increased production by strain LAP1 in the presence of Tween20 in MRS broth may be because Tween20 is a non-ionic surfactant and can therefore enhance the release of growth inhibitory components by affecting the bacterial cell membrane, allowing antibacterial substances to be secreted directly into the medium. In other reports, broth supplemented with glucose and Tween80 inhibited the growth of a broad range of indicator bacteria by inducing the production of antibacterial substances (Iyapparaj et al. 2013; Verellen et al. 1998).
The CFNS showed stability and activity at both acidic and neutral pH (control). The results of the present study suggested that the antagonistic properties of the CFNS of the isolate against the indicator bacteria were due to the potent antibacterial substances, not because of the acidic environment. The stability of the antibacterial components at a low pH may be important in medicine as a potential antibacterial agent. These results are comparable with the reports of Messens and De Vuyst (2002), Yang et al. (2012) and Yasmeen et al. (2015), who demonstrated stability and better antagonistic activity of bacteriocins at an acidic pH.
Incubating the CFNS of strain LAP1 at different temperatures completely abolished the inhibitory properties of the antibacterial substances. These results demonstrated that the heat-labile antibacterial substances might be responsible for the inhibitory activity of the CFNS of the isolate. The results obtained in the current study provide one more significant step towards the study of the CFNS of L. pentosus as an antibacterial agent.
The sensitivity of the antibacterial substances towards proteolytic enzymes strongly established the proteinaceous nature of the CFNS obtained from L. pentosus strain LAP1. The result was in complete agreement with the findings of Bromberg et al. (2004), Sabia et al. (2014) and Yasmeen et al. (2015), who found that pepsin inhibited the antagonistic activity of most of the antibacterial substances produced by lactic acid bacterial strains.
Free radicals are end products of metabolic processes, and antioxidants are known to scavenge the free radicals produced inside the body. In the current study, the CFNS of strain LAP1 showed significant antioxidant activity (8.8 ± 0.12-57.35 ± 0.1 %) compared to ascorbic acid (60.2 ± 0.11-92.1 ± 0.21 %) in a concentration-dependent manner (100-1000 µl). Similar DPPH inhibition by LAB cell free supernatants was observed by Uugantsetseg and Batjargal (2014), who found that the antioxidant activity of the CFNS of their isolates was in the range of 26.1-38.4 %. The DPPH free radical scavenging potential of the CFNS obtained from strain LAP1 likely underlies its antioxidant properties and is directly correlated with the concentration of antibacterial substances, owing to the proteinaceous compounds and secondary metabolites present in the CFNS of the isolate.
Conclusion
From the present investigation, it is clear that Lactobacillus pentosus strain LAP1 isolated from Hentak produced antibacterial substances with growth inhibitory properties against human enteric pathogens. The maximum production of antibacterial substances was obtained in MRS broth supplemented with Tween20 utilizing optimized culture conditions and medium components. Additionally, the CFNS obtained from the isolate demonstrated antioxidant activity by scavenging DPPH in a dose-dependent manner. The OFAT optimization data on antibacterial substance production provides strong preliminary information for further investigation on the statistical optimization and bio-preservative role of CFNS for cost-effective industrial applications. The antagonistic substances from L. pentosus strain LAP1 could be used not only as a barrier to the growth of enteric pathogens but also for developing food products with antioxidant properties. An extensive study needs to be performed to explore the potency of antibacterial substances as an alternative therapy against disease-causing enteric bacteria. | 2018-04-03T06:13:46.494Z | 2016-10-07T00:00:00.000 | {
"year": 2016,
"sha1": "4acf7f4ff97c1d2739275df520fee00b6db48126",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s40064-016-3452-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4acf7f4ff97c1d2739275df520fee00b6db48126",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
17801076 | pes2o/s2orc | v3-fos-license | Osteoporosis Self-Assessment Tool Performance in a Large Sample of Postmenopausal Women of Mendoza, Argentina
The Osteoporosis Self-assessment Tool (OST) is a clinical instrument designed to select patients at risk of osteoporosis, who would benefit from a bone mineral density measurement. The OST takes into account only the age and weight of the subject. It was developed for Asian women and later validated for European and North American white women. The performance of the OST was assessed in a sample of 4343 women from Greater Mendoza, a large metropolitan area of Argentina. Dual-energy X-ray absorptiometry (DXA) scans of the lumbar spine and hip were obtained. Patients were classified as either osteoporotic (n = 1830) or nonosteoporotic (n = 2513) according to their lowest T-score at any site. Osteoporotic patients had lower OST scores (P < 0.0001). A receiver operating characteristic (ROC) curve showed an area under the curve of 71% (P < 0.0001), with a sensitivity of 83.7% and a specificity of 44% for a cut-off value of 2. The positive predictive value was 52% and the negative predictive value was 79%. The odds ratio for the diagnosis of osteoporosis was 4.06 (95% CI 3.51 to 4.71; P < 0.0001). It is concluded that the OST is useful for selecting postmenopausal women for DXA testing in the studied population.
Introduction
Osteoporosis is a systemic skeletal disorder characterized by low bone strength (arising from both low bone mass and microarchitectural deterioration), which increases the risk of fractures. Osteoporosis is a major public health problem and an important contributor to the global burden of noncommunicable disease [1].
Currently, the recommended method for the diagnosis of osteoporosis is bone mineral density (BMD) measurement by dual-energy X-ray absorptiometry (DXA) [2]. According to the World Health Organization criteria, osteoporosis is operationally defined as a BMD that "lies 2.5 standard deviations or more below the average value for young healthy women" [2].
Since, due to cost and availability, DXA scans are not recommended for screening purposes, several tools based on known clinical risk factors have been developed to identify those patients with high risk of osteoporosis, in whom actual BMD testing would be most useful in terms of diagnosis, treatment, and followup [3,4]. Some of these clinical tools, or aids in decision making, include many factors, making calculation of risk cumbersome [5,6]. Arguably the simplest decision rule is the Osteoporosis Self-assessment Tool (OST) which only takes into account body weight and age, which in adult populations are, respectively, related inversely and directly to the risk of osteoporosis [7].
The OST was developed for predicting the risk of a femoral neck T-score at or below −2.5 in Asian postmenopausal women [8] and later validated for Caucasian European and US postmenopausal women [9]. In these populations, the performance of the OST was similar to those of more complex clinical risk assessment tools [3, 10-12]. Although a related tool, called OsteoRisk, has been validated for Latin American postmenopausal women [13], no direct assessment of the OST has yet been performed in this region.
The current prevalence of osteoporosis and the incidence of osteoporotic fractures in Latin America are similar to those of Southern Europe [14-16], but lower than those of Northern Europe and the United States [1,2]. However, a significant increase in the incidence of osteoporotic fractures is expected to occur in Latin America in the next few years, according to a World Health Organization report [2]. This highlights the need for improving clinical assessment and selection of women for BMD testing.
In this report, the performance of the OST in a sample of postmenopausal women from western Argentina was assessed.
Participants.
The province of Mendoza in Western Argentina has a population of 1,742,000 inhabitants according to the 2010 census [17]. About 62% of the population lives in Greater Mendoza, the fourth largest metropolitan area of the country, which includes about 133,000 women aged 50 years or older. The current sample included 4343 women referred to the Bone Densitometry Unit of the Nuclear Medicine School for a first (diagnostic) DXA scan of lumbar spine and hip. Women with Paget's disease, primary hyperparathyroidism, or severe hip osteoarthritis were excluded.
The research protocol was reviewed and approved by the Committee of Teaching and Research of the Nuclear Medicine School. The study was planned and conducted in full accordance with the current version (2008) of the Declaration of Helsinki.
Measurements and Procedures.
The height and weight of each patient were measured while she stood without shoes, wearing light clothing. The body mass index (BMI) was calculated as her weight in kilograms divided by the square of her height in meters (kg/m²).
Patients were asked about previous fragility fractures; glucocorticoid, estrogen, or bisphosphonate treatment; a diagnosis of rheumatoid arthritis; a history of hip fracture or DXA diagnosis of osteoporosis in their parents; smoking status; alcohol intake; and physical activity. Calcium intake was assessed through a Spanish version of the food frequency questionnaire developed and validated by Magkos et al. [18], using Argentine food composition tables for the calcium content of each item included [19].
DXA scans of the lumbar spine (L1-L4) and one hip (usually the left) were performed using a Lunar Prodigy densitometer (GE Healthcare Lunar, Madison, WI). Measurements were performed by one of two technicians, both of whom were certified by the International Society for Clinical Densitometry. Stability of the bone densitometer throughout the study (in vitro long-term precision) was checked through daily measurement of a spine phantom according to the manufacturer's instructions. Short-term in vivo precision was estimated by DXA scans repeated after repositioning the patient, with two measures at each site in 30 patients, according to the International Society for Clinical Densitometry Official Positions 2007 [20].
Phantom measurements showed stability of the DXA equipment throughout the study, with a coefficient of variation of 0.5%. The combined in vivo precision for both technicians was 1.5% for the lumbar spine, 1.8% for the femoral neck, and 1.4% for the total hip.
Patients were classified as normal, osteopenic, or osteoporotic according to the World Health Organization criteria [2], based on the lowest T-score at the lumbar spine, the femoral neck, or the total hip. Reference values were taken from the National Health and Nutrition Examination Survey (NHANES III), which is the recommended reference database for Argentine patients [21].
The OST score was calculated as 0.2 × (weight in kg − age in years), rounded to the nearest integer. For example, a 64-year-old woman weighing 50 kg has an OST score of 0.2 × (50 − 64) = −2.8, which rounds to −3, and a 52-year-old woman weighing 67 kg has an OST score of 0.2 × (67 − 52) = 3.
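As a quick sanity check of the formula, the calculation can be scripted in a few lines of Python (a minimal sketch; note that Python's built-in round() rounds ties to the nearest even integer, which may differ from other rounding conventions):

```python
# OST score as stated above: 0.2 * (weight - age), rounded to nearest integer.
def ost_score(weight_kg: float, age_years: float) -> int:
    return round(0.2 * (weight_kg - age_years))

# Worked examples from the text:
print(ost_score(50, 64))  # 0.2 * (50 - 64) = -2.8 -> -3
print(ost_score(67, 52))  # 0.2 * (67 - 52) = 3.0  -> 3
```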
Since diagnosis of osteoporosis by DXA is based on a T-score at −2.5 or below at any of the recommended sites (lumbar spine, femoral neck, or total hip), the lowest T-score was taken to dichotomously assign each result to a nonosteoporotic or osteoporotic group.
Statistical Analysis.
Data were analyzed with the commercial statistical software Prism 5.04 for Windows and InStat3 (GraphPad, San Diego, CA). The D'Agostino-Pearson omnibus normality test was routinely used to assess whether data departed significantly from a Gaussian distribution. If this was the case, data are presented as median (25th-75th percentile interquartile range). Otherwise, data are expressed as mean ± standard deviation. Comparison of OST scores between women with a DXA diagnosis of osteoporosis (T-score of −2.5 or below at any site) and those without it was performed with the Mann-Whitney test. Simple linear regression was employed to assess the relationship between OST score and the lowest T-score for each patient (lumbar spine, femoral neck, or total hip). A receiver operating characteristic (ROC) curve was used to assess the area under the curve (AUC), sensitivity, and specificity. Negative and positive predictive values were calculated. The diagnostic odds ratio was calculated by Chi-square test, and results are displayed as mean (95% confidence interval = CI95). Significance level was set at 0.05.
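The ROC analysis described here is straightforward to reproduce with standard tools; the sketch below uses scikit-learn on illustrative arrays (not the study data). Because lower OST scores indicate higher risk, the score is negated before computing the curve:

```python
# ROC/AUC sketch for an inverted risk score such as the OST.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

ost = np.array([-3, -1, 0, 1, 2, 3, 5])          # hypothetical OST scores
osteoporotic = np.array([1, 1, 1, 0, 1, 0, 0])   # hypothetical DXA labels

auc = roc_auc_score(osteoporotic, -ost)          # negate: lower OST = higher risk
fpr, tpr, thresholds = roc_curve(osteoporotic, -ost)
print(f"AUC = {auc:.2f}")
```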
Results and Discussion
The characteristics of the sample are shown in Table 1. Out of 4,343 patients, a total of 2,513 women were classified as nonosteoporotic, while the remaining 1,830 women were classified as osteoporotic.
Among the main risk factors detected, other than advanced age or low weight, low calcium intake (less than 1000 mg/day) was found in 70% of women, essentially corroborating the result of a previous study in the same population [22]. Fragility fractures were recalled by 16.5% of the patients, sedentarism by 15%, a family history of osteoporosis by 10%, long-term glucocorticoid therapy by 6.2%, and rheumatoid arthritis by 1.8%. Twelve percent of the patients were cigarette smokers at the time of the study, but high alcohol intake was reported by less than 1%.
In Table 2 the absolute number and the proportion of women whose T-score was at −2.5 or below at the lumbar spine, the femoral neck, the total hip, or a combination of two or all three sites are shown. Of the 1,830 women with a diagnosis of osteoporosis, T-scores of −2.5 or below were found in 1,207 at the lumbar spine, in 569 at the femoral neck, and in 1,063 at the total hip. These figures correspond to the total number of patients with T-scores at −2.5 or below at each site. For example, the figure of 1,207 for the lumbar spine includes 557 women with a T-score at −2.5 or below at the lumbar spine only, plus 125 women with T-scores at −2.5 or below at both lumbar spine and femoral neck, plus 266 women with T-scores at −2.5 or below at both lumbar spine and total hip, plus 259 women with T-scores at −2.5 or below at lumbar spine, femoral neck, and total hip. For the whole group, OST scores ranged from −11 to +15 (−11 to 7 in osteoporotic and −7 to 15 in nonosteoporotic women). Women with a diagnosis of osteoporosis had significantly lower OST scores than those without it: median OST scores were, respectively, 0.0 (−2 to +2) versus 2.0 (0.0 to 4.0); P < 0.0001. The result of the ROC analysis is shown in Figure 1. The AUC was 0.71 (P < 0.0001). Table 3 displays sensitivity and specificity for cut-off values from −3 to 3. For an OST score cut-off value of 2, the positive predictive value was 52% and the negative predictive value was 79% in the present sample. If women with an OST score of 2 or lower are considered at high risk, and those above 2 are deemed at low risk, the unadjusted odds ratio for a diagnosis of osteoporosis by DXA of the high-risk group versus the low-risk group is 4.06 (CI95 3.51 to 4.71). The AUC obtained from a ROC analysis can range (expressed as a percentage) from 0 to 100, with 50 being the line of identity. Since sensitivity and specificity are both independent of disease prevalence, the same applies to the AUC [23]. AUCs at or above 70% are deemed acceptable for a screening test. In the present study, the AUC was 71%.
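The reported metrics at the cut-off of 2 can be reconstructed from an approximate 2×2 table; the cell counts below are back-calculated from the stated sensitivity, specificity, and group sizes, so treat them as illustrative rather than exact:

```python
# Approximate 2x2 table at the OST <= 2 cut-off (illustrative counts).
tp, fn = 1532, 298    # osteoporotic women with OST <= 2 / OST > 2
fp, tn = 1407, 1106   # nonosteoporotic women with OST <= 2 / OST > 2

sensitivity = tp / (tp + fn)        # ~0.837
specificity = tn / (tn + fp)        # ~0.44
ppv = tp / (tp + fp)                # ~0.52
npv = tn / (tn + fn)                # ~0.79
odds_ratio = (tp * tn) / (fp * fn)  # ~4.0
print(sensitivity, specificity, ppv, npv, odds_ratio)
```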
The sensitivity and specificity of any given test vary inversely according to the chosen cut-off value. In previous reports, reviewed by Rud et al. [9], the sensitivity of the OST for prediction of T-scores at −2.5 or below for any region (lumbar spine, femoral neck, or total hip) has a median of 86% (range of 53% to 95%) in white women and 82% (range 79% to 82%) in Asian women. In the present study, using a cut-off value of 2, the sensitivity in Argentinian women was 83.7%, which is intermediate between the medians for white and Asian women.
On the other hand, the specificity of the OST for any site has a median of 40% (range 34% to 72%) for white women but a higher median, of 64% (range 60% to 78%), for Asian women [9]. The estimated specificity in the present study with a cut-off at 2 was 44%, closer to the specificity for white women than for Asian women.
The reason why the values of sensitivity and particularly of specificity for Argentinian women fell between those for white and Asian women is not clear, but it may be related to the fact that about 80% of the Argentine population has European ancestry, with minor but significant contributions from other ethnic groups [24].
Table 3: Sensitivity and specificity of the OST score for predicting a T-score of −2.5 or below at any site, according to the cut-off value.
Figure 2: A plot of the OST score versus the lowest T-score (at any site). There is a significant linear relationship between both scores (r² = 0.114; P < 0.0001). The rectangle highlights the patients with T-scores at −2.5 or below who had OST scores above 2 (n = 298).
One limitation of this study concerns whether the sample is representative of Mendoza's postmenopausal women. Participants referred by their physicians for BMD measurement might have more risk factors than postmenopausal women in the general population. In a recent prospective study of 720 postmenopausal women undergoing their first DXA scan, 44% were at or above 65 years of age. Of those below that age, 55% had at least one risk factor (F. D. Saraví, unpublished data).
Another reason why the sample may not accurately depict the general population of postmenopausal women of Greater Mendoza is socioeconomic status and educational level. Recent estimates place the fraction of the population below the poverty line at about 10% for Argentine urban areas [25]. Additionally, according to official statistics, 37% of the population does not have health insurance [26]. Although poor women or those without health insurance can still get a DXA scan through agreements between our center and the public hospital system, in practice their access is limited. These women may differ from the ones included in the present study in their educational level, nutrition, lifestyle, and prevalence of osteoporosis.
There are several clinical instruments for the assessment of the risk of osteoporosis. Most of them consider additional factors other than age and weight, for example, the ABONE (Age, Bulk, No Estrogen) [27]; the Osteoporosis Risk Assessment Instrument (ORAI) [5], which incorporates age range, body weight (dichotomously), and estrogen therapy; the Simple Calculated Osteoporosis Risk Estimation (SCORE), which includes race other than black, rheumatoid arthritis, nontraumatic fractures, age, weight, and estrogen therapy [28]; and the Osteoporosis Index of Risk (OSIRIS), which takes into account body weight, age, history of nontraumatic fractures, and estrogen therapy [29].
Comparisons of these instruments have been performed by several researchers. Geusens et al. [30] found that OST predicted bone mass as well as ORAI and SCORE did in women from the United States and the Netherlands. Similarly, a comparison performed in a large sample of Belgian women found that OST "performed as well as the more complex risk assessment indices (SCORE, ORAI, and OSIRIS) in identifying women at low risk of osteoporosis" [11]. In a 2004 review article, Wehren and Siris also stated that OST, "the simplest of the instruments, performs as well as more complex tools" [3]. Essentially the same was found in a study of 986 postmenopausal Moroccan women [31]. In a study of Canadian women, the performance of OST was as good as that of ORAI [5]. In a systematic review, it is stated that OST shows higher accuracy than ORAI and SCORE concerning the "any region" BMD target. The authors noted, however, that overall "accuracy is similar in white women, albeit the trade-off between sensitivity and specificity may differ between OST and comparator CDRs" (Clinical Decision Rules) [12]. A very recent publication compared OST, ORAI, and ABONE and reported that OST performed best in US white women [32]. In the American College of Preventive Medicine Position Statement on screening for osteoporosis it is stated about the OST, "The simplicity of this screening tool and its validation in both genders and in various races account for its popularity and widespread use in selecting patients for confirmatory BMD testing" [4].
Conclusions
In the studied sample of postmenopausal women from Mendoza, Argentina, the OST showed a performance comparable to that reported for European and US white women. The overall performance of the OST was adequate for a clinical screening method simple enough to be used both by patients and physicians. Of course, its use does not preclude careful consideration of other clinical risk factors for osteoporosis.
Conflict of Interests
The author declares no conflict of interests. | 2016-05-12T22:15:10.714Z | 2013-03-04T00:00:00.000 | {
"year": 2013,
"sha1": "ef21066d3dbfd27664d8052dd669ef34343d75fe",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jos/2013/150154.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2f7cf81530c8219b52fdf5bd97c066c237353f0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268641555 | pes2o/s2orc | v3-fos-license | The Electrochemical and Structural Changes of Phosphorus-Doped TiO2 through In Situ Raman and In Situ X-Ray Diffraction Analysis
Doping is a widely employed technique to enhance the functionality of lithium-ion battery materials, tailoring their performance for specific applications. In our study, we employed in situ Raman and in situ X-ray diffraction (XRD) spectroscopic techniques to examine the structural alterations and electrochemical behavior of phosphorus-doped titanium dioxide (TiO2) nanoparticles. This investigation revealed several notable changes: an increase in structural defects, enhanced ionic and electronic conductivity, and a reduction in crystallite size. These alterations facilitated higher lithiation rates and led to the first observed appearance of LiTiO2 in the Raman spectra due to anatase lithiation, resulting in a reversible double-phase transition during the charging and discharging processes. Furthermore, doping with 2, 5, and 10 wt % phosphorus resulted in an initial increase in specific capacity compared to undoped TiO2. However, higher doping levels were associated with diminished capacity retention, pinpointing an optimal doping level for phosphorus. These results underscore the critical role of in situ characterization techniques in understanding doping effects, thereby advancing the performance of anode materials, particularly TiO2, in lithium-ion batteries.
■ INTRODUCTION
These batteries consist of four primary components: the anode, cathode, separator, and electrolyte, with the anode being crucial for overall battery performance. Among the various TiO2 structures, anatase is noteworthy, offering a specific capacity comparable to graphite (around 330 mAh g−1) and a discharge voltage plateau near 1.7 V vs Li+/Li. However, the performance of anatase is somewhat limited by the tetragonal LiTiO2 phase, which impedes lithium-ion diffusion. 10,11 During lithiation, a Li-poor tetragonal anatase LixTiO2 with x less than 0.03 and an I41/amd space group forms initially. A second phase transition to the orthorhombic Li0.55TiO2 phase in the Imma space group occurs at x = 0.55, where only half the octahedral sites are filled with lithium ions. 10 At deeper lithiation stages, a tetragonal structure emerges when these octahedral sites are fully occupied. 12 The rapid advancement in nanostructured TiO2 synthesis has led to various enhancements, such as optimized morphology, particle size, 13 porosity, 14 and carbon coating, all contributing to improved electrochemical behavior.
Notably, hierarchical structures with more exposed surfaces, open channels for electrolyte penetration, and shortened ion-diffusion paths significantly impact the electrochemical activity. Wu et al. prepared petal-like TiO2 with a particle size of 12 nm and a BET surface area of 28.4 m² g−1, which demonstrated excellent cycling stability and a high specific charge capacity of 326 mAh g−1 at 20 mA g−1. 15 Similarly, TiO2 nanowires have achieved a reversible capacity of about 305 mAh g−1. 16 Furthermore, the study of 3D porous materials, with their high porosity and specific surface area, has been gaining traction. These materials can accommodate more Li-ions, offering higher capacities. 17 Despite these advancements, TiO2's low electrical conductivity and poor rate capability remain challenges. Doping TiO2 with elements like F, P, N, Zr, etc. 18,19 has been shown to enhance electrochemical performance by providing more channels and active sites for Li-ion transport. For instance, Sn-doped TiO2 nanotubes exhibited higher capacities and improved performance compared to undoped versions. 20 Zr4+/F− co-doping has been used to produce Ti0.97Zr0.03O1.98F0.02, which has a reversible capacity of 163 mAh g−1 at 1C, compared with only 34 mAh g−1 for Ti0.97Zr0.03O2. The delithiation capacity of Zr4+/F− codoped TiO2 is still as high as 138 mAh g−1 even at a cycling rate of 10C. 18 Phosphorus-doped TiO2 synthesized using recycled human urine displayed an initial discharge capacity of 214 mAh g−1, maintaining 159 mAh g−1 even after 100 cycles. 21,22 In our study, we explored the structural changes in anatase TiO2 and phosphorus-doped TiO2 nanoparticles during lithiation and delithiation. Samples with varying P-doping percentages were prepared and evaluated as anode materials for lithium-ion batteries. We analyzed the impact of P-doping on the crystal structure and morphology using X-ray diffraction (XRD), scanning electron microscopy (SEM), and Raman spectroscopy. Additionally, in situ Raman spectroscopy and in situ XRD were employed to investigate the operational mechanism of titanium oxide during charging and discharging as well as the influence of doping on the lithiation process.
■ RESULTS
Characterization of the P-Doped TiO2. The XRD pattern and Raman spectrum of P-doped TiO2 are depicted in Figure 1. The four analyzed materials primarily consist of anatase TiO2. The nondoped TiO2 shows pronounced crystallinity, with distinct peaks at 2θ values of 25.37°, 37.97°, 48.06°, 54.22°, 54.53°, 62.59°, 68.97°, 70.07°, and 75.15°. These peaks correspond to the anatase planes (101), (004), (200), (105), (211), (204), (116), (220), and (215). A similar peak pattern is observed in the phosphorus-doped TiO2 nanoparticles. In the nondoped sample, minor traces of brookite TiO2 are identified by a peak at approximately 30.84° in the XRD pattern and weak Raman bands between 240 and 370 cm−1. These brookite traces are absent in P-doped TiO2. The crystallite sizes for all samples were calculated using the Debye-Scherrer formula, focusing on the (101) peak, and are included in the figure. Pure TiO2 showed a crystallite size of 7.7 nm, whereas the 2%, 5%, and 10% P-doped samples exhibited sizes of 7.2, 7.0, and 6.5 nm, respectively. This reduction in crystallite size leads to peak broadening in the XRD patterns and more phonon confinement, as evidenced by the broadening of the Raman peaks. 23 The peak fitting of all the Raman spectra, as well as the peak positions and fwhm, are presented in Figure S2 and Table S1; the results confirm the phonon confinement through the increasing fwhm and red shift of the Raman bands.
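For reference, the Scherrer estimate reduces to a one-line calculation; the sketch below assumes Cu Kα radiation and a shape factor K = 0.9, and the FWHM value is illustrative (it is not reported in the text):

```python
# Debye-Scherrer crystallite size: D = K * lambda / (beta * cos(theta)).
import numpy as np

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15418, K=0.9):
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)  # FWHM in radians; instrumental broadening ignored
    return K * wavelength_nm / (beta * np.cos(theta))

print(scherrer_size_nm(25.37, 1.1))  # ~7.4 nm for an anatase (101) peak
```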
To evaluate the effect of phosphorus doping on the electrochemical performance of TiO2 nanoparticles, cells were fabricated with TiO2 and P-doped TiO2 (2%, 5%, and 10%) as anodes. The second-cycle galvanostatic voltage profiles (Figure 2a) indicate that the discharge capacity and Coulombic efficiency, along with lithiation reversibility, improve with increasing doping levels. However, capacity retention analysis (Figure 2b) shows that 2% P-doping offers the best retention, which is clearer in the non-normalized capacity retention presented in Figure S3, suggesting that slight doping enhances long-term reversibility without significantly affecting capacity loss in the initial cycles. In contrast, higher doping levels correlate with a consistent decrease in capacity retention despite initially higher capacities. Therefore, the optimal P-doping level for these materials appears to be around 2%, striking a balance between increased capacity and maintained capacity retention.
In situ Raman analysis of various P-doping levels of nanosized TiO2 was conducted to investigate crystal structure changes and electrochemical reactivity during lithium insertion/extraction. This examination provides a comprehensive understanding of the behavior of these materials under different conditions.
In Situ Raman Investigation of P-Doped TiO2 in the First Cycle. In the initial cycle, the in situ Raman results for the TiO2 electrode (shown in Figure 3a) align with previous observations by Hardwick et al. 24 The emergence of the Raman-active modes A1g, 3Eg, and 2B1g around 1.75 V versus Li+/Li signals the formation of the first Li-poor tetragonal anatase LixTiO2 phase (P1), identified by bands at 141 (Eg), 195 (Eg), 396 (B1g), 512 (A1g), 522 (B1g), and 628 cm−1 (Eg). Lithium insertion leads to shifts in the TiO2 anatase (P1) bands: the strong (Eg) band shifts to a higher wavenumber, while the other bands exhibit red shifts, continuing until x = 0.07. Further lithium insertion results in a gradual decrease of the (P1) phase bands until the (Eg) band disappears, coinciding with the onset of a voltage-profile plateau at approximately 1.75 V, marking the transition to the orthorhombic Li0.5TiO2 (P2) phase, indicated by new Raman bands.
In the 2% and 5% P-doped TiO2 materials (Figure 3b,c), similar phase transformations occur. Notably, the Raman measurement for the 2% P-doped sample did not commence at the start of discharge, but spectral analysis and reaction reversibility confirm the initial presence of the Li-poor anatase phase. As lithium insertion continues, the intensity of the (P2) phase bands diminishes and broadens, with weak, wide peaks emerging in the 200−300 cm−1 range and near 520 cm−1. These are attributed to the Li-rich LiTiO2 phase (P3). Previous studies report that this phase's formation on particle surfaces hinders full anatase Li0.5TiO2 lithiation due to low ionic conductivity. 8,10,25 At the discharge end, the (P2) and (P3) phases coexist. 11,24 However, in 2% and 5% P-doped TiO2, the Li-rich phase (P3) is more pronounced at the discharge end, indicating that P-doping enhances titanium oxide's ionic conductivity and leads to deeper and more reversible lithiation. This is evidenced by the clearer Raman spectra of the (P3) phase in Figure 3b,c; the effect is more pronounced in 5% P-doped TiO2, suggesting better conductivity enhancement. This is also the first reported clear Raman spectrum of the Li-rich TiO2 phase (P3). 11 Moreover, in the 10% P-doped material (Figure 3d), the third phase (P3) was clearly visible after the voltage plateau at 1.5 V, with no traces of the (P2) phase as observed in the 2% and 5% P-doped materials. This shows that higher lithiation levels are achieved due to the high P-doping ratio and therefore confirms the increase in the obtained first discharge capacity (Figure 2).
During the charging process and lithium extraction, the phase transitions are reversed, and the (P2) peak reappears and becomes stronger. However, in the nondoped material, the (P2) phase does not fully vanish at the end of charge, highlighting the limited reversibility of TiO2 anatase, a finding supported by Hardwick et al. 24 Conversely, P-doped titanium oxides exhibit significantly improved reversibility, with the tetragonal nonlithiated TiO2 phase fully reappearing at the end of charge and no traces of the orthorhombic phase (P2), explaining the increased first-cycle Coulombic efficiencies in the P-doped materials (Figure 2).
The Raman bands for all phases are broader in 5% P-doped TiO2 than in 2% P-doped TiO2, implying a larger crystallite size and fewer structural defects in the 2% P-doped material. 26 The 10% P-doped material shows slight improvements in the lithiation reaction compared to the 2% and 5% P-doped materials, with broader Raman peaks due to the smaller crystallite size. At the end of charge, traces of the third phase are still visible.
In summary, P-doping enhances lithiation levels by improving the ionic conductivity of the third phase (P3), LiTiO2, overcoming challenges to full lithiation. 25
In Situ Raman of P-Doped TiO2 in the Fourth Cycle. The Raman spectrum during the fourth cycle of the nondoped TiO2 (Figure 4a) reveals the coexistence of both the Li-poor tetragonal anatase (P1) and orthorhombic Li0.5TiO2 (P2) phases at the beginning of discharge. This observation indicates partial irreversibility, as some lithium ions inserted during the first cycle were not fully extracted. Notably, although the (P1) phase disappears early during lithium insertion and reemerges near 3 V at the end of charging, the (P2) phase remains predominant. Additionally, the Li-rich LiTiO2 phase (P3) becomes evident near 500 cm−1 toward the end of lithiation at approximately 1 V, suggesting that TiO2 does not achieve full lithiation. The dominance of the (P2) phase and the incomplete lithiation/delithiation reversibility contribute to the lower capacity of nondoped TiO2 compared to its P-doped counterparts.
In contrast, the P-doped TiO2 maintains a reversible reaction during cycling (Figure 4b,c). At the end of the charge process of the 2% P-doped TiO2, only (P1) vibration bands were seen, indicating complete delithiation of the material. Furthermore, the Li-rich (P3) phase is consistently achieved at lower voltages (1 to 1.5 V vs Li+/Li), though the (P2) phase bands remain present in the 5% P-doped TiO2. This suggests a lithiation level x varying from 0 to nearly 0.9, particularly at the particle surface. Despite not reaching the theoretical capacity, possibly due to larger particle sizes (around 100 nm), significant improvements are observed compared to nondoped TiO2, which achieves the theoretical capacity only with small particles around 7 nm. 7 The analysis of 10% P-doped TiO2 (Figure 4d) shows favorable reversibility across all three phases. However, the broadened peaks across all phases indicate very low crystallinity and substantial structural defects. 27 Traces of the (P3) phase are present in the background of all spectra in the fourth cycle. Due to area normalization and their relative weakness compared to the (P1) and (P2) phase spectra, the (P3) features are less distinct. However, their presence will be further elucidated through hierarchical clustering and linear regression analysis of each spectrum against the Raman spectra of the phases. The deep lithiation observed in the 10% P-doped TiO2 leads to higher initial capacities but reduced cycling stability, which is attributed to significant volume expansion during lithium-ion insertion. This phenomenon explains the lower capacity retention observed in highly P-doped materials (Figure 2).
Quantification of Phases upon Cycling. The data from the in situ Raman spectroscopy of TiO2 and P-doped TiO2, collected over the first four cycles, were processed using Python. This involved organizing, smoothing, background subtraction, normalization, and peak detection within each spectrum. Python's flexibility and toolset enabled more detailed data extraction, including phase detection and tracking of changes during cycling. The methodologies and specific Python packages used are elaborated in the Methods section.
A key tool in detecting the existing phases is hierarchical clustering (HC). Utilizing 1 − c (where c is the correlation) as the distance measure, HC was applied to all spectra. The linkage distance threshold was adjusted to prevent misclassification of combined spectra as new phases, and outlier clusters with only two or three spectra were disregarded.
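A minimal sketch of this clustering step with scipy is shown below; the spectra array is a placeholder for the measured, area-normalized intensities, and the linkage method and threshold are illustrative choices:

```python
# Hierarchical clustering of spectra with 1 - Pearson correlation as distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
spectra = rng.random((20, 500))              # placeholder (n_spectra, n_points)

dist = pdist(spectra, metric="correlation")  # condensed 1 - c distance matrix
Z = linkage(dist, method="average")
labels = fcluster(Z, t=0.3, criterion="distance")  # tunable distance threshold
print(labels)
```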
The HC analysis identified three distinct clusters of spectra. For each cluster, one spectrum was chosen as representative, based on the maximum sum of correlations within its cluster and the minimum correlation with spectra from other clusters. These representative spectra for each phase are shown in Figure 5, and their peak fitting is presented in Figure S2. The blue spectrum represents the nonlithiated anatase TiO2 phase, identifiable at high voltages near 3 V. Peak fitting using a Lorentzian distribution identified five Raman modes at 141, 396, 512, 522, and 629 cm−1, in addition to a visible one around 200 cm−1, consistent with recent studies. 28 Two small peaks around 890 and 900 cm−1 are attributed to the electrolyte, similar to the results reported by Hardwick et al. The second spectrum is attributed to the orthorhombic Li0.5TiO2 phase, corresponding to the plateau at 1.75 V; it has been investigated in many recent works. 11,24,29 The peak fitting of this spectrum discerns Raman bands at 97, 127, 153, 173, 190, 224, 238, 321, 335, 348, 360, 419, 534, 563, 605, 638, 810, 907, and 973 cm−1. Laskova et al. 29 have theorized 42 Raman-active vibrations (14Ag + 14B1g + 7B2g + 7B3g) using DFT calculations for orthorhombic Li0.5TiO2, which is higher than the 9 modes (3Ag + 3B2g + 3B3g) predicted by Hardwick et al. 24 using factor group analysis. Using the experimental spectra, 20 modes are detected; the peak positions are very similar, but there are still differences: the B1g and B2g peaks at around 172 cm−1 are not separated as in previous works, 11,24,29 and the B2g peak at 237 cm−1 has a weaker intensity but appears more prominently in the (P3) phase.
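The Lorentzian fitting itself can be done with scipy's curve_fit; the sketch below fits a single synthetic band, whereas the analysis above would fit a sum of Lorentzians with one (amplitude, position, width) triple per band, with illustrative initial guesses:

```python
# Single-Lorentzian fit as a building block for multi-peak Raman fitting.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

x = np.linspace(100, 700, 1200)                       # Raman shift / cm^-1
y = lorentzian(x, 1.0, 141.0, 6.0) + 0.01 * np.random.randn(x.size)

popt, pcov = curve_fit(lorentzian, x, y, p0=[1.0, 140.0, 5.0])
print(popt)  # fitted amplitude, peak position, and half-width
```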
The third (red) spectrum is attributed to the tetragonal LiTiO2 phase, appearing at lower voltages near 1 V and toward the end of the voltage profile. This spectrum is weaker, especially in nondoped TiO2, with Lorentzian peak fitting revealing 12 peaks. Some peaks of the orthorhombic phase are visible in the background, but new, stronger peaks are observed around 213, 260, and 491 cm−1, which are the main characteristics of this phase. The broadness of these peaks suggests that each may comprise several unresolved bands, but the overall spectrum is too weak to characterize them further.
Raman spectroscopy, a robust vibrational spectroscopic method, was employed for quantitative phase analysis. 30 Linear regression was used with the Raman spectra of the three identified phases to determine phase-fraction changes during charge/discharge cycles. For each spectrum measured during delithiation, linear coefficients for the three phases were calculated and normalized to sum to 1, estimating the phase fractions. It is important to note that this analysis is most reliable when the coefficients are near 100% for one phase and 0% for the others; intermediate values provide comparative rather than absolute data. The state of charge was calculated linearly from the material's charge, associating a 100% state of charge with 3 V and 0% with 1 V. The linear regression results in Figure 6 are in accordance with the results described in Figure 3. Initially, all materials exhibit similar behavior, particularly during the first discharge. However, with increased P-doping levels, the TiO2 materials achieve higher lithiation states. Over repeated cycles, the fraction of the tetragonal LiTiO2 phase increases with higher phosphorus doping levels. Notably, the highly doped material does not fully revert to the pure anatase phase at a 100% state of charge, hinting at long-term performance degradation and lower capacity retention, as observed in Figure 2. Particularly in 10% P-doped TiO2, increased cycling leads to deeper lithiation, potentially reducing conductivity due to significant unit-cell volume changes and a loss of electrical contact between particles.
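A minimal sketch of this phase-fraction estimate is given below. The text describes plain linear regression; the sketch uses non-negative least squares instead to keep the fractions physical, and the reference matrix and measured spectrum are placeholders:

```python
# Phase fractions from a linear combination of three reference spectra.
import numpy as np
from scipy.optimize import nnls

refs = np.abs(np.random.randn(500, 3))        # columns: P1, P2, P3 references
measured = refs @ np.array([0.2, 0.7, 0.1])   # synthetic "measured" spectrum

coeffs, residual = nnls(refs, measured)       # non-negative linear coefficients
fractions = coeffs / coeffs.sum()             # normalize to sum to 1
print(fractions)                              # ~[0.2, 0.7, 0.1]
```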
The Effect of Phosphorus-Doping Level on the Vibrational Modes of Li0.5TiO2. As depicted in Figure 7, the overlapped B2g and B3g bands of the Li0.5TiO2 phase at the different P-doping percentages were compared during the charge process of the first and fourth cycles. In the first cycle, with increasing P-doping, these bands shift toward lower wavenumbers and their intensities significantly decrease. This phenomenon is likely due to the reduced particle size and increased structural defects, which facilitate Li diffusion. Phonons interact with charge carriers through electron−phonon coupling. During the subsequent delithiation processes, lithium ions are easily extracted from the material's surface, but some remain trapped within the inner core, 35,36 the solid electrolyte interphase (SEI), 37 and/or in the Li-rich LiTiO2 phase. This irreversibly trapped lithium acts as a doping agent, causing changes in the Raman bands. In particular, peak positions for both nondoped TiO2 and P-doped TiO2 were observed to converge to around 165 cm−1. For the nondoped and 2% P-doped materials, the band shifts to lower wavenumbers, which can be assigned to the effect of the lithium ions trapped in the structure. For 5% and 10% P-doped TiO2, the peak was blue-shifted, which can be attributed to the effect of phosphorus doping.
The vibrational modes of the Li0.5TiO2 crystal structure can be affected by dopants and/or impurities that disrupt the crystal lattice symmetry. Figure 7 shows that trapped lithium ions induce stress, evidenced by the red shift of these bands, contrasting with the blue shift caused by phosphorus doping. The shift of the peak between the first and fourth cycles is shown in Figure 8. An optimal P-doping level is identified around 3.3 wt % (highlighted in red), where the combined effects of trapped lithium ions and P-doping result in minimal peak movement during cycling.
Another effect, a change in intensity accompanied by broadening, is observed in Figure 7. For the Li0.5TiO2 bands of the nondoped, 2%, and 5% P-doped materials, this change is due to the change in the material's conductivity, which alters the phonon lifetime through phonon−electron coupling. The increasing intensity in nondoped anatase is caused by decreasing conductivity due to the creation of insulating LiTiO2, mainly on the surface. P-doping activation by lithiation causes an intensity reduction and slight broadening, increasing the material's conductivity. However, at high P-doping levels, conductivity decreases due to extensive lithiation, leading to significant unit-cell volume changes. These volume changes, confirmed by the subsequent in situ XRD experiments, result in structural instability, creating "dead" lithium and poor electrical contact.
In Situ XRD of P-Doped TiO2. In situ Raman spectroscopy, with its limited penetration depth (typically in the range of a few microns 38,39), is ideal for studying the surface layer and interface of battery materials. Its analysis focuses on a small area with a radius of about 2 μm, making it sensitive to tiny inhomogeneities in the material lithiation related to the distance from the current-collector mesh and the electric interaction with nearby particles. In contrast, in situ XRD has a deeper penetration depth than Raman spectroscopy, 40 often ranging from several microns to tens of microns, and the analysis is performed on a larger area of the material, around 1.5 cm long and 0.5 cm wide. This makes XRD better suited for investigating overall material characteristics, including crystal structures and phase transitions. In our study, special attention was given to the selection of Raman analysis points, and in situ XRD was performed to gain deeper insights into the reaction mechanisms. In addition, the in situ XRD cell used a higher mass of active material (∼15 mg) compared to the in situ Raman cell (∼3 mg). A beryllium current collector was used for XRD, differing from the copper mesh used in Raman spectroscopy. To improve the electrical conductivity, 20% carbon black was added to the material.
In situ X-ray diffraction patterns recorded during the initial two cycles for each P-doping level are presented in Figure 9. For the nondoped material (Figure 9a), the crystalline peaks of tetragonal TiO2 are evident, specifically the (101) peak at 25.4° and the (004) peak at 38.1°. These peaks are maintained through the initial voltage drop and a small plateau around 1.75 V. As lithiation progresses, a slight shift and decrease in intensity of the (004) peak are observed until it disappears entirely. Concurrently, a new peak emerges, shifted by approximately 1.7° to higher angles, corresponding to the (004) diffraction peak of Li0.5TiO2. This shift indicates a decrease in the unit cell's c parameter from about 9.42 to 9.03 Å. Simultaneously, the (101) diffraction peak gradually fades, replaced by a new, weaker peak aligning with the (011) peak of the orthorhombic Li0.5TiO2 phase. As lithiation continues, the (004) diffraction peak of Li0.5TiO2 slightly shifts to higher angles and diminishes in intensity. Meanwhile, a new peak, associated with the (004) diffraction peak of LiTiO2, emerges and intensifies in both strength and width, reaching a maximum at the voltage limit of 1 V. The (011) peak transitions back to the (101) peak of LiTiO2 around 24.5°, signifying a structural shift to a tetragonal crystal structure akin to that of anatase TiO2. In this LiTiO2 structure, the presence of lithium ions causes the unit cell to expand along the a and b axes, leading to a reduced unit-cell c parameter and shifting the (004) peak to higher angles. All three phases exhibit slight peak shifts characteristic of solid-solution behavior, although this effect is minimal for the nondoped material in the first cycle.
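The c-parameter values quoted here follow directly from Bragg's law applied to the (004) reflection (d = λ / (2 sin θ) and, for a tetragonal (004) plane, c = 4d); a quick numerical check with approximate peak positions:

```python
# c parameter from the (004) peak position via Bragg's law.
import numpy as np

def c_from_004(two_theta_deg, wavelength_A=1.5418):
    theta = np.radians(two_theta_deg / 2.0)
    d = wavelength_A / (2.0 * np.sin(theta))  # interplanar spacing of (004)
    return 4.0 * d                            # c = 4 * d(004) for a tetragonal cell

print(c_from_004(38.1))        # ~9.4 A, anatase TiO2
print(c_from_004(38.1 + 1.7))  # ~9.1 A, after the shift toward Li0.5TiO2
```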
During the first charging cycle, the transformation of the material back to TiO2 is initiated, observable as a voltage increase following a plateau at 1.9 V. However, this transformation exhibits asymmetry compared to the lithiation process. Notably, the (004) peak of Li0.5TiO2 briefly appears during delithiation, but with a significantly lower intensity than during lithiation. Additionally, the (011) peak, typically not visible, suggests the emergence of an amorphous behavior in the material. A considerable amount of lithium remains trapped within the material structure and is not fully extracted, as evidenced by the reduced electric capacity during charging and the diminished intensity of the TiO2 (004) peak compared to its precycling state. This results in a lower Coulombic efficiency. It is important to note that the quantity of lithium extracted from the material closely matches that inserted during the initial transformation step from TiO2 to Li0.5TiO2. This observation points to the partial irreversibility of lithium insertion in the tetragonal LiTiO2 phase during the first cycle.
In the second cycle of the nondoped material, while the phase transformations mirror those of the first cycle, several differences are noted. The movements of the diffraction peaks indicate more pronounced solid-solution behavior. The Coulombic efficiency in this cycle is improved compared to the first, yet the capacity mirrors that of the initial delithiation. This underscores the likelihood that a significant portion of the material is irreversibly transformed during the first lithiation into the tetragonal LiTiO2 phase, subsequently impeding further lithiation.
In situ XRD results for the various P-doping levels, as shown in Figure 9b−d, reveal that all doping levels undergo the general transformation sequence TiO2 → Li0.5TiO2 → LiTiO2. However, notable differences are observed among these doping levels. First, materials with P doping achieve deeper lithiation states, evidenced by both the calculated capacities and the increased intensity of the (004) peak of the tetragonal LiTiO2 phase. This is particularly pronounced in the 10% P-doped material, which also exhibits a more significant peak shift. Second, the Coulombic efficiency in the first cycle shows considerable improvement across the P-doped samples, suggesting a reduction in the kinetic barriers associated with the third phase, LiTiO2. Moreover, the diffraction peaks of the orthorhombic Li0.5TiO2 phase become more pronounced during delithiation, which is especially noticeable in the 2% P-doped sample. This observation points to a minimized disparity in phase transformations between charging and discharging.
These in situ XRD findings corroborate the results obtained from the in situ Raman experiments. They particularly highlight the enhanced depth of lithiation in P-doped materials and the improvement in first-cycle Coulombic efficiency due to more reversible phase transformations.
■ DISCUSSION
Doped TiO2 materials, including those with P doping, have demonstrated superior lithium-ion storage capacity and rate performance compared to undoped TiO2. However, detailed studies on the phase transformation mechanism and on the control of the lithium insertion−extraction processes have been limited. According to our in situ Raman spectroscopy and XRD analyses (Figures 3 and 9), P-doped TiO2 anatase materials achieve higher lithiation levels during the first discharge process compared to nondoped TiO2. In nondoped TiO2, the formation of the LiTiO2 phase on particle surfaces hinders higher lithiation states due to its low electronic conductivity. In contrast, P doping significantly enhances the conductivity of LiTiO2, facilitating deeper lithiation states and higher initial specific capacities.
This study highlights the impact of P doping on the reversibility of the lithiation/delithiation reactions. Nondoped TiO2 does not revert completely to its initial structure at the end of the first cycle, with orthorhombic Li0.5TiO2 coexisting with anatase TiO2. However, for 2% and 5% P-doped TiO2, the anatase phase is fully recoverable at the end of charging in the initial four cycles, demonstrating enhanced reaction reversibility. Nonetheless, traces of the tetragonal LiTiO2 phase are detectable in 10% P-doped TiO2.
Through hierarchical clustering applied to the in situ data, we successfully identified the separate phases during the double phase transformation "tetragonal TiO2 ↔ orthorhombic Li0.5TiO2 ↔ tetragonal LiTiO2". For the first time, we can distinctly identify the Raman spectra of the Li-rich LiTiO2 phase. Additionally, our analysis reveals that nonreversible lithium ions trapped in the structure impose stress, as evidenced by the red shift and decreased vibrational energy of the B2g and B3g bands.
The incorporation of phosphorus into TiO2 leads to a reduction in the crystallite size and an increase in the material conductivity. This change is reflected in the broadening and shifting of the Raman bands. Initially, P-doping results in a red shift and broadening of these bands, correlating with the reduced crystallite size and enhanced conductivity. However, upon cycling, a lithiation-triggered effect emerges, characterized by a blue shift of the bands. The interplay between the stress induced by trapped lithium and the lithiation-triggered effect of phosphorus doping creates a dynamic equilibrium. An optimal P-doping level is suggested, at which phosphorus alleviates the stress induced by lithiation. Nevertheless, it is observed that higher doping levels may reintroduce stress into the material.
■ CONCLUSIONS
This study provides critical insights into the effects of phosphorus (P) doping on the electrochemical behavior of titanium dioxide (TiO2) in lithium-ion batteries. Key findings include the enhanced lithiation capability of P-doped TiO2, with in situ Raman spectroscopy and XRD analyses revealing higher lithiation levels in P-doped materials compared to nondoped counterparts. This enhancement is attributed to the improved conductivity of the LiTiO2 phase due to P doping, which facilitates deeper lithiation and higher initial specific capacities.
A significant improvement in the reversibility of the lithiation/delithiation reactions was observed in P-doped TiO2. While nondoped TiO2 exhibited incomplete reversibility, P-doped TiO2, particularly at the 2% and 5% doping levels, demonstrated a more reversible recovery of the anatase phase during the initial cycles. The study also highlighted the contrasting effects of trapped lithium and P doping on the vibrational modes of TiO2, with an optimal P-doping level of around 3.3 wt % that balances these effects to enhance the overall electrochemical performance.
These findings have important implications for the development of TiO2-based anodes in lithium-ion batteries. The enhanced understanding of phase transformations and conductivity improvements through P doping opens avenues for designing more efficient and durable anode materials, potentially leading to batteries with greater capacities and longer life cycles.
■ METHODS
Materials Preparation. Anatase titanium oxide (TiO2) and P-doped TiO2 nanoparticles were synthesized using the sol−gel method, following the procedure outlined by Karim et al. 22,45 A volume V1 of titanium(IV) isopropoxide solution (99.999% trace metals basis, Sigma-Aldrich) was mixed with 100 mL of absolute ethanol (Sigma-Aldrich), and then a volume V2 of phosphoric acid solution (Sigma-Aldrich) was added to the mixture under magnetic stirring. Table 1 presents the volumes for each doping level.
An alginate solution was prepared by dissolving 1 g of sodium alginate powder in 100 mL of distilled water before being added dropwise to the first solution. The mixture was stirred for 3 h at room temperature. The solid was collected after repeated centrifugation at 10,000 rpm for 10 min and washed with distilled water. The powder was dried overnight at 120 °C and then calcined at 400 °C for 5 h to crystallize both the TiO2 and the P-TiO2 nanoparticles.
Electrochemical Analysis and In Situ Cell Assembly. The electrodes were assembled in an argon-filled glovebox, using an in situ EL-Cell (ECC-Opto-Std, EL-CELL) with a sapphire optical window designed for spectroelectrochemical investigation of the P-doped and undoped TiO2 electrodes without any additive. A freshly cut and cleaned Li metal disk was used as the counter electrode, with a glass fiber separator (ECC1-01-0012-J/L, EL-CELL) soaked in an electrolyte of 1 mol·L−1 LiPF6 dissolved in 1:1 ethylene carbonate (EC)−dimethyl carbonate (DMC) (Sigma-Aldrich). 46 The cell was left to rest overnight at OCV. Galvanostatic cycling with potential limitation (GCPL) was performed between 1 and 3 V vs Li/Li+ at a C-rate of C/20 using a Biologic SP150 potentiostat. During cycling, Raman spectra were recorded using a LabRAM HR Evolution Raman spectrometer (Horiba Scientific) with a confocal microscope (×50 objective) and a 1800 lines/mm grating, with an excitation wavelength of 532 nm, a 50 s exposure time, and two accumulations per spectrum. The experiments were performed at a room temperature of 22 °C. The crystal structure of the products was measured on a Bruker D8 Advance X-ray diffractometer (XRD) with Cu Kα radiation (λ = 1.5418 Å). The same equipment was used for in situ measurements; the XRD pattern was recorded every 30 min while the material was being charged and discharged. A high-resolution scanning electron microscope (SEM) (EVO 10, ZEISS) was used to examine the morphology of the synthesized material. Prior to SEM inspection, the sample's surface was coated with carbon while being held under high vacuum for 20 min in a sputter coater.
Data Analysis. Data analysis was conducted using custom Python code. This code utilized libraries such as pandas, numpy, matplotlib, mpl_toolkits, peakutils, csv, scipy, and sklearn for data handling, processing, and analysis. The analysis encompassed tasks such as reading, organizing, smoothing, interpolating, and normalizing the data, detecting peaks in the Raman spectroscopy data, and applying linear regression and hierarchical clustering models for in-depth analysis.
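A minimal sketch of the preprocessing chain is shown below, using the scipy and peakutils libraries named above; the filter window, polynomial degrees, and input array are illustrative choices, not the exact parameters used:

```python
# Smoothing, baseline subtraction, and area normalization of one spectrum.
import numpy as np
import peakutils
from scipy.signal import savgol_filter

def preprocess(intensity: np.ndarray) -> np.ndarray:
    smoothed = savgol_filter(intensity, window_length=11, polyorder=3)
    baseline = peakutils.baseline(smoothed, deg=3)   # polynomial background
    corrected = np.clip(smoothed - baseline, 0.0, None)
    return corrected / corrected.sum()               # area normalization

spectrum = np.abs(np.random.randn(500))              # placeholder intensities
print(preprocess(spectrum).sum())                    # -> 1.0
```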
Figure 1. X-ray diffraction (XRD) and Raman spectra of pristine and P-doped TiO2: (a) XRD patterns showing crystallite sizes for each doping level; (b) area-normalized Raman spectra.
Figure 2. (a) Second-cycle voltage profiles for TiO2 with varying P-doping levels; (b) capacity retention for TiO2 at different P-doping levels, normalized to the capacity of the second cycle.
Figure 5. Raman spectra of lithiated TiO2 generated by hierarchical clustering of the in situ Raman spectroscopy data for x = 0, x = 0.5, and x = 1.
Figure 6. Change in the phase fraction of the three phases of TiO2 over cycling at the different P-doping levels.
Figure 8. B2g, B3g band wavenumber change between the first cycle and the fourth cycle.
Table 1. Synthesis volumes for each P-doping level. | 2024-03-24T15:20:42.091Z | 2024-03-22T00:00:00.000 | {
"year": 2024,
"sha1": "c6581005b39e9a6367fe88a262cfe90d411b4181",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.3c08122",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b14cac6b0f14f3174df4e2264c1d82b5f0f08e13",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": []
} |
90274083 | pes2o/s2orc | v3-fos-license | Impact of unilateral eyestalk ablation on major biochemical parameters of muscle of the freshwater crab Travancoriana schirnerae Bott , 1969 ( Decapoda : Gecarcinucidae )
The current study examined the moultwise impact of unilateral eyestalk ablation on major biochemical parameters of the muscle of the freshwater crab Travancoriana schirnerae Bott, 1969 (Decapoda: Gecarcinucidae). Meat from unilaterally destalked crabs 15 days post-operation was analyzed for protein, oligosaccharide, polysaccharide, total free amino acid, total lipid, cholesterol and moisture content following standard procedures. The results indicated a moultwise cycling of the major biochemical parameters of the muscle. Unilateral eyestalk ablation induced a rise in all the major biochemical parameters except moisture, irrespective of the moult stage. The alterations induced by unilateral destalkation in these biochemical parameters determine the suitability of this technique for enhancing meat quality, as practiced in aquaculture farms for marine decapods.
Introduction
In crustaceans, the X-organ sinus gland (XO-SG) complex located in the eyestalks synthesizes and releases a number of neuropeptide hormones such as gonad inhibiting hormone (GIH), mandibular organ inhibiting hormone (MO-IH), crustacean hyperglycemic hormone (CHH) and moult inhibiting hormone (MIH), which play important roles in growth and reproduction. These hormones also play a role in the metabolism of proteins, lipids, nitrogen, calcium and carbohydrates, and in water balance (Highnam and Hill, 1979; Beltz, 1988; Chang, 1992). The accumulation and consequent release of stored organic reserves from different tissues are to a large extent influenced by the regulatory and inhibitory actions of XO-SG hormones. Loss of one or both eyestalks induces significant alterations in stored reserves, consequently modifying the quality of edible tissues. Ablation of one or both eyestalks is performed in aquaculture farms to induce early maturation in juveniles, so as to obtain mature adults with enhanced meat quality much ahead of the time taken under normal conditions (Santiago, 1977; Primavera, 1978; Lin et al., 2001).
The relative concentrations of biochemical components in muscle vary during the course of a normal moult cycle in crustaceans. In the freshwater crayfish Orconectes virilis, O'Connor and Gilbert (1969) observed that initiation of premoult activity resulted in an increased lipid content of the abdominal muscle. Studies by Spindler-Barth (1976) in the shore crab Carcinus maenas observed a rise in the muscle glycogen content of premoult animals without significant variation in haemolymph glucose throughout the moult cycle. In the land crab Gecarcinus lateralis, free amino acid pools in the haemolymph were lower and more variable than those in the muscle, which showed a three-fold decrease during premoult in comparison to the intermoult period (Yamaoka and Skinner, 1976). A mid-premoult increase in lipid content, with a late-premoult decrease, was recorded in the freshwater shrimp Palaemon paucidens (Teshima and Kanazawa, 1976). Suneetha et al. (2009) reported a rapid synthesis of lipid and protein in the muscle from the postmoult to the premoult stage in the shrimp Penaeus monodon. Higher protein, carbohydrate and lipid levels were recorded in the hard-shelled swimmer crab Portunus sanguinolentus (Sudhakar et al., 2009a). Paray et al. (2014) observed that the protein content of muscle and hepatopancreas peaked during postmoult and gradually declined in the intermoult and premoult stages of Portunus semisulcatus.
Changes in the biochemical composition of various tissues due to eyestalk ablation have been extensively studied in decapod crustaceans (Sainz-Hernández et al., 2008; Khazraeenia and Khazraiinia, 2009; Wu et al., 2013; Padmanabhan and Raghavan, 2016). Unilateral eyestalk ablation reportedly improved the protein, carbohydrate, lipid, ash, amino acid and moisture content in the muscle of the portunid crabs Charybdis lucifera and Portunus sanguinolentus (Murugesan et al., 2008; Sudhakar et al., 2009b). Less attention has been paid to the moultwise impact of eyestalk ablation on the biochemical composition of freshwater decapods; Koshio et al. (1992) and Soundarapandian and Ananthan (2008) have reported effects of eyestalk ablation in the freshwater prawn Macrobrachium rosenbergii. The present study on the moultwise impact of unilateral eyestalk ablation on the biochemical composition of Travancoriana schirnerae Bott, 1969 (Decapoda: Gecarcinucidae) meat would be the first report of its kind. The knowledge generated from this study will help to decide the suitability of unilateral eyestalk ablation to enhance meat quality, as practiced in aquaculture farms of marine decapods.
Materials and methods
Adult crabs (carapace width 4.0-5.0 cm) in different moult stages were collected from the paddy fields of Ondayangadi, about 5 km northeast of Mananthavady (11.82° N, 76.02° E; altitude 767 m) in the Wayanad district of Kerala. They were immediately brought to the laboratory and maintained in clean plastic tubs. The body weight and moult stages were recorded. The moult stages were ascertained by microscopic examination of setal development in the epipodite of the third maxilliped in males and in the pleopods in females.
Unilateral eyestalk ablation was carried out on the third day after acclimatization. The crabs were cleaned under running tap water, dried and swabbed with 70% alcohol. With a pair of sterilized scissors, the right eyestalk was carefully ablated from the base; the wound was quickly cauterized with a blunt, red-hot needle to prevent bleeding (Caillouet, 1972). Antiseptic powder was applied on the wound, which was then sealed with cotton. Unablated control and eyestalk-ablated crabs were placed in separate tubs. On the 15th day, both the control and ablated crabs were sacrificed. Freshly dissected meat was weighed and used for the analyses. Protein, carbohydrate and lipid contents were analyzed adopting standard methods (Lowry et al., 1951; Dubois et al., 1956; Folch et al., 1957; Frings et al., 1972; Johnston and Davies, 1972).
Total free amino acid (FAA) and cholesterol contents were estimated according to Lee and Takahashi (1966) and Zlatkis et al. (1953), respectively.
To determine the moisture content, one gram of freshly dissected meat was kept in an oven at 105 °C and weighed at regular intervals until a constant weight was obtained. The difference between the wet and dry weights was expressed as a percentage of the wet weight of the tissue (Pillay and Nair, 1973).
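A minimal sketch of this calculation, with hypothetical weighings:

```python
def moisture_percent(wet_mg: float, dry_mg: float) -> float:
    """Moisture as % of wet tissue weight: (wet - dry) / wet * 100."""
    return 100.0 * (wet_mg - dry_mg) / wet_mg

# Hypothetical result for one gram of meat dried at 105 degC to constant weight.
print(moisture_percent(1000.0, 240.0))  # -> 76.0 (% moisture)
```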
Analyses were carried out in pentaplicate and the results are presented as mean ± SD. The data were analyzed by one-way analysis of variance (ANOVA) using SPSS 16 software.
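The study used SPSS; purely as an illustration, an equivalent one-way ANOVA on hypothetical pentaplicate values can be run with SciPy:

```python
from scipy.stats import f_oneway

# Hypothetical pentaplicate protein values (% wet weight) for one moult stage.
control = [19.1, 20.5, 18.7, 21.3, 19.5]
ablated = [23.0, 24.1, 22.5, 25.2, 23.6]

f_stat, p_value = f_oneway(control, ablated)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant effect
```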
Results
The moultwise variation in the major biochemical constituents: protein, oligosaccharide and polysaccharide, total FAA, total lipid, cholesterol and moisture in the muscle of unilaterally destalked Travancoriana schirnerae is presented in Table 1.
Table 1. Impact of unilateral eyestalk ablation on major biochemical components in the muscle of Travancoriana schirnerae during different moult stages.
Unilateral eyestalk ablation induced a significant rise in the muscle protein content irrespective of the moult stage. In the intermoult control animals, the protein content was 19.81 ± 2.38%, and in eyestalk-ablated crabs it increased significantly to 23.68 ± 2.61%. The highest protein content was noticed in the unilaterally eyestalk-ablated premoult crabs (29.44 ± 4.05%), which was significantly higher than in the control group (26.75 ± 3.51%). In the postmoult stage, the protein content increased significantly from 17.13 ± 1.96% in control crabs to 19.35 ± 3.05% in the experimentals (Figure 1). A statistically significant rise in the oligosaccharide content was discernible in the experimental group (0.658 ± 0.14%) in comparison to the intact controls (0.500 ± 0.20%) of the intermoult stage. Unilateral destalkation also produced a significant increase in oligosaccharide content (1.01 ± 0.26%) in premoult crabs, compared to their control counterparts (0.743 ± 0.26%). Though a sharp decline in the oligosaccharide content was observed in the control crabs of the postmoult stage (0.442 ± 0.05%), unilateral ablation showed a significant rise in this value (0.607 ± 0.12%) (Figure 2). In the unablated crabs, from the intermoult to the premoult stage, the muscle polysaccharide content demonstrated a steady increase, thereafter declining to its lowest value in the postmoult stage. The differences in the polysaccharide contents of intermoult control (0.575 ± 0.19%) and experimental (0.888 ± 0.29%) crabs were found to be statistically significant. Among the experimental animals of the three moult stages, premoult crabs recorded the highest value for polysaccharide content (1.205 ± 0.33%), while their controls recorded 0.790 ± 0.13%. In postmoult animals, unilateral eyestalk ablation induced a significant upsurge in the muscle polysaccharide content from 0.432 ± 0.11% to 0.584 ± 0.10% (Figure 3).
Discussion
The present study revealed that unilateral destalkation during different moult stages induced a rise in all the measured muscle biochemical components except moisture. In control animals, the values demonstrated an increasing trend from the postmoult to the premoult stage. It is well known that crustaceans accumulate nutrients prior to moulting to provide enough energy for ecdysis. Passano (1960) proposed that biochemical processes are more pronounced than morphological changes during the early premoult stages.
Our observations revealed a significant increase in the average protein value of unilaterally destalked Travancoriana schirnerae in all three moult stages. In agreement with the present findings, unilateral eyestalk ablation produced a significant rise in the protein content of the marine intermoult crabs Charybdis lucifera and Portunus sanguinolentus (Murugesan et al., 2008; Sudhakar et al., 2009b). In contrast, Koshio et al. (1992) reported that the intermoult protein content of the whole body remained unaltered in unilaterally destalked Macrobrachium rosenbergii. Among control Travancoriana schirnerae, the highest value for muscle protein was recorded in premoult crabs and the lowest in postmoult crabs. Studies by Suneetha et al. (2009) on the muscle of Penaeus monodon showed that rapid synthesis of protein occurs from the postmoult to the premoult stage. Sudhakar et al. (2009a) documented that the protein content of hard-shelled Portunus sanguinolentus increased significantly compared to soft-shelled crabs. However, the protein content peaked during the postmoult stage and gradually declined during the intermoult and premoult stages in the muscle of Portunus semisulcatus (Paray et al., 2014).
In Travancoriana schirnerae, the oligo- and polysaccharide reserves in the muscle recorded a significant rise as a result of unilateral ablation in all three moult stages. A similar rise was reported in the carbohydrate content of unilaterally destalked intermoult Charybdis lucifera and Portunus sanguinolentus (Murugesan et al., 2008; Sudhakar et al., 2009b). The free sugar content in the muscle of the field crab Oziotelphusa senex senex showed an increase following eyestalk extirpation (Venkataramanaiah and Ramamurthi, 1980). Many authors have investigated the variation in carbohydrate content as a result of destalkation. Bilateral destalkation during intermoult induced an increase in the muscle glycogen concentration of the freshwater crabs Paratelphusa jacquemontii (Rangnekar et al., 1961) and Varuna litterata (Madhyastha and Rangnekar, 1976). Rao et al. (1985) observed that loss of eyestalks resulted in increased muscle glycogen concentration in the intermoult fiddler crabs Uca annulipes and Uca triangularis. Rangnekar and Madhyastha (1971) and Nagabhushanam and Kulkarni (1980) noticed a significant rise in the muscle glycogen content of bilaterally destalked Metapenaeus monoceros and Parapenaeopsis hardwickii, respectively. Wang and Scheer (1963) reported that the enzyme uridine diphosphate glucose–glycogen transglucosylase (UDPG-GT), which converts glucose to glycogen in the muscle, is under the control of eyestalk hormones. The significant rise in the muscle carbohydrate content in the present study may be attributed to the partial removal of eyestalk hormones as a result of unilateral destalkation. In contrast, unilateral eyestalk ablation did not affect the carbohydrate content of Macrobrachium malcolmsonii during the intermoult stage (Soundarapandian and Ananthan, 2008).
Among control Travancoriana schirnerae, the premoult stage recorded comparatively higher values for oligo- and polysaccharides than the postmoult and intermoult stages. Similar observations were made by Spindler-Barth (1976) in Carcinus maenas, where the muscle glycogen concentration reached its maximum value during the premoult stage.
The total FAA content in the muscle of unilaterally ablated Travancoriana schirnerae showed a statistically significant rise during all three moult stages. Likewise, unilateral destalkation in intermoult Orconectes virilis caused an increased FAA level in the muscle within 24 hours (McWhinnie et al., 1972). The haemolymph FAA content recorded a significant rise in unilaterally ablated M. monoceros (Surendranath et al., 1992). Soundarapandian and Ananthan (2008) reported that unilateral destalkation led to enhanced levels of total FAA in the whole body of juvenile Macrobrachium malcolmsonii. Conversely, bilateral destalkation led to decreased FAA content in the muscle and haemolymph of the intermoult freshwater crab Barytelphusa guerini (Gangothri et al., 1988).
The increase in FAA content observed in the experimental crabs of the present investigation may be ascribed to changes in protein metabolism likely initiated by the partial removal of eyestalk hormones. In intact Travancoriana schirnerae, the total FAA levels increased steadily from the postmoult to the premoult stage. Comparable results were noticed in Orconectes virilis, where the muscle FAA levels increased two- to three-fold during the premoult stage (McWhinnie et al., 1972). A high amino acid content was recorded in the muscle of premoult Penaeus monodon, which reached significantly lower levels during the postmoult stage (Faadila et al., 2013). Conversely, the total FAA content in the muscle of Gecarcinus lateralis decreased almost three-fold during the premoult period compared to the intermoult period (Yamaoka and Skinner, 1976). In the blue crab Callinectes sapidus, the whole-body tissue FAA content was maximal in the intermoult stage, declined during premoult and remained low in the postmoult condition (Wheatly, 1985).
The current investigation observed an increase in the muscle lipid content of destalked crabs irrespective of the moult stage. Similar results were documented from the body tissue of intermoult unilaterally destalked Charybdis lucifera and Portunus sanguinolentus three days post-operation (Murugesan et al., 2008; Sudhakar et al., 2009b). Diwan (1973) recorded an increase in lipid content on account of eyestalk ablation in the edible muscle of Barytelphusa cunicularis. However, no significant change in the fat content of the muscle was noticed after bilateral eyestalk removal in Varuna litterata and Parapenaeopsis hardwickii (Madhyastha and Rangnekar, 1976; Nagabhushanam and Kulkarni, 1980). Rao et al. (1985) observed that bilateral eyestalk ablation induced a decrease in the fat content of fiddler crab muscle. Unilateral eyestalk ablation did not alter the whole-body lipid content in Macrobrachium rosenbergii (Koshio et al., 1992).
As observed in our study, a progressive increase in the lipid content of the abdominal muscle from the intermoult to the premoult stage was recorded in Orconectes virilis (O'Connor and Gilbert, 1969) and Penaeus monodon (Suneetha et al., 2009). In contrast, investigations by Tanikawa et al. (1958) in the king crab Paralithodes camtschatica revealed that soft-shelled meat contained larger amounts of crude fat than hard-shelled meat. Kanazawa et al. (1976) observed almost the same levels of muscle lipids throughout the moult stages in Penaeus japonicus. The increase in muscle lipid during the premoult period is crucial, as it will be utilized as an energy source for the subsequent moulting process. Muscle lipid is a major contributor to muscle revival after ecdysis, acting as a reservoir for the production of cellular and subcellular membranes, and is exhausted after the postmoult stage (O'Connor and Gilbert, 1969).
In the present investigation, the muscle cholesterol content recorded a significant rise in the experimental groups compared to the intact crabs in all three moult stages. In agreement with our observations, eyestalk ablation caused a significant rise in the muscle cholesterol concentration of intermoult Sesarma boulengeri (Sinha and Mooswi, 1978). On the contrary, Teshima (1978) reported that the cholesterol content in the muscle of unilaterally eyestalk-ablated intermoult Penaeus japonicus was slightly lower than that of the unablated group. Arcos et al. (2003) recorded a significant decrease in the haemolymph cholesterol levels eight days after unilateral ablation in intermoult Litopenaeus vannamei. In the same species, Sainz-Hernández et al. (2008) observed that neither unilateral nor bilateral eyestalk ablation induced any significant difference in the haemolymph cholesterol concentration. The muscle cholesterol content in eyestalk-intact Travancoriana schirnerae gradually increased from the postmoult to the premoult stage. However, such moultwise variations were not recorded in the muscle of Penaeus japonicus (Kanazawa et al., 1976; Gómez et al., 2012).
In the current investigation, unilateral eyestalk ablation did not induce any significant difference in the muscle moisture content between the moult stages. This observation suggests that eyestalk principles do not play a role in the regulation of the water content of the muscle tissue. Similarly, unilateral destalkation did not yield significant variation in the moisture content of the muscle in juvenile Macrobrachium malcolmsonii (Soundarapandian and Ananthan, 2008) and adult Portunus sanguinolentus (Sudhakar et al., 2009b). Conversely, investigations in juvenile Homarus americanus revealed significantly higher muscle tissue water content in response to destalkation (Charmantier et al., 1984; Jackson et al., 1987). In destalked intermoult Ocypode macrocera, a significant increase in moisture content was recorded from the first 24 hours until the eighth day of the experiment (Bhat et al., 2012).
In the current study, no moultwise fluctuation was observed in the moisture content of the muscle. Comparable observations were made by Cesar et al. (2006) across the moult stages of Litopenaeus vannamei. In Penaeus monodon, no marked variation in the muscle moisture content was reported during the different moult stages (Faadila et al., 2013). However, Travis (1957) showed that in the spiny lobster Panulirus argus the proportion of water was higher at ecdysis and in the early postmoult stage, declined in early premoult, and rose again in the late premoult stage. In Penaeus monodon, Suneetha et al. (2009) observed that the whole-body moisture content declined steadily as the animal progressed from the postmoult to the premoult stage.
Figure 1. Variation in the protein content of muscle of control and experimental crabs during different moult stages.
Figure 2. Changes in the muscle oligosaccharide content in response to unilateral eyestalk ablation.
Figure 3. Polysaccharide content in the muscle of unablated and unilaterally ablated crabs during different moult stages.
Figure 4. Changes in the total free amino acid content in response to unilateral eyestalk ablation.
Figure 5. Impact of unilateral eyestalk ablation on the total lipid content of muscle during different moult stages.
Figure 6. Bar graph illustrating changes in the muscle cholesterol content of control and unilaterally destalked crabs in different moult stages. | 2019-04-02T13:03:04.854Z | 2016-12-31T00:00:00.000 | {
"year": 2016,
"sha1": "1057696da8f1c1889c8afe9894661be790e700ea",
"oa_license": "CCBY",
"oa_url": "http://revista.rebibio.net/v3n6/v03n06a10.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "1057696da8f1c1889c8afe9894661be790e700ea",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
267183904 | pes2o/s2orc | v3-fos-license | A novel approach to improve reliability of aerosol jet printing process
3D structures were aerosol jet printed using silver nanoparticles in a single-step process.
Introduction
Printed flexible electronics (PFE) was developed to overcome the limitations of rigid and brittle traditional electronics by using elastic materials to fabricate electronic circuits that can be stretched or bent without breaking while still maintaining the desired electronic properties. This technology is used in a wide range of applications, including smart packaging, sensors [8], displays, and energy harvesting [60], [33], that are lightweight, low cost, and can be easily integrated into a range of electronic devices. The manufacture of flexible, stretchable, wearable, and conformal electronic components is possible due to several conventional, 3D, and hybrid printing technologies, of which the most popular are: (i) inkjet printing [20], [2], (ii) aerosol jet printing [12], [58], (iii) screen printing [37], [68], (iv) gravure printing [48], [47], and (v) stereolithography [38], [39]. Among these, inkjet printing (IJP) and aerosol jet printing (AJP) were developed recently and offer high-resolution printing with less material wastage and a comparably low electrical resistivity of ca. 4.5 × 10−8 Ωm (~3× that of bulk Ag) [66].
While IJP is dedicated to flat 2D surfaces, AJP can also be applied to print on 3D objects. AJP utilizes an aerosol stream generated from metal nanoparticle-based conductive inks to deposit small droplets of material onto a substrate. The mean size of the metal nanoparticles in the ink is usually ca. 10 nm [62] or 50–60 nm [35].
The AJP process has the capability of direct writing and works without conventional masks, owing to easy control and selective deposition of inks at precisely defined locations on the substrate. High resolution enables the fabrication of prints as small as a few micrometres in size [22] with a high level of precision and accuracy, while high printing speeds make the process suitable for high-volume production applications. Commercially available printers enable the fabrication of patterns with feature sizes from a minimum of 15 µm up to a few cm in width and from 0.1 µm in thickness [55]. The versatility to apply a wide range of materials results from an atomiser used to create a dense aerosol of microdroplets focused by a shielding gas into an aerosol stream [13]. Unlike other printing techniques, AJP operates without physical contact between the printing head and the flexible substrate and allows printing on uneven or curved surfaces. The geometry of printed paths, e.g., width and height, is regulated by the nozzle outlet orifice diameter and the stand-off distance. Furthermore, compared to traditional approaches, aerosol deposition reduces the cost of ink consumption and limits the waste of hazardous materials used in the etching processes employed by subtractive methods. These features multiply the applications of the AJP process in the fabrication of high-quality devices. However, AJP prints show the same problems as IJP, e.g., micro- and nanoscale pores within the structure, which significantly affect the properties and reliability of PFE. While in IJP the key factors contributing to the voids are usually the pinning effect, residual surface temperature, insufficient droplet overlap and surface defects [64], in AJP they are the diameter of the aerosol droplets and the surface temperature [34]. What is more, a challenging issue in AJP is the selection of parameters to avoid overspray formation.
Overspray refers to the unintended dispersion and scattering of the aerosol, which is deposited in the peripheral region of the trace and deteriorates its reliability. As small droplets with low inertia contribute mainly to overspray, there are several factors that maximise the homogeneity of the printed path, including nozzle design, aerosol generation method, ink properties, gas pressure, and substrate characteristics [42], [67], [57].
Flexible substrates need to withstand the high temperature during processing and match the coefficient of thermal expansion (CTE) of the deposited material. It should be noted that polyimide (PI), among others, shows excellent heat resistance and dimensional stability, i.e., a glass transition temperature (Tg) > 450 ˚C and a CTE of 0–10 ppm⋅K−1 [11], [24]. Additionally, PI has the mechanical properties to ensure the service life of the device after multiple bending cycles [25]. Therefore, PI is commonly applied in PFE fabrication that includes high-temperature sintering.
The printing process leaves semi-wet paths of ink on the surface of the flexible polymer substrate [65]. However, to transform the printed layer into a conductive one, a sintering procedure is necessary to evaporate the solvent, remove the polymer capping shell, and join the metal nanoparticles (NPs).
It is worth mentioning that the nanometre size of the metal particles significantly lowers the sintering temperature compared to the bulk material. Nevertheless, in conventional furnace sintering, a temperature of ca. 200 ˚C for a minimum of 30 min is required [58]. Some authors combined chemical and thermal sintering [23]; however, the ink composition should be considered. To improve the process and eliminate substrate material limitations, other sintering techniques have been proposed, e.g., electrical [6], photonic [31] or microwave [9]. Due to its processing speed, simplicity and print quality, photonic sintering [43] is more and more popular, with the following radiation sources: laser [3], [49], flash lamp [31], UV lamp [53], [51] or IR lamp [28], [61]. Photonic sintering aims to heat targets selectively rather than heating the entire system indiscriminately. In consequence, the conductive paths absorb the radiation, usually leaving the polymer substrate unaffected. The absorption spectra of silver NPs can vary depending on their size, shape, or composition.
Nevertheless, for spherical silver NPs, the absorption peak is typically in the range of 400–450 nanometres [14], [15]. Therefore, the application of UV light with wavelengths ranging from 100 to 400 nm is limited [43], while lasers, depending on the optical properties of the ink and the absorption characteristics of the NPs, have to be designed for the dedicated application [56].
Although flash lamps are very popular, intense pulsed light (IPL), with its wide spectrum embracing the UV, visible and near-IR regions, irradiates the polymer surface, limiting the choice of substrate materials [27]. In this case, the IR lamp becomes very promising for PFE sintering. IR technology uses irradiation in the near infrared (750 to 2,500 nm) and provides selective heating and short-time sintering of printed metallic NPs [28].
Printed electronics often involves complex patterns and structures with varying geometries, such as lines [22], dots, and interconnections [54]. Achieving consistent and uniform printing quality across the entire substrate can be difficult due to factors such as ink flow, substrate roughness, and ink–substrate interactions. Variations in ink viscosity, surface tension, and drying characteristics can lead to defects and inconsistencies in printed structures. The quality assessment of silver-NP-based printed electronics is complicated by the complexity of the fabrication process (e.g., ink preparation, printing, drying, curing, and post-processing) and by the small size of the prints as well as of the applied metallic powder. The crucial factor characterising the prints is electrical conductivity, which is influenced by the packability of the silver NPs and by sintering. Good packability is defined as a high density of metal NPs [71], while sintering refers to the removal of chemical agents and the joining of NPs to establish a conductive network [40].
PFE is a rapidly evolving field, and standardised testing methods for quality assessment are still being developed. There may be a lack of universally accepted protocols and techniques for evaluating critical parameters such as electrical conductivity, mechanical properties, and the durability of printed structures.
As a consequence, it is difficult to compare results across different studies. Therefore, within the present paper, an Ag-NP-based ink was applied in the AJP process to fabricate prints on thin and flexible polyimide substrates. Samples were prepared with or without additional in-line substrate heating. Two types of sintering methods were used: the furnace and the infrared lamp. Printed samples were scanned and analysed with the use of various types of microscopes, e.g., a light microscope (LM), a scanning electron microscope (SEM), a confocal microscope (CM), and an atomic force microscope (AFM). Samples with significant morphological and structural differences were produced due to different process parameter configurations.
Finally, the fabrication techniques were summarised, and optimal parameters were selected to improve the structure and reliability of printed traces.
Aerosol jet printing process
In the research, a self-built aerosol jet printer was used. The setup with the operating principle of the printing process is presented in Fig. 1. An aerosol was generated by an ultrasonic transducer working at a frequency of 1.7 MHz and a power of 18 or 24 W in a reservoir filled with 5 ml of suspension, which was the conductive ink. The size of the generated aerosol droplets was mostly in the range of 8–17 µm [34]. The produced aerosol was transported to the printing head by compressed air, which was used as the aerosol carrier and sheath gas. After being compressed and accelerated by a sheath gas stream, the flowing ink aerosol was sprayed out of the nozzle through an orifice with an inner diameter of 0.36 mm to form thin conductive traces on the foil substrate. The printing head was attached to the manipulator arm with a stand-off distance of 3 mm above the bed (Fig. 2). What is more, preliminary studies showed that the printing process is stable, without nozzle clogging or aerosol stream choking, with the SG in the range of 120–360 mbar. Therefore, the pressure of the SG in this study was set to 240 mbar. The printing velocity in both stages was set to 73 mm/min. The printing was performed at a controlled room temperature of 22 ˚C and humidity of 50%.
Table 1. Properties of the utilized ink provided by the manufacturer.
Printed traces characterization methods
Microscope analysis of the printed and sintered traces was performed using the instruments described below. Resistance measurements were performed on at least three randomly selected samples at a constant temperature of 22 °C.
Parameters selection
The research began with many experiments combining various process parameters. Parameter selection aimed to achieve homogeneous traces with a width below 50 µm (excluding overspray) and a regular, smooth surface. It is obvious that increasing the pressure of the sheath gas results in a decrease of the printed trace width. However, the carrier gas (CG) and the sheath gas (SG) are mutually dependent. After exceeding a critical pressure value of the SG, the aerosol will not be dosed without increasing the CG. On the other hand, a high pressure of the CG will provide intensive aerosol deposition, resulting in a wide trace.
The smallest tested CG pressure of 40 mbar produced traces with a width of about 120 µm including overspray, which was difficult to distinguish due to the irregular and rough surface (see Fig. 3a).
In-line drying effect
The results of the printing process with the optimal parameters selected in stage (1) (CG: 60 mbar) and without heating of the bed are presented in Fig. 6a and discussed below. With additional in-line heating, the half-dried aerosol droplets preserved their structure, without spreading and filling a free space (see Fig. 7).
The thickness of the path is ca. 6 µm. However, due to the extremely high longitudinal roughness (Ra = 0.52 µm and Rz = 3.93 µm), the trace has a black colour. Nevertheless, in contrast to the samples printed with the lowest CG pressure in stage (1) and shown in Fig. 4, the trace was rather continuous, with only local open porosity (Fig. 7c).
Therefore, to solve the problem, the power of the ultrasonic generator was increased from 18 W to 24 W. The increased volume of aerosol generated in the reservoir prevented intensive evaporation of the liquid and enabled the production of completely continuous and smooth traces (Fig. 8). In this case, the longitudinal roughness significantly decreased, to Ra = 0.11 µm and Rz = 0.7 µm. The trace printed without substrate heating (FS2) showed intensive overspray and open porosity as well (Fig. 9a). On the contrary, the trace printed on a heated substrate (FS2+H) is centred, regular, and satisfactorily narrow (Fig. 9b). The samples sintered in a furnace for 1 and 2 hours seemed to have a similar surface structure (Figs. 10 and 11). However, high-resolution AFM scans showed nano-porosity in the sample sintered for 1 h (FS1) (Fig. 10b), and nano-roughness due to the existence of nano-plateaus in the sample sintered for 2 h (FS2) (Fig. 11b). As a result, the former sample had higher surface mean roughness Sa and Sq compared to the latter (Tab. 5). The FS2 sample also showed open pores with diameters up to 8 µm and many small craters with diameters below 1.5 µm (see Fig. 12a). Despite those defects, the trace is continuous, with an overspray of 20–25 µm width on both sides. Substrate heating provides a sample free of porosity (Fig. 12b). However, the overspray region is intensively developed. AFM scans confirmed high peaks at the trace/overspray boundary, known as the coffee ring effect (CRE) (Fig. 13). The increase in overspray height resulted from accelerated evaporation of the liquid present in the ink aerosol and enhanced deposition of ejected droplets. The CRE is visible in the DM as a black region (see Fig. 8a and b, both sides of the traces). It should be emphasised that, according to the SEM results, this region is discontinuous. To confirm this statement, an additional sample was printed with insufficient aerosol volume on a heated substrate (the same as shown in Fig. 7) and sintered in a furnace for 1 hour. The DM showed a potentially satisfactory trace with a black region in the middle (Fig. 14a), while the CM scan suggests intensive porosity in this region (Fig. 14b). The reliable result, revealing a spongy structure and discontinuity, is provided by the SEM image (Fig. 14c).
Therefore, DM and CM surface analysis can be misleading and should be confirmed by SEM or AFM scans.
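For reference, the sketch below computes simplified Ra and Rz values, such as the ones reported above, from a sampled height profile. The profile is synthetic, and Rz is taken here as the plain maximum peak-to-valley height, whereas the standardised evaluation filters the profile and averages over several sampling lengths.

```python
import numpy as np

def ra_rz(profile_um: np.ndarray):
    """Ra: mean absolute deviation from the mean line.
    Rz (simplified): maximum peak-to-valley height of the whole profile."""
    z = profile_um - profile_um.mean()   # reference the mean line
    return np.abs(z).mean(), z.max() - z.min()

# Hypothetical longitudinal profile (heights in micrometres).
rng = np.random.default_rng(0)
profile = (0.1 * np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))
           + 0.05 * rng.standard_normal(2000))
ra, rz = ra_rz(profile)
print(f"Ra = {ra:.2f} um, Rz = {rz:.2f} um")
```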
Infrared sintering
The same printed samples were sintered by the IR lamp. The NIR dot lamp heated only the printed lines instead of the whole foil samples.
The sintering time was 5 min and 2.5 min for a traverse speed of 10 mm/min and 20 mm/min, respectively. Regardless of the sintering speed, the quality of the samples depended on the printing process. Intensive open porosity was present in traces printed without substrate heating (Fig. 15a). Sample IR1, sintered at the lower traverse speed, exhibits a finer structure with bonded particles and visible aggregates (Fig. 15b). In the case of sample IR2, single aggregates can be distinguished in the standard topography mode (Fig. 15c) and are highlighted by the phase image, which reflects the sample material contrast (Fig. 15d). The smoother surface structure of the IR1 sample arises from the higher thermal energy generated while sintering at the lower traverse speed. The energy was sufficient to bond all aggregates into solid material. It should be noted that, without substrate heating, the trace material contains residues of the liquid phase and polymer material, which have to be removed and which restrain the short-time sintering. However, the roughness of samples printed with substrate heating is slightly higher, probably due to increased half-dried aerosol deposition with lowered viscosity.
Discussion
The most troublesome issue in AJP is the overspray formed, usually in the nearest region of the printed trace. In some cases, the spattered overspray can cover tens of square millimetres of the substrate. It results from the presence of small aerosol droplets below 1 µm [22]. Fine droplets are ejected from the main stream and create an aerosol cloud that electrostatically embeds on the foil. Skarżyński et al. [57] concluded that wider overspray and lower homogeneity of the trace result from the lower Saffman force acting on fine aerosol droplets. The problem was first solved by adding a virtual impactor to the line transporting the ink aerosol in order to separate fine droplets [67], [52].
However, this solution increases the costs of the process by intensifying ink consumption. The present study showed that overspray can be limited and the aerosol cloud spatter totally eliminated by applying heating of the bed, which significantly influences the properties of the ink. As a silver nanoparticle suspension, the ink has to be chemically stabilised and therefore contains a precise mixture of surfactants, polymer additives, and solvent [44]. All of the stabilisers are temperature sensitive.
Therefore, the printing nozzle is usually fixed close to the substrate to increase control of the process. However, with activated bed heating, the temperature is transferred to the nozzle, enhancing evaporation of the solvent directly in the stream. As a consequence, the properties of the ink change: the surface tension decreases [4], while the viscosity and density increase, facilitating deposition [69]. It should be emphasised that the alcohol and polymer additives in the ink are responsible for the controllable transport of nanosized silver particles. Nevertheless, after deposition, the solvent has to be removed to bond the metal particles into a solid continuous trace.
Heating of the bed helps to accelerate the drying process by promoting the evaporation of solvents and leads to changes in concentration due to the destabilization of surfactants [19], [30].
This prevents the formation of defects and allows for faster printing speeds. However, heating of the deposited ink in the trace starts the decomposition of the silver precursor at temperatures above the boiling point and the decay temperature of the solvent. Therefore, silver nanoparticles are bound and form agglomerates [70] in situ in the absence of solvent (Fig. 17), or clusters [29] when a small number of nanoparticles are attached to aerosol droplets (see Fig. 16b, d). The clusters are formed by polymerisation of the capillary bridges in the wet state, at moderate temperatures below 100 °C [21]. Pham-Van et al. [50] hypothesised that these clusters form structures that minimise the second moment of the mass distribution. Moreover, Yang et al. [70] stated that thermal decomposition of the ink complex would be largely restricted in isolated clusters and would as a result form a discontinuous film. The drying process affects polymer solutions, which exhibit a change in surface tension dependent on the local solute concentration, leading to gelation [59], [26]. On the other hand, variations in surface tension along the liquid–gas interface induce a Marangoni flow that pushes particles away from the contact line and therefore leads to suppression of the CRE [4]. Anyfantakis et al. [5] reported that suspension surfactants are responsible for the solutal Marangoni flow and the flattening of the profile of the final polymer film. This phenomenon was observed in the present study through the formation of porous overspray. What is more, these porous features of the solidifying suspension can also be attributed to the breakdown of planar growth due to instability of the solid–liquid interface.
These factors influence growth kinetics and determine the growth morphology [45].
After printing, the sintering process begins; it is usually divided into two stages. In the first stage, the nanoparticles in the agglomerates are bonded into solid material. However, the residue of the chemical stabilisers provides an energy barrier to sintering. Therefore, the removal of stabilizers typically requires a temperature higher than 250 °C and a relatively long time, due to the relatively large amount of surface residues (>10 wt.%) [41] and the low mobility and physical adsorption of the surfactants [72]. It should be noted that the thermal energy requirements are reduced in the area of agglomerates due to the action of surface energy and van der Waals forces between nanoparticles [32], [17], [7]. In the second stage, the sintered aggregates are joined together, forming a solid trace (Fig. 18). The sintering of AP samples printed without bed heating results in a finer material structure (see Fig. 15) due to possible particle migration during sintering, which was confirmed by nano-scale roughness measurements. However, the structure shows some porosity and non-continuity between aggregates, with the possibility of microcrack generation. Gramlich et al. [16] found that sintering should begin before the film is fully dried, so as to form sintering necks and increase the particle-to-particle bonding significantly above the previously predominant van der Waals force. As a result, the sintered material can resist capillary pressure and thereby prevent cracks.
The selection of the sintering method and the sintering parameters significantly affected the quality of the prints. Furnace sintering lasts much longer, and the entire substrate is heated uniformly. On the contrary, infrared radiation directly affected the traces by penetrating the printed layer, causing localised heating and enabling selective sintering of the material.
While infrared sintering can provide rapid heating, precise temperature control might be more challenging than in a furnace.
On the other hand, some heat-sensitive substrates may be unsuitable for furnace sintering due to prolonged exposure to high temperatures. Therefore, IR sintering is more suitable for heat-sensitive substrates, as it provides localised heating without subjecting the entire substrate to the prolonged effect of high temperatures [63], [18]. The results showed that furnace sintering (FS) without bed heating for 1 hour generated high nano-porosity due to insufficient energy. Increasing the sintering time to 2 hours solved the problem: the porosity faded (compare Fig. 10 and Fig. 11) and the roughness decreased (Tab. 5).
The quality of the prints was verified by resistance measurements. The defects detected in samples FS1 and FS1+H were responsible for the highest values of sheet resistance. The extended sintering time decreased the sheet resistance by two orders of magnitude, from 29.6 Ω/□ in sample FS1 to 0.283 Ω/□ in sample FS2. In the case of infrared sintering, the traverse velocity of the polymer substrate regulated the material structure. The lower velocity applied in samples IR1 and IR1+H provided much lower nanoscale roughness (Tab. 5), indicating decreased porosity. As a result, a decrease in sheet resistance was achieved.
It should be emphasised that the sheet resistance of the printed traces is comparable with the results obtained with commercially available printers. Chen et al. [12] printed large traces, 2 mm wide and 10 µm thick, on a paper substrate and obtained a sheet resistance of 1.13 × 10−2 Ω/□, while Seiti et al. [55] declared a sheet resistance in the range of 0.05–0.1 Ω/□ on a polymer substrate, depending on the ink and trace geometry. The prints fabricated in this study and sintered by the IR lamp fall within this range. Nevertheless, additional in-line bed heating should be applied to increase the thickness of the prints and, as a result, decrease the resistance.
Conclusions
In the presented study, a silver-nanoparticle-based ink was applied in the aerosol jet printing process to fabricate prints on flexible polyimide substrates.
Fig. 1. Schematic diagram of the aerosol jet printer with an ultrasonic atomiser, equipped with: A – control unit, B – microfluidic flow controller, C – ink reservoir, D – ultrasonic generator, E – air compressor, and F – printing head with the nozzle fixed above the moving bed.
A microfluidic flow controller (Elveflow OB1 MK3+, Paris, France) enabled precise regulation of the compressed air pressure down to 1 mbar. Samples of 15×60 mm made of polyimide foil 100 µm thick (previously degreased with ethanol) were fixed to the high-precision mobile CNC heating bed with magnets. The length of the printed traces was 50 mm. The selected samples were printed with additional in-line heating of the substrate material at a temperature of 90 ˚C.
Fig. 2. Printing head (front – A) and IR lamp (back – B) fixed in the holder and attached to the manipulator arm; C – heating and moving bed. 1 – aerosol inlet, 2 – shielding gas inlet, 3 – main resin-printed body, 4 – fixing screws, 5 – exchangeable nozzle with an inner orifice diameter of 0.36 mm.

The ink used in the research is a commercially available suspension of silver nanoparticles (d50 = 6 nm, up to 45 vol.%) suspended in a solvent, mostly tetradecane or a mixture of ethanol and glycol (Amepox Microelectronics, Ltd., Łódź, Poland). The properties of the ink are collected in Tab. 1. The research was divided into two stages: (1) printing parameter selection, i.e., the pressures of the carrier and sheath gas; and (2) printing and sintering.
Table 2. Range of printing parameters in stage (1).

Two methods of conducting-ink sintering were applied: (i) conventional furnace sintering at 230 ˚C for 60 min or 120 min, and (ii) IR lamp sintering with a voltage of 5 V (50% of the maximum power), four passes with a traverse velocity of 10 or 20 mm/min. The ink manufacturer suggests furnace sintering for 60 min. Printed traces were analysed for different variations of sample preparation: as printed, without or with heating of the bed (AP, AP+H), and after furnace or IR sintering, again without or with in-line heating (FS and IR series). The samples were sintered in a muffle furnace (Nabertherm LT9/12, Lilienthal, Germany) with a heating temperature range of 30–1300 ˚C and by an infrared adphosNIR dot lamp (adphos, Bruckmuehler, Germany) equipped with a 150 W halogen emitter lamp. A dedicated head focusses the NIR (near infrared) light onto a small heated area with a diameter of around 7 mm, providing a high heating energy density of 3.9 W/mm2. The system heats only a very small round area and thus needs to be traversed during operation. The precise specifications and process parameters of printing and post-printing treatment are collected and presented in Tab. 3.
Microscope analysis was performed using a VHX-6000 digital microscope (DM; Keyence, Osaka, Japan) and an atomic force microscope (AFM; NT-MDT NTEGRA Prima, Apeldoorn, The Netherlands). AFM scans were performed in resonant, non-contact mode with NANOSENSORS PPP-NCLR cantilevers. Additionally, the surface of the sintered traces was analysed with a scanning electron microscope (SEM; Tescan VEGA 3 SBH, Brno, Czech Republic) equipped with SE and BSE detectors and an EDS system for elemental analysis, and with a confocal microscope (CM; Olympus Lext OLS5100). Surface morphology, dimensions, and overspray were assessed by DM and SEM, while CM and AFM provided surface topography parameters (profile height and roughness) and irregularity dimensions. Linear roughness was measured along the axis of the trace line by CM, while areal roughness was measured over a 20×50 µm rectangle by AFM. For the electrical characterization, i.e., the print resistance, a 4-point probe measurement method with a precision source/measure unit (B2901BL, Keysight Technologies, Santa Rosa, USA) was used.
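As a rough illustration of how such sheet-resistance values are obtained for a narrow line, the sketch below converts a 4-point V/I reading and the trace geometry into Ω/□; the probe spacing, trace width, and electrical readings are hypothetical and do not reproduce the authors' measurement.

```python
def sheet_resistance(voltage_v: float, current_a: float,
                     probe_spacing_um: float, trace_width_um: float) -> float:
    """Sheet resistance of a narrow printed line, R_s = (V/I) * (w/L),
    where L is the distance between the voltage probes and w the trace width.
    A minimal sketch assuming a uniform line much narrower than the probe
    spacing; all numbers below are hypothetical."""
    resistance = voltage_v / current_a
    return resistance * trace_width_um / probe_spacing_um

# Hypothetical reading on a 20-um-wide trace with 10-mm probe spacing.
print(f"{sheet_resistance(0.05, 1e-3, 10_000, 20):.3f} Ohm/sq")
```

For a large-area film rather than a narrow line, the standard semi-infinite-sheet correction R_s = (π/ln 2)·V/I would be used instead.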
As shown in Fig. 3a, the trace surface is irregular and rough. Furthermore, intensive overspray is visible on the left and right sides of the path direction. This suggests that an aerosol cloud consisting of the finest droplets was formed and ejected from the printed stream by the SG. Therefore, the aerosol cloud, in the form of spatter, is responsible for the intense smudging of the foil. In the central part of the trace, the presence of agglomerated, chemically bonded particles, irregularly deposited on the substrate, was observed.
Fig. 4. CM scans of the sample presented in Fig. 3: profile directions (a, b), scanned surface (c), and generated lateral (d) and longitudinal (e) profiles.

When the parameter values were switched to a CG pressure of 60 mbar, the trace width was maintained the same. However, the structure of the trace changed significantly and consisted of two visible parts: (i) a central homogeneous line with a width of 38 µm and (ii) a line with peripheral overspray with a width of ca. 200 µm. It is worth stressing that the aerosol cloud was significantly reduced; the higher pressure of the CG increased the aerosol deposition. In Fig. 6a it is clearly visible that the printed traces are well concentrated on the main axis. However, intensive spatter occurred, smudging all around the polymeric substrate. What is more, some local irregularity of the trace was found, with significant open porosity in the central, thickest part of the trace (Fig. 6b). The porosity resulted from evaporation of the solvent. Samples were kept at room temperature and measured 1 h after deposition. In this short time, the drying liquid transformed into gas and, by the arising pressure, was removed from the deposited material, leaving open porosities resembling 'chimneys' in the central continuous part of the prints.
Fig. 6. DM micrographs of the AP sample produced with the selected parameters (a, b). Red arrows indicate spatter caused by the aerosol cloud outside the trace path, while yellow arrows point to irregularity of the trace (cyclically wider and narrower areas).

The application of heating significantly changed the print characteristics. Bed heating causes an increase of the process temperature, changing the flow characteristics and the heterogeneity of the traces. It is stated that the heated nozzle acts as a capillary with rising pressure, which as a result increased the velocity of the SG in the nozzle. What is more, the rising temperature affected the aerosol material as well. The intense evaporation of the liquid in the stream significantly increased the viscosity of the ink.
Fig. 8. Micrographs of a smooth trace printed onto a heated substrate (AP+H) with an appropriate aerosol volume: DM (a), CM (b), and generated lateral (c) and longitudinal (d) profiles.

The results of the measurements of the printed trace geometry and surface parameters are presented in Tab. 4. Despite spatter formation, all traces printed without substrate heating showed a high width of the main body as well as a large overspray, in the ranges of 30–35 µm and 186–203 µm, respectively. On the contrary, samples prepared with additional heating showed widths in the range of 19–23 µm. Furthermore, the thicknesses of the prints carried out with and without heating of the substrate foil were 5.5 µm and 1.2 µm, respectively. This huge difference arose from liquid evaporation, which changed the viscosity of the ink aerosol as the temperature increased. As a result, the overspray was limited, and an increase in thickness was noted.
3.3. Furnace sintering

Aerosol jet printed traces require a post-printing sintering process to achieve optimal conductivity and mechanical integrity. The sintering process involves subjecting the printed traces to elevated temperatures, which promotes the bonding and consolidation of the conductive particles in the ink. The basic method for sintering printed traces is furnace heating. Both types of samples, as-printed and as-printed with in-line heating, were sintered in the furnace for 1 or 2 hours. Despite the manufacturer's suggested sintering time (1 hour) [73], we extended the time to 2 hours and compared the surface structure and print properties. The extended curing time in the furnace should supply additional energy to the silver nanoparticles and improve the sintering process. The sintered samples showed the same features as the non-sintered samples. Therefore, samples produced without in-line heating (FS2) were characterized by open pores and small craters (see Fig. 12a).
Fig. 14. DM micrograph of the trace surface printed with insufficient aerosol volume (a), with a CM scan along the marked transverse line (b) and an SEM image (SE) in the middle of the trace width (c).
Fig. 15. AFM scans of samples IR1 (a, b) and IR2 (c, d): topography (a–c) and topography phase image, relative to the sample material (d).

Substrate heating enhanced liquid evaporation and aerosol deposition. Samples IR1+H and IR2+H showed excessive development of overspray in the trace boundary region. Figure 16 presents a set of peaks in the overspray area that are even higher than the main part of the trace. Independently of the traverse speed, IR sintering in combination with substrate heating in the printing process provided a continuous and smooth surface of the trace. In comparison to the AP+H sample (Fig. 17), the smoothing effect in the sintered samples is clearly visible. Agglomerates bound by liquid residues form a wavy surface topography. AFM in the phase image (Fig. 17c) confirmed a slack structure without the presence of solid material. On the contrary, single silver particles were detected in the IR1+H sample by high-resolution AFM topography scans, and are particularly visible in friction mode (Fig. 18).
Fig. 18. High-resolution AFM topography image (a) and friction mode (LF) image (b) of the IR1+H sample. Blue arrows indicate the boundary lines between sintered aggregates.
Samples were prepared without or with additional in-line heating of the substrate material to a temperature of 90 ˚C. The major problem of the process is the formation of spatter. Fine droplets with diameters below 1 µm are ejected from the stream and create an aerosol cloud that electrostatically embeds on the foil. An additional problem was the local open porosity that resulted from evaporation of the solvent. A gas formed inside the trace from the drying liquid exerted pressure, leaving open porosities resembling 'chimneys' in the central continuous part of the prints. The present study showed that overspray can be limited, while aerosol cloud spatter and open porosity are totally eliminated, by applying bed heating. The heat transferred to the nozzle increases the viscosity and density of the ink due to the accelerated evaporation of the solvent from the aerosol. As a consequence, the agglomerates with increased nanoparticle concentration prevent defect formation. Furthermore, the enhanced deposition of silver nanoparticles increases the thickness of the traces. All printed samples were sintered in the furnace at 230 ˚C for 60 min or 120 min, or by the IR lamp with a voltage of 5 V and a traverse velocity of 10 or 20 mm/min. The sintering parameters recommended by the ink manufacturer, i.e., furnace heating at 230 ˚C for 60 min, were insufficient, generating nano-porosity (identified by AFM) and high roughness. Increasing the sintering time to 120 min smoothed the surface of the trace by decreasing the porosity and the roughness as well. In the case of IR sintering, the material structure was regulated by the process traverse velocity; thus, samples sintered at the lower velocity showed the lowest nanoscale roughness, indicating the smoothest sample surface. The use of infrared light for sintering aerosol jet printed traces offers several advantages. It allows for rapid and localised heating, enabling precise control over the sintering process. Infrared sintering also reduces the thermal impact on the surrounding materials, minimising the risk of damage to the substrate. Additionally, the process can be easily integrated into existing manufacturing lines or combined with the printer, making it a practical solution for the large-scale production of electronic devices. The research confirmed that a properly planned AJP process improves the quality and reliability of the printed traces.
Table 4. Selected parameters of printed traces.
* SF – spatter formation, described as extensive overspray covering the substrate up to 3 mm on both sides of the trace.
Table 5. Nano-scale roughness of samples measured by AFM. | 2024-01-24T17:44:08.396Z | 2024-01-17T00:00:00.000 | {
"year": 2024,
"sha1": "9c9730dcfd8233e9fc08d054eb325cbf3e11b8c5",
"oa_license": "CCBY",
"oa_url": "https://ein.org.pl/pdf-180012-102567?filename=A%20novel%20approach%20to.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0ce2a08352cb9ae3322494f2a6664c7838a2b650",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
254200997 | pes2o/s2orc | v3-fos-license | A theoretical justification of the slip index concept in fretting analysis
Fretting in the partial-slip and gross-slip regimes under a constant normal load is considered. The tangential force–displacement relations for the forward and backward motions are described based on the generalized Cattaneo–Mindlin theory of tangential contact and Masing's hypothesis on modelling the force–displacement hysteretic loop. Besides the critical force and displacement parameters (characterizing the triggering of sliding), the model includes one dimensionless fitting parameter that tunes the tangential contact stiffness of the friction–contact interface. Explicit expressions are derived for the main tribological parameters of the fretting loop, including the slip index and the signal index. The presented phenomenological modelling approach has been applied to the analysis of two sets of experimental data taken from the literature. It has been shown that the experimentally observed simple relation of a rational type between the slip index and the slip ratio corresponds to the gross-slip asymptotics of the corresponding model-based predicted relation. The known quantitative criteria for the transition from the partial-slip regime to the gross-slip regime are expressed in terms of the stiffness parameter, and a novel geometric transition criterion is formulated.
Introduction
The phenomenon of fretting has been observed for a long time [1], and it is still the subject of active tribological research, both experimental [2] and theoretical [3]. Since fretting wear, fretting fatigue, and fretting corrosion are frequently encountered in frictional contacts subjected to prolonged tangential oscillations of small amplitude, the accumulated fretting damage may become a critically decisive factor for the long-term functionality of contact interfaces. An important example of serious consequences that may result from fretting wear is associated with flow-induced vibrations in the pressurized water reactor systems of nuclear power plants [4,5].
In addition to the stick regime of fretting, which is regarded as a non-dissipating regime [6], two main regimes of fretting wear are distinguished: partial slip and gross slip. Distinguishing between them from experimentally observed tribological characteristics of a contact interface, such as the variations of the tangential force and the relative displacement, remains one of the poorly understood aspects of the fretting process [7]. This is mainly because the regime transition occurs when relative motion is undertaken over the entire contact interface, which is usually hidden from observation in real engineering applications. Significant progress in the analysis of the fretting regime transition was achieved when Fouvry et al. [8] introduced certain transition criteria to quantify the boundary between partial and gross slip. In particular, based on the classical Cattaneo–Mindlin theory [9,10] of tangential contact (in a ball-on-flat contact configuration), the effects of partial slip and dissipated energy were highlighted and, in particular, the so-called slip and energy ratios were suggested as dimensionless transition criteria.
Varenberg et al. [11] have made another important contribution to our understanding of the transition from fretting to reciprocal sliding by introducing a similarity criterion, termed as the slip index, as a result of their dimensional analysis of the mechanics of fretting contact. Further, Varenberg et al. [12] have established a simple empirical relation between the slip ratio and the modified slip index that incorporates the friction coefficient.
In response to the need for monitoring fretting characteristics in real time, Kim et al. [13] introduced the so-called fretting signal index as a normalized phase difference between the friction force signal and the displacement signal, evaluated when one of them vanishes. However, to the best of the authors' knowledge, there are no published reports revealing the interdependence between the different transition criteria introduced so far.
In the present study, we develop a unified mathematical modelling framework, which incorporates as special cases the Cattaneo–Mindlin theory-based models used in Refs. [8,13]. To keep the analysis simple, we employ a one-free-parameter model for the tangential force loading of a contact–friction interface, which is based on Masing's hypothesis (see, e.g., [14]) for a hysteresis loop formed by the unloading and reloading curves in cyclic loading. In particular, the developed approach allows us to theoretically justify the experimentally established relation [12] between the slip index and the slip ratio. As an important result of the presented analysis, we formulate a simple model-free transition criterion, which is based on geometric properties of the slip ratio/slip index variation curve.
Masing hysteretic model for fretting contact
We consider a contact interface that may experience relative tangential motion under a constant normal load, N. Let F and x denote the tangential force and the corresponding relative displacement, respectively. Moreover, let F* and x* denote the critical force and displacement at incipient sliding starting from the position of rest, when the tangential load is gradually increasing. For quasi-static (non-accelerating) sliding, we assume F ≤ F*, and thus the tangential force F is balanced by the friction force F* = μN, where μ is the Coulomb coefficient of friction. The functional dependence F(x) of the tangential force F on the tangential displacement x in the case of initial loading is termed the backbone curve (see Fig. 1). We note that the derivative dF(x)/dx is called the incremental tangential stiffness, and its value at the zero point is equal to tan θ0, where θ0 is the angle of inclination of the tangent line to the backbone curve at the origin (see Fig. 1). The slope Sc = tan θ0 is usually called the (initial) interface contact stiffness. For frictional interfaces, Sc determines the maximum incremental tangential stiffness.
The Masing model for reciprocating quasi-static sliding between the contacting surfaces with the force amplitude F0 below the critical value, that is F0 ≤ F*, gives the following analytical expressions for the forward, F↑(x), and backward, F↓(x), force–displacement curves (see Fig. 2), which are based on a chosen expression F(x) for the backbone curve:

F↑(x) = −F(x0) + 2F((x + x0)/2),   (1)

F↓(x) = F(x0) − 2F((x0 − x)/2).   (2)

It is to be emphasized that the Masing model exploits the symmetry condition F↓(x) = −F↑(−x). Following Ref. [15], we assume that

F(x) = F*[1 − (1 − x/x*)^m],  0 ≤ x ≤ x*.   (3)

We note that the case m = 3/2 corresponds to the Cattaneo–Mindlin theory [9,10] (see also Ref. [8]). In its general form, Eq. (3) represents the model of the tangential reaction of a lap-type joint (we refer to Ref. [13] for details). In the present study, the exponent parameter m ≥ 1 is used as a fitting constant, along with the parameters x* and F*. If the displacement amplitude is not large, that is x0 ≤ x*, then the substitution of Eq. (3) into Eqs. (1) and (2) yields

F↑(x) = −F0 + 2F*[1 − (1 − (x + x0)/(2x*))^m],   (4)

F↓(x) = F0 − 2F*[1 − (1 − (x0 − x)/(2x*))^m],   (5)

where, in view of Eq. (3), the force and displacement amplitudes are related by

F0 = F*[1 − (1 − x0/x*)^m].   (6)

If the displacement amplitude is large, that is x0 ≥ x*, then the force amplitude F0 equals F*, and thus Eqs. (1) and (2) imply that

F↑(x) = −F* + 2F((x + x0)/2) for −x0 ≤ x ≤ 2x* − x0,  F↑(x) = F* for 2x* − x0 ≤ x ≤ x0,   (7)

and

F↓(x) = F* − 2F((x0 − x)/2) for x0 − 2x* ≤ x ≤ x0,  F↓(x) = −F* for −x0 ≤ x ≤ x0 − 2x*   (8)

(see Fig. 3). In the case of displacement-controlled loading, when the displacement variable x is specified, the value of the displacement amplitude x0 may take an arbitrary positive value. In force-controlled tangential loading, the force amplitude F0, of course, cannot exceed the critical value F*.
We also note that each branch curve, e.g., the forward curve $F_\uparrow(x)$, can be obtained from the backbone curve $F(x)$ by means of an affine transformation without shearing, namely a scaling by the factor 2 centered at the corresponding reversal point of the loop, which is in complete agreement with Eq. (1).
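For illustration, the following minimal Python sketch builds the two Masing branches from the backbone curve of Eq. (3); the function names and parameter values are ours, not the paper's, and the saturation of the backbone at $\pm F_*$ beyond incipient sliding is our reading of the plateaus in Eqs. (6) and (7).

```python
import numpy as np

def backbone(x, F_star, x_star, m):
    """Backbone curve of Eq. (3), extended by the symmetry condition
    F(-x) = -F(x) and saturated at +/- F_star beyond incipient sliding."""
    x = np.asarray(x, dtype=float)
    xs = np.clip(np.abs(x) / x_star, 0.0, 1.0)
    return np.sign(x) * F_star * (1.0 - (1.0 - xs) ** m)

def masing_branches(x, x0, F_star, x_star, m):
    """Masing rules, Eqs. (1) and (2): forward (reloading) and backward
    (unloading) branches scaled by a factor of 2 from the backbone curve."""
    F0 = backbone(x0, F_star, x_star, m)          # force amplitude, Eq. (5)
    F_fwd = -F0 + 2.0 * backbone((x + x0) / 2.0, F_star, x_star, m)
    F_bwd = F0 - 2.0 * backbone((x0 - x) / 2.0, F_star, x_star, m)
    return F_fwd, F_bwd

# Example: a gross-slip loop (x0 > x_star) for the Cattaneo-Mindlin case m = 3/2
x0, F_star, x_star, m = 1.5, 1.0, 1.0, 1.5        # assumed, for illustration only
x = np.linspace(-x0, x0, 401)
F_fwd, F_bwd = masing_branches(x, x0, F_star, x_star, m)
```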
Slip ratio
We recall [11] that the slip amplitude is defined as the amplitude of the relative displacement between the two positions where the tangential force vanishes. In other words, we consider the equations $F_\uparrow(x) = 0$ and $F_\downarrow(x) = 0$, whose roots mark the zero-force crossings of the two branches of the hysteresis loop (Eq. (9)). The so-called slip (or sliding [8]) ratio is then defined as the ratio of the slip amplitude $A_\mathrm{s}$, that is, the distance between these two crossings, to the peak-to-peak displacement $2x_0$. When the backbone curve is defined by Eq. (3), it is readily seen that the root of Eq. (9) is independent of $F_*$ (this conclusion also follows from a simple dimensional analysis) and thus depends only on the values of the model parameters $x_*$ and $m$ as well as on the value of the imposed displacement amplitude $x_0$. Hence, the slip ratio is a function of the dimensionless stiffness parameter $m$, which tunes the tangential contact stiffness of the friction-contact interface, and of the relative displacement amplitude, for which we introduce the auxiliary notation $\varepsilon = x_0/x_*$; in view of Eqs. (5) and (8), Eqs. (9) and (10) then yield an explicit expression $s = s(m, \varepsilon)$ (Eq. (11)).
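Continuing the sketch above (and assuming SciPy is available), the slip ratio can equally be evaluated numerically by locating the zero-force crossing of the backward branch; this mirrors, but is not taken from, the closed-form Eq. (11).

```python
from scipy.optimize import brentq

def slip_ratio(x0, F_star, x_star, m):
    """Slip ratio: distance between the zero-force crossings of the two
    branches over the peak-to-peak displacement 2*x0."""
    F0 = backbone(x0, F_star, x_star, m)
    # Backward branch F_bwd(x) = F0 - 2*F((x0 - x)/2) decreases from F0 to -F0
    f = lambda x: F0 - 2.0 * backbone((x0 - x) / 2.0, F_star, x_star, m)
    x_zero = brentq(f, -x0, x0)
    # By the loop symmetry, the forward branch vanishes at -x_zero
    return (x_zero - (-x_zero)) / (2.0 * x0)

print(slip_ratio(1.5, 1.0, 1.0, 1.5))  # gross slip, Cattaneo-Mindlin case m = 3/2
```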
Slip index
We recall that the notion of the slip index, $\delta$, was introduced [11] based on a dimensional analysis of the hysteretic friction loop by the equation $\delta = A_\mathrm{d} S_\mathrm{c}/N$, where $S_\mathrm{c}$ is the tangential stiffness of the contact interface, $A_\mathrm{d}$ is the displacement amplitude, and $N$ is the normal load (Eq. (12)). According to Ref. [11], the stiffness $S_\mathrm{c}$ is defined as the slope of the forward force-displacement curve $F_\uparrow(x)$ at the beginning of the forward motion (see Fig. 2). In view of Eqs. (1) and (7), we easily find that $S_\mathrm{c} = F'(0)$, and therefore Eq. (3) implies that $S_\mathrm{c} = m F_*/x_*$ (Eq. (13)). Thus, the substitution of Eq. (13) into Eq. (12) yields
$$\delta = \mu m \varepsilon, \quad (14)$$
where we have taken into account that $F_* = \mu N$. Thus, in the framework of the developed generalized Cattaneo-Mindlin theory-based model, the slip index is found to be proportional to the coefficient of friction $\mu$, the stiffness parameter $m$, and the relative displacement amplitude $x_0/x_*$ that we denoted above as $\varepsilon$.
Energy ratio
Following Ref. [8], we consider the ratio between the dissipated energy, $W_\mathrm{d}$, and the total energy, $W_\mathrm{t}$ (Eq. (16)). The total energy $W_\mathrm{t}$ is defined as the energy input, whereas the dissipated energy (per cycle) is given by the area of the hysteresis loop (Eq. (17)). If the displacement amplitude is not large, that is $x_0 \le x_*$, then the substitution of Eqs. (4) and (5) into Eq. (17) yields a closed-form partial-slip expression (Eq. (20)), where, in view of Eq. (3), the amplitudes are related by Eq. (5). In particular, when the imposed displacement amplitude coincides with the critical value of displacement, that is $x_0 = x_*$, the dissipated energy takes the value $W_\mathrm{d}^*$ (Eq. (21)). It is of interest to note that $W_\mathrm{d}^* = 0$ in the special case $m = 1$, which corresponds to the case of a linearly elastic tangential response (without dissipation), when Eq. (3) simplifies to $F(x) = F_* x/x_*$. It can be simply verified that in the case $m = 1$, Eq. (20) also yields $W_\mathrm{d} = 0$ for any $x_0 \le x_*$. If $x_0 > x_*$, then the substitution of Eqs. (7) and (8) into Eq. (17) implies a corresponding gross-slip expression (Eq. (22)), where $W_\mathrm{d}^*$ is given by Eq. (21). Thus, the energy ratio as a function of the relative displacement amplitude $\varepsilon = x_0/x_*$ can be evaluated by Eqs. (16) and (20) or (22) followed by simple algebra. However, when the derived formulas are applied in numerical calculations, it is much simpler to program the dissipation ratio $W_\mathrm{d}/W_\mathrm{t}$ in three simple steps: first, calculate the total energy $W_\mathrm{t}$; second, depending on the value of the relative displacement amplitude, calculate the dissipated energy $W_\mathrm{d}$ using one of Eqs. (20) or (22); and then take the ratio of $W_\mathrm{d}$ to $W_\mathrm{t}$ (see the sketch below).
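A minimal sketch of this recipe, continuing the functions above: rather than coding the closed-form Eqs. (20) and (22), which are not reproduced here, the loop area is integrated numerically; the total energy is taken as $W_\mathrm{t} = 4 F_0 x_0$, as the slip-regime evaluation in the Discussion suggests. Both choices are our assumptions, made for illustration only.

```python
import numpy as np

def energy_ratio(x0, F_star, x_star, m, n=2001):
    """Energy ratio W_d/W_t: hysteresis loop area over the energy input."""
    x = np.linspace(-x0, x0, n)
    F_fwd, F_bwd = masing_branches(x, x0, F_star, x_star, m)
    W_d = np.trapz(F_fwd - F_bwd, x)       # dissipated energy = loop area
    F0 = backbone(x0, F_star, x_star, m)
    W_t = 4.0 * F0 * x0                    # assumed energy input, W_t = 4*F0*x0
    return W_d / W_t

for eps in (0.5, 1.0, 2.0):                # partial slip, transition, gross slip
    print(eps, energy_ratio(eps, 1.0, 1.0, 1.5))
```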
Signal index
Regarding the periodic character of fretting oscillations and, per se, following Ref. [13], we introduce the phase $\varphi$ of the displacement signal by assuming that $x = x_0 \cos\varphi$ (Eq. (23)). It is clear (see Fig. 3) that, when the phase angle $\varphi$ (measured in radians) increases from zero to $\pi/2$, the tangential displacement $x$ decreases from the maximum value $x_0$ (the displacement amplitude) to zero.
Let $\varphi_1$ denote the phase corresponding to the instant at which the tangential force vanishes. In view of Eq. (23), the relations connecting $\varphi_1$ with the zero-force displacements of the loop branches hold true, and the signal index is evaluated from the phase lag thus obtained.
Transition criteria
Following Ref. [8], we consider quantitative characteristics of the transition between partial and gross slip behaviour at a fretting contact interface that complies with the Masing hypothesis. According to the assumed backbone curve (3), the transition between the partial slip and gross slip regimes is represented by the condition $x_0 = x_*$, that is $\varepsilon = 1$. The values taken at this transition point by the slip ratio, the energy ratio, and the signal index depend on the stiffness parameter $m$ (see Fig. 4), such that they tend to unity as $m$ tends to infinity. Hence, by generalizing the observed results, we can formulate the following general transition criterion: the transition from partial slip to gross slip is identified with the inflection point of any of the curves of the slip ratio, the energy ratio, or the signal index versus the relative displacement amplitude $\varepsilon$.
Gross-slip asymptotics
It is of interest to observe that the slip ratio $s$ and the energy ratio in the gross slip regime are given by simple rational expressions. Indeed, according to Eqs. (11) and (22), both quantities approach unity as $1 - O(1/\varepsilon)$ for large $\varepsilon$ (Eqs. (29) and (30)). By comparison with Eqs. (29) and (30), it becomes clear that the signal index approaches its limit value more slowly as $\varepsilon$ tends to infinity.
Variation of the slip ratio vs. the slip index
First, we note that, in view of Eqs. (10) and (14), the relative displacement amplitude can be eliminated as $\varepsilon = \delta/(\mu m)$, so that the slip ratio becomes a function of the slip index, $s = s(m, \delta/(\mu m))$ (Eq. (32)), where $\delta$ is the slip index.
It is of interest to observe that, based on a large number of experimental results, Varenberg et al. [12] empirically established a simple relation between the slip ratio and the reciprocal slip index, which is of the form of Eq. (32). By comparing Eq. (32) with Eq. (29), it becomes evident that this experimental law corresponds to the gross-slip asymptotics (see Eq. (29)). Figure 6 shows the result of fitting the experimental data for nano- and microscale fretting from Refs. [11,16], which were represented in Ref. [12] based on average friction coefficient values. It is readily seen that, while the simple approximation from Ref. [12] fits the experimental data well in the gross slip regime, the present model, Eq. (11), is capable of fitting the data in the partial slip regime as well.
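To echo the fitting shown in Fig. 6, the model curve of the slip ratio versus the slip index can be traced by combining Eq. (14) with the numerical slip ratio from the sketches above; the parameter values are assumed for illustration only.

```python
import numpy as np

mu, m = 0.9, 1.5                      # assumed friction coefficient and stiffness parameter
eps = np.linspace(0.05, 5.0, 100)     # relative displacement amplitudes
delta = mu * m * eps                  # slip index via Eq. (14)
s = np.array([slip_ratio(e, 1.0, 1.0, m) for e in eps])  # slip ratio (sketch above)
# The pairs (delta, s) trace the model curve corresponding to Eq. (32);
# for large delta they approach the gross-slip asymptote of Eq. (29).
```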
Discussion
As already mentioned above, the tangential contact model (see Eqs. (1)-(3)) reduces to the Cattaneo-Mindlin contact model in the special case $m = 3/2$, which corresponds to Hertzian normal contact. Of course, the adopted phenomenological approach does not provide expressions for $F_*$ and $x_*$ similar to those furnished by the Cattaneo-Mindlin theory of elastic tangential contact. We recall that the latter theory, which assumes the Hertzian contact geometry and isotropy of the material properties, was generalized to arbitrary axisymmetric and non-axisymmetric geometries [17,18] and to transverse isotropy [19]. In particular, if the initial gap between the contacting surfaces, which is measured in the undeformed state in the direction normal to the contact interface, is described by the monomial shape function $\Lambda r^\lambda$, where $r$ is the polar radius from the centre of the circular contact area, then the following relation holds true: $m = (\lambda + 1)/\lambda$. Moreover, the case $\lambda = 1$ and, thus, $m = 2$ corresponds to the conical contact geometry, and $m$ decreases to 1 as the shape parameter $\lambda$ increases to infinity and, thereby, the contacting surfaces become flatter (the limiting case $m = 1$ was noticed in Section 3.1). The effect of the tangential contact stiffness parameter $m$ on the initial part of the normalized force-displacement relation of the backbone curve is shown in Fig. 7, where the effect of different friction coefficients for partial slip and sliding (the static and kinetic coefficients of friction, $\mu_\mathrm{s}$ and $\mu_\mathrm{k}$, according to the terminology of [20]) is illustrated as well.
We underline that, compared to Eq. (3), the modified backbone formula (Eq. (33)) introduces only one additional dimensionless parameter, namely, the friction coefficient ratio $\mu_\mathrm{k}/\mu_\mathrm{s}$. Figure 8 shows the results of fitting the experimental data presented in Ref. [21] using the model predictions based on the backbone curves given by Eqs. (3) and (33) (curves 1 and 2, respectively; curve 1 is drawn based on the constant friction coefficient model, while curve 2 takes into account a drop of the friction coefficient from 0.9 to 0.85 in the transition to sliding). We note that the total energy in the slip regime was evaluated as $W_\mathrm{t} = 4 x_0 F_*$, where, according to Ref. [21], $F_*$ is the maximum tangential force associated with the displacement amplitude $x_0$. Evidently, the refined model based on Eq. (33) allows a better fit of the energy ratio results and, in particular, accommodates the observed jump in the energy ratio upon the transition from partial slip to sliding. It should be emphasized that the ratio $\mu_\mathrm{k}/\mu_\mathrm{s}$ was not used as a fitting variable; its value was evaluated from the friction coefficients $\mu_\mathrm{s} = 0.9$ and $\mu_\mathrm{k} = 0.85$ taken from the data presented in Ref. [21].
Yet another point that deserves a comment is the pronounced discrepancy between the model predictions and the experimental data for larger displacement amplitudes (see Fig. 8). Apparently, this can be explained by the effect of the system stiffness, that is, of the tangential accommodation of the testing device [8], since the dissipated energy is evaluated as $W_\mathrm{d} = W_\mathrm{t} - W_\mathrm{e}$, where $W_\mathrm{e}$ is the elastic energy. That is why, when elastic energy is also stored in the system outside the contact interface, the share of the dissipated energy $W_\mathrm{d}$ in the total energy $W_\mathrm{t}$ decreases. At the same time, the models presented above implicitly take into consideration only contact deformations at the contact interface.
Another important result of the presented analysis is that the empirical relation established in Ref. [12] between the slip ratio $s$ and the slip index $\delta$ represents the so-called gross-slip asymptotics. This, in particular, means that the empirical relation, if applied for determining the transition between partial and gross slip, may introduce a systematic error, as can a priori be expected from any asymptote, provided insufficient experimental data are available for the analysis and additional tools of analysis (like the newly introduced convexity/concavity geometric criterion) are not applied.
We recall that differential calculations were applied in Ref. [8] for evaluating the transition point from the maximum of the second derivative of a certain contact parameter (tangential force or dissipated energy) with respect to the contact displacement. It should be noted here that the second derivative of the dissipated energy is identically equal to zero in gross slip. However, as Fig. 8 shows, tribological data are very noisy, and therefore the application of numerical differentiation tools is rather problematic. On the contrary, the convexity/concavity transition criterion can be easily implemented (see the sketch below), and, moreover, in many cases the transition point may be approximately established by direct inspection of the plotted data. It should be emphasized that a number of transition criteria have been introduced since the seminal paper by Fouvry et al. [8], but, to the best of the authors' knowledge, the simple convexity/concavity transition criterion (which does not require specifying the backbone curve) has not been reported in the literature up to now.
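As an illustration of how easily the criterion can be implemented, the following sketch (our construction, not code from Ref. [8]) locates the convex-to-concave sign change of the second differences of a sampled slip-ratio curve, reusing the functions defined earlier.

```python
import numpy as np

def transition_by_inflection(eps, y):
    """Estimate the partial/gross slip transition as the inflection point,
    i.e., the first sign change of the discrete second derivative of y(eps)."""
    d2 = np.gradient(np.gradient(y, eps), eps)
    crossings = np.nonzero(np.diff(np.sign(d2)))[0]
    return eps[crossings[0]] if crossings.size else None

eps = np.linspace(0.1, 3.0, 60)
s = np.array([slip_ratio(e, 1.0, 1.0, 1.5) for e in eps])  # from the sketch above
print(transition_by_inflection(eps, s))   # expected to land near eps = 1
```

For noisy measured curves, a smoothing step before differencing would of course be needed.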
It should be noted here that the fretting signal index was introduced in Ref. [13] by assuming a harmonic variation of the tangential force, that is, a functional dependence of the form $F = F_0 \cos\psi$, where $\psi$ is the phase angle.
However, this approach effectively works only when $F_0 \le F_*$, when there is a one-to-one correspondence between the variables $F$ and $\psi$ in the base interval $[0, \pi]$. It should also be noted that, though Eq. (23) adopts the first-harmonic variation for the tangential displacement $x$, following Ref. [13], the dependence of $x$ on the phase angle can be generalized as $x = x_0 g(\varphi)$ with an arbitrary (including saw-like modulation) periodic function $g(\varphi)$. From the so-called microscopic point of view, the transition from rest to gross slip involves microslips between the asperities forming the rough surfaces. However, the related questions about the variation of the coefficient of friction at the contact interface lie outside the scope of the present study, as they refer to the microscale modelling approach. In the present study, on the contrary, we necessarily take the so-called macroscale point of view, as the slip index is measured from the macroscale characteristics of fretting loops.
Finally, we observe that the simple mathematical modelling framework presented above can be further generalized to account for the variation of the backbone curve due to wear. Recently, the phenomenon of non-monotonic behaviour of the dissipated energy in the partial-slip regime of fretting wear was highlighted in Refs. [22,23]. In this way, the stiffness parameter $m$ (which is shown to be dependent on the contact geometry) is likely to become dependent on the number of fretting cycles due to contact geometry adaptation. Another possible generalization concerns the application of artificial neural networks (ANNs) (see, e.g., Ref. [24]) for a realistic description of the backbone curve based on experimentally observed data for the tangential force-displacement relation in the fretting loop.
Conclusions
In the present study, a unified mathematical modelling approach for the analysis of tangential force-displacement hysteretic loops in fretting has been developed based on Masing's hypothesis about the scaling of the forward and backward force-displacement curves from the backbone curve. By adopting a one-free-parameter generalized Cattaneo-Mindlin contact model of frictional tangential contact loading, explicit relations for the main tribological parameters of the fretting loop (the slip and energy ratios among others) have been derived. As a result, novel transition criteria, which are parameterized by the interface stiffness parameter, have been introduced, including the convexity/concavity geometric criterion, which is shown to be a model-free transition criterion. | 2022-12-04T16:53:54.475Z | 2022-12-02T00:00:00.000 | {
"year": 2022,
"sha1": "b5ae9e0c7235cef30aa529c2330728f17d0252d5",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40544-022-0662-1.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "bf8c1daa22d8a06d97a57257a948fe060f03ba8f",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
866133 | pes2o/s2orc | v3-fos-license | Principals Projections on the Malaysian Secondary School Future Curriculum
The study of the future involves a time span for observing future alternatives and identifying the events most likely to occur, so as to assist policy-makers and curriculum-designers in decision-making. Longstreet and Shane (1993), however, emphasized that future planning does not mean changing what we already possess; rather, it focuses on future probabilities and their impacts on better future developments. The first basis, according to Saedah Siraj (2008a), is that the future is a changing phenomenon compared to the present day. The second is that humans create something today and in the future with what is planned, future planning is arranged based on values and beliefs, and the future begins from the present moment. Meanwhile, curriculum, as clarified by Saedah Siraj (2001), is the planning or designing of an education program. In this regard, the principals' projections on the types of future curriculums and curriculum contents at secondary schools in Malaysia are the central discussion of this article. This study, like the earlier studies of Saedah Siraj and Mohd Paris Saleh (2003) and Saedah Siraj and Faridah Abdullah (2005), involved the planning/designing of a future education program, or what Saedah Siraj (2008a) terms the Future Curriculum. In general, the goal of the study is to attain consensus among principals on their projections of the types of future curriculums and curriculum contents at secondary schools in Malaysia; the study findings are also discussed.
Introduction
The study of the future involves a time span for observing future alternatives and identifying the events most likely to occur, so as to assist policy-makers and curriculum-designers in decision-making. Amara and Salancik (1971-1972) asserted that futures research is any activity that increases understanding of future outcomes as the impacts of present-day developments and present-day preferences. Longstreet and Shane (1993), however, emphasized that future planning does not mean changing what we already possess; rather, it focuses on future probabilities and their impacts on better future developments. The first basis, according to Saedah Siraj (2008a), is that the future is a changing phenomenon compared to the present day. The second is that humans create something today and in the future with what is planned, future planning is arranged based on values and beliefs, and the future begins from the present moment.
All the above discussions reflect the goals of futures studies (including studies of future curriculum and future education), among which are: to provide possible projections and future choices that assist policy-makers and curriculum-designers in planning the desirable future, as underlined by Ayers (1969), and to assist policy-makers and curriculum-designers in finalizing their decisions: first, by identifying the best future choices; and second, by identifying the events most likely to occur in the future.
Meanwhile, curriculum, according to Saedah Siraj (2001), is the planning or designing of an education program. In this regard, the principals' projections on the types of future curriculums and curriculum contents at secondary schools in Malaysia are the central discussion of this article. This study, like the earlier studies of Saedah Siraj and Mohd Paris Saleh (2003) and Saedah Siraj and Faridah Abdullah (2005), involved the planning/designing of a future education program, or what Saedah Siraj (2008a) terms the Future Curriculum.
Most studies of the future have been conducted by Western scholars, while studies of future curriculum and curriculum content have received little attention in Malaysia. When only minuscule research is conducted on this subject, few new domestic inputs become available, few new suggestions or ideas emerge, and as a result little change is made, or can be made, in this area. In short, obstacles remain to applying curriculum changes across all learning arenas nationwide, whereas in the West, for instance in the United States of America and Canada, the improvement and application of virtual secondary school curriculum is far more advanced than in Malaysia (Virtual High School, 2008; Wikipedia, 2008). As a consequence, the nation is left ten years behind the US and Canada in the development and application of today's virtual curriculum.
The goal of this article is to discuss a study that attempts to attain consensus among principals on their projections of the types of future curriculums and curriculum contents at secondary schools in Malaysia. For this, the authors attempt to answer the following research question: What are the principals' projections on the types of future curriculums and curriculum contents at secondary schools in Malaysia?
Among the contributions of this study are the following. First, it reveals the future scenario of education at the secondary school level in Malaysia. Second, the findings can be used by policy-makers and curriculum-designers to decide the future education direction of a nation or institution. Third, the findings can serve as a counterweight for declining any policy or curriculum implementation that would damage, or fail to benefit, the future generation and the nation. Fourth, policy-makers and curriculum-designers would be able to analyze, firstly, the present needs of today's secondary school students and provide them with immediate solutions, and secondly, to project today's secondary school students' future needs and provide them with possible future solutions, including solutions to national and international situations, challenges, and issues, so as to equip them with the best ways and approaches to confront such challenging future phenomena, particularly future jobless crises, future clashes between fields of interest and career selection, and future confrontations between new education goals and life trends. In this regard, Saedah Siraj (2005) clarified that when the basis of a curriculum does not advance with future projections based on futurists' views, the defects of education become obvious. Edwards (2008), who extended Wilber's three notions to four (Figure 2), clarified that there is a deficiency in Wilber's notions concerning the process of knowledge accumulation and certification, namely an interpretive/elucidative, reflective, assimilation stage that follows on from research experience, observation, and data collection.
Model of Future Studies
Accordingly, the interpretive strand/aspect of Figure 2 (after passing through Slaughter's (2008) observations, the ICKM of Figure 2 brings forth the Knowledge of Creation in Future Studies, or KCFS, of Figure 3) invariably incorporates a subjective, collective, or cultural fragment, which contributes a significant constituent to the Knowledge Cycle by clarifying how data are filtered, structured, modified, and interpreted before being articulated in a socially provable shape: 1) Introduction and literature review (lower right quadrant); 2) Method (upper right); 3) Results (upper left); 4) Discussion (lower left); and 5) Conclusion (returning to lower right). Slaughter (2008), who observed KCFS via four different approaches of Future Studies (Forecasting, the Delphi Technique, Scenario, and Causal Layered Analysis), affirmed that it is appropriate to use both KCFS and ICKM in today's futures research. Accordingly, ICKM is applied in this study.
Framework of the study
Based on earlier studies by Hiltz and Turoff (1993), Saedah Siraj and Mohd Paris Saleh (2003), and Saedah Siraj and Faridah Abdullah (2005), this study applied the Delphi Technique to attain consensus on the future curriculum and curriculum content at secondary schools in Malaysia.
There are two central features of the Delphi process: first, each expert is granted the chance to evaluate the other experts' views on the same topic; and second, each expert presents only her/his personal opinion (Saedah Siraj, 2008a).
Beginning in the 1950s at the RAND Corporation, Santa Monica, California, the Delphi Technique was exercised to project future US security requirements (Saedah Siraj, 2008a), while today it is applied in various fields; for instance, in 1971 it was applied in Education to gather the views of a panel of experts without gathering them in one place (Cyphert & Grant, 1971). Linstone and Turoff (1975) clarified that when the Delphi Technique is applied in a situational study, where time and cost factors mean that some experts are unable to sit together, it is advisable, and even better, for the researcher to acquire the experts' subjective views. They also emphasized that this clarification has nothing to do with the accuracy of applying an analytical approach.
Normally there are four Delphi Rounds, though in some cases there may be fewer or more; the Delphi process is discontinued once a reasonable consensus has been attained and the required information has been obtained (Delbecq, Van de Ven, & Gustafson, 1975).
Based on the above discussion and the study goals outlined earlier, the researchers identified the Delphi Technique as the most suitable approach for attaining the experts' consensus on the projection of curriculum at secondary schools in Malaysia.
Panel of experts
The panel of experts for this study was not selected randomly; rather, it was selected according to specific criteria. For the purposes of this research, an individual identified as an expert should comply with the following criteria: first, an expert who has acquired knowledge, experience, and training in the implementation of school curriculum; second, an expert who has held the post of school principal and has more than ten years of experience in school management; and third, an expert who is willing to take part in three Delphi Rounds. Based on these criteria, ten experts were identified and assigned to the panel: two each were principals of Smart Schools, Premier Schools, and Boarding Schools, while the other four were principals of national secondary schools in Malaysia.
Data collection procedure
Data collection was carried out over three Delphi Rounds, and the details of each round are as follows:
Delphi Round 1
To acquire information on the types of future curriculum and curriculum content at secondary schools in Malaysia, the respondents were interviewed in Delphi Round 1. The data from these interviews were then used as the basis for constructing the survey questions of the following Delphi Rounds.
Delphi Round 2
A 5-point Likert scale was used to decide on the projected years in which each type of future curriculum will be applied at secondary schools in Malaysia, as well as to attain the experts' consensus on when each type of future curriculum content will be applied.
Delphi Round 3
The questionnaires of Delphi Round 3 are similar to those of Round 2. Both the median and the IQR are attached to show the distribution of the experts' views on each item. In this way, each expert is granted the chance to evaluate the other experts' views from Delphi Round 2 and may reconsider her/his answers in the next round. The expert's answer in this round should be one of the following: first, unchanged from the previous answer, if that answer lies inside the IQR; second, a revised answer, if the previous answer lies outside the IQR; or third, the previous answer retained despite lying outside the IQR, accompanied by reasons why the answer remains the same.
The purpose of this round is to narrow the gap between the experts' differing views and, indirectly, to move toward consensus; the decision rule is sketched below.
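A minimal sketch of this decision rule (illustrative only; the function and variable names are ours): given the panel's Round 2 ratings for an item, flag which previous answers lie outside the interquartile range and must therefore be revised or justified.

```python
import statistics

def round3_flags(ratings):
    """Flag ratings outside the interquartile range [Q1, Q3] of the panel."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return [(r, "keep" if q1 <= r <= q3 else "revise or justify") for r in ratings]

print(round3_flags([5, 4, 4, 5, 3, 4, 5, 4, 2, 4]))  # a hypothetical panel of ten
```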
Data analysis procedure
The data from the Round 1 interviews were analyzed thematically, that is, according to specific themes. In this study, two themes were identified: first, the types of curriculum; and second, the types of curriculum contents.
After the feedback from the Round 2 and Round 3 questionnaires was received, the data were analyzed based on the mean, median, and IQR. The projection of the occurrence years (the experts' levels of agreement) is based on the following median scores: I) 1-5 years = 4.5-5; II) 6-10 years = 3.5-4.49; III) 11-15 years = 2.5-3.49; IV) 16-20 years = 1.5-2.49; and V) after 20 years = 1-1.49. An item's median of 4.5 to 5 means that the type of curriculum is projected to be applied at secondary schools in Malaysia within 1 to 5 years; a median of 3.5 to 4.49 means application within the next 6 to 10 years; a median of 2.5 to 3.49 means within the next 11 to 15 years; a median of 1.5 to 2.49 means within the next 16 to 20 years; and a median of 1 to 1.49 means after 20 years. The same scores are used to select the years of application of the curriculum contents.
Item consensus
The IQR is used to characterize the spread of the experts' ratings for each item, which leads to the interpretation of the consensus on that item. The levels of consensus are fixed based on the IQR as follows: I) high consensus = IQR of 0 to 1; II) moderate consensus = IQR of 1.01 to 1.99; and III) no consensus = IQR of 2.0 and above.
Item arrangement
The data were analyzed to arrange the items according to consensus attainment and years of projection. The arrangement is based on each item's median score on the 5-point Likert scale: an item is ranked highest when its median score is 5 and lowest when its median score is 1. Thus, it should be noted that in the analyzed data the items are not arranged in their original numerical order but according to their median scores.
Statistical analysis
Measures of central tendency are used in the statistical analysis of this study. The feedback from the Delphi Round 2 and 3 questionnaires was analyzed by computing the median and the IQR for each item. According to Martino (1972), the median is the most appropriate statistic for representing the group view while still reflecting each expert's particular view. Likewise, the IQR is recognized as more appropriate than the mean for showing the relationship between each expert and each item, since it captures the spread of the experts' views on each item.
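The whole scoring scheme described above can be summarized in a short sketch (ours, for illustration; the thresholds are those defined in the preceding subsections):

```python
import statistics

def classify_item(ratings):
    """Map an item's ratings to projected years (via the median) and to a
    consensus level (via the IQR = Q3 - Q1)."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    med = statistics.median(ratings)
    iqr = q3 - q1
    if med >= 4.5:   years = "1-5 years"
    elif med >= 3.5: years = "6-10 years"
    elif med >= 2.5: years = "11-15 years"
    elif med >= 1.5: years = "16-20 years"
    else:            years = "after 20 years"
    if iqr <= 1.0:   consensus = "high consensus"
    elif iqr < 2.0:  consensus = "moderate consensus"
    else:            consensus = "no consensus"
    return med, iqr, years, consensus

print(classify_item([5, 4, 4, 5, 3, 4, 5, 4, 2, 4]))
```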
Data analysis
The data were analyzed using a qualitative approach for Delphi Round 1 and a quantitative approach for Delphi Rounds 2 and 3. This analysis shows the principals' consensus on the projection of the types of future curriculums and curriculum contents at secondary schools in Malaysia, and it is used to answer the following research questions: first, what are the principals' projections on the types of future curriculums at secondary schools in Malaysia? And second, what are the principals' projections on the types of future curriculum contents at secondary schools in Malaysia?
The data discussed below comprise the responses of ten principals: two each were principals of Smart Schools, Premier Schools, and Boarding Schools, while the other four were principals of national secondary schools in Malaysia.
The data were analyzed using measures of central tendency: the median and the IQR.
Analysis of Delphi Round 1
All panel experts were interviewed in Delphi Round 1 to obtain their views on the projections of the types of future curriculums and curriculum contents to be applied at secondary schools in Malaysia. The researchers analyzed the interview data based on the following themes: first, the projections of the types of future curriculums at secondary schools in Malaysia; and second, the projections of the types of future curriculum contents at secondary schools in Malaysia.
The feedback from the Delphi Round 1 interviews, which was analyzed in preparation for Delphi Round 2, shows that the principals projected ten types of future curriculums to be applied at secondary schools in Malaysia. These types of future curriculums are divided into three categories: Science and Technology, Skill, and Format.
For the types of curriculum contents, the principals projected that 18 types of future curriculum contents will be applied at secondary schools in Malaysia. These are divided into five categories: Science and Technology, Skill, Humanity, Value, and Language. A summary of the Delphi Round 1 data analysis is listed below. The principals projected ten types of future curriculums to be applied at secondary schools in Malaysia, divided into three categories: Science and Technology (4 types), Skill (4 types), and Format (2 types).
Analysis of the projections of the types of future curriculum contents at secondary schools in Malaysia
Science and technology
1. Curriculum content containing technology education.
2. Curriculum content containing comprehension and computer system application (design and invention).
3. Curriculum content containing information technology.
4. Curriculum content containing more effective software applications, including tutoring software.
5. Curriculum content containing sciences, mathematics, and technology.
6. Curriculum content containing alternative energy.
7. Curriculum content containing agricultural biotech.
Skill
8. Critical and creative thinking in planning the future skills.
9. Info-search skills.
10. Future jobs demand skills.
11. Problem solving skills.
12. Learning management skills.
13. Effective communicational skills.
14. Linked to student's interests and skills.
Humanity
15. Less emphasis on religious education and moral.
16. In future, the field of arts and humanities will get less attention.
Value
17. A more collaborative and interactive learning student.
Language
18. English language is customized in all subjects.
The results of the interviews with the ten principals show that they projected 18 types of future curriculum contents to be applied at secondary schools in Malaysia, of which 7 items each are categorized under Science and Technology and Skill, 2 items under Humanity, and one each under Value and Language.
In Delphi Round 2, the data were analyzed using measures of central tendency, the median and the IQR, the latter being used for each item to find the level of consensus among the panel of experts.
4.2.1 Analysis of the projection of the types of future curriculum to be applied at secondary schools in Malaysia
Table 1 shows the principals' projections on the types of future curriculums to be applied at secondary schools in Malaysia.
Analysis of the projection of the types of future curriculum contents to be applied at secondary schools in Malaysia
Table 2 shows the principals' projections on the types of future curriculum contents to be applied at secondary schools in Malaysia.
The overall summary of the Delphi Round 2 data analysis shows that only one item did not attain any consensus, which indicates that there were no major differences of view among the experts on most items.
To confirm these findings, the questionnaires, together with the summary of the Delphi Round 2 data analysis, were circulated again among the panel of experts.
Analysis of Delphi Round 3
Questionnaires similar to those of Delphi Round 2 were circulated to the panel of experts. Once the data had been analyzed, the questionnaires of Delphi Round 3, together with the median and IQR analysis as well as all the experts' previous answers, were circulated again to each expert. In this round, each expert was given the opportunity to reconsider her/his answers: either to remain consistent with the previous ones or to substitute other answers. Those who decided not to change were requested to attach their reasons.
The main goal of this round is to attain the highest consensus among the experts. In this round, the data are analyzed based on the median and the IQR. All data analysis tables for this round are shown later, and the findings are used to answer the research questions.
4.3.1 Analysis of the projection of the types of future curriculums to be applied at secondary schools in Malaysia
What are the principals' projections on the types of future curriculums to be applied at secondary schools in Malaysia? To answer this, the analysis is divided into three parts: Science and Technology, Skill, and Format. The analysis was conducted over three Delphi Rounds, and all answers are depicted in Tables 3, 4, and 5. Table 3 shows the principals' projections on the occurrence years of the types of future science and technology curriculums at secondary schools in Malaysia.
The principals' projections on the types of future skill curriculum to be applied at secondary schools in Malaysia are depicted in Table 4.
The principals' projections on the types of future format curriculums to be applied at secondary schools in Malaysia are depicted in Table 5.
Analysis of the projection of the types of future curriculum contents to be applied at secondary schools in Malaysia
What are the principals' projections on the types of future curriculum contents to be applied at secondary schools in Malaysia? To answer this second research question, the analysis, which was also conducted over three Delphi Rounds, is divided into five parts: Science and Technology, Skill, Humanity, Value, and Language. The answers to the above research question are depicted in Tables 6, 7, 8, 9, and 10.
Table 6 shows the principals' projections on the occurrence years of future science and technology curriculum contents at secondary schools in Malaysia.
The principals' projections on the types of future skill curriculum contents at secondary schools in Malaysia are depicted in Table 7.
Table 8 depicts the principals' projections on the types of future humanity curriculum contents at secondary schools in Malaysia.
Table 9 depicts the principals' projection on the types of future value curriculum contents at secondary schools in Malaysia.
Table 10 depicts the principals' projection on the types of future language curriculum contents at secondary schools in Malaysia.
Conclusion
The following is a summary of the study findings:
Types of curriculum
Consensus was attained among the principals on all ten of the following types of future curriculum to be applied at secondary schools in Malaysia: additional interdisciplinarity in the subjects of sciences, mathematics, and technology; education technology-based curriculum; agriculture and biotech curriculum; alternative energy curriculum; the concept of a future communication system; problem solving-based curriculum; future planning competency-based curriculum; student's online interests and competency-based curriculum; school-based curriculum; and home-schooling curriculum.
Sciences and technology curriculum
Consensus was attained among the principals on four types of curriculum under the Science and Technology category to be applied in future at secondary schools in Malaysia, namely: education technology-based curriculum; agriculture and biotech curriculum; alternative energy curriculum; and additional interdisciplinarity in the subjects of sciences, mathematics, and technology.
Skill curriculum
Consensus was attained among the principals on four types of curriculum under the Skill category to be applied in future at secondary schools in Malaysia, namely: future planning competency-based curriculum; student's online interests and competency-based curriculum; problem solving-based curriculum; and the concept of a future communication system.
Format curriculum
Consensus was attained among the principals on two types of curriculum under the Format category to be applied in future at secondary schools in Malaysia, namely: school-based curriculum and home-schooling curriculum.
The first, second, third, and fourth subtopics above answer the first research question: What are the principals' projections on the types of future curriculum to be applied at secondary schools in Malaysia?
Types of curriculum contents
Consensus was attained among the principals on the following 17 types of future curriculum contents to be applied at secondary schools in Malaysia: curriculum content containing technology education; curriculum content containing comprehension and computer system application (design and invention); curriculum content containing information technology; curriculum content containing more effective software applications, including tutoring software; curriculum content containing sciences, mathematics, and technology; curriculum content containing alternative energy; curriculum content containing agricultural biotech; critical and creative thinking in planning the future skills; info-search skills; future jobs demand skills; problem solving skills; learning management skills; effective communicational skills; linked to student's interests and skills; in future, human sciences and arts will attain less attention; a more collaborative and interactive learning student; and English language customized in all subjects. However, only one item (less emphasis on religious education and moral) among the types of future curriculum contents failed to attain consensus among the principals.
Sciences and technology curriculum content
Consensus was attained among the principals that three types of curriculum content under the Science and Technology category can be applied at secondary schools in Malaysia within 1 to 5 years from today, namely: curriculum content containing information technology; curriculum content containing more effective software applications, including tutoring software; and curriculum content containing sciences, mathematics, and technology. Consensus was also attained on the other four types under this category, namely: curriculum content containing technology education; curriculum content containing comprehension and computer system application (design and invention); curriculum content containing alternative energy; and curriculum content containing agricultural biotech, which can only be applied in the next 6 to 10 years.
Skill curriculum content
Consensus was attained among the principals that six types of curriculum content under the Skill category can be applied at secondary schools in Malaysia within 1 to 5 years from today, namely: critical and creative thinking in planning the future skills; info-search skills; future jobs demand skills; problem solving skills; learning management skills; and effective communicational skills, while consensus was attained that one type under this category, namely content linked to student's interests and skills, can only be applied in the next 6 to 10 years.
Humanity curriculum content
There was no consensus among the principals on one type of curriculum content under the Humanity category, namely, less emphasis on religious education and moral. This shows that emphasizing religious education and moral remains vital in the future humanity curriculum content of Malaysian secondary schools, although, as viewed in the principals' responses, such a de-emphasis might happen in the next 16-20 years. However, consensus was attained among the principals on a related statement about Malaysian secondary schools' curriculum content, namely, that the field of arts and humanities will be given less attention in future, not immediately but after the next 20 years.
Value curriculum content
Consensus was attained among the principals that the type of curriculum content under the Value category, namely, a more collaborative and interactive learning student, can be applied at secondary schools in Malaysia within 1 to 5 years from today. This also shows that emphasizing values and morals is essential in the future value curriculum content of Malaysian secondary schools.
Language curriculum content
Consensus was attained among the principals that the type of curriculum content under the Language category, namely, English language customized in all subjects, can be applied at secondary schools in Malaysia, but only in 11 to 15 years from today. These late implementation years may be related to the current shortage of teachers proficient and skilled in English, particularly for the subjects of science and mathematics, and even for the English subject itself.
The discussions of the fifth to tenth subtopics above answer the second research question: What are the principals' projections on the types of future curriculum contents to be applied at secondary schools in Malaysia?
What should the Malaysian government (the Ministry of Education Malaysia, or MEM) do without delay in facing the possible implementation of the four types of Science and Technology curriculum at secondary schools in Malaysia, namely: education technology-based curriculum and additional interdisciplinarity in the subjects of sciences, mathematics, and technology (to be applied in 1 to 5 years from today); agriculture and biotech curriculum (to be applied in the next 6 to 10 years); and alternative energy curriculum (to be applied in the next 11 to 15 years)? To answer this, the authors offer the following suggestions to the MEM. With the goal of having sufficient skilled teachers at all schools nationwide, particularly in education technology, sciences, mathematics, agriculture, biotech, and alternative energy, the MEM should collaborate with local and foreign universities to conduct special training or special higher-studies programs on the subjects concerned for trainee teachers and even in-service teachers. Those who undergo these programs should be offered government scholarships, the periods of training or study should be recognized as in-service, and merit should be considered in promotion exercises.
Moreover, the above four projected types of Science and Technology curriculum necessitate that the MEM, firstly, shape a new education policy for the future, since education in future will most probably be more challenging than today, particularly virtual/wireless/mobile education; and secondly, immediately set up an initial National ICT Curriculum Content Group (nictCCg). Its main function would be to prepare and develop a standard national ICT curriculum content, and its members should include experts in content and curriculum content, software designers, and expert teachers (in Malay, guru pakar) of the subjects concerned.
What should the MEM do straight away in facing the possible implementation of the four types of Skill curriculum, namely: future planning competency-based curriculum, the concept of a future communication system, and problem solving-based curriculum (all three to be applied in the next 6 to 10 years), and student's online interests and competency-based curriculum (to be applied in the next 11 to 15 years)? For these, the MEM is recommended to extend the offers discussed earlier to trainee teachers and in-service teachers, with the stipulation that those selected pursue higher-degree studies in one of these fields: Future Studies; Future Communication Systems; or Curriculum (competency-based and problem solving-based).
What should the MEM do concurrently in facing the possible implementation of the two types of Format curriculum, namely school-based curriculum and home-schooling curriculum, in the next 11 to 15 years? The former requires that school principals and assistant principals acquire, at least at the school level, adequate curriculum knowledge as well as experience in curriculum implementation, while the latter requires a dedicated Local Area Network (LAN), national/state/district/school-regulated servers, education software designers, cheaper telecommunication rates, and even lower costs for mobile/online appliances (Saedah Siraj, 2003, 2004). Certainly, home-schooling offers more advantages to Special Education students as well as to those with phobias of the pencil, the school, the classroom, the teacher, or even naughty friends. Moreover, home-schooling via mobile learning is inexpensive compared with current traditional schooling, which yearly involves colossal sums in school-building construction worldwide, excluding the further enormous amounts spent on transportation, staff salaries, books, tuition and school fees, food and lodging (for those who live in hostels or rented homes), and maintenance (Saedah Siraj, 2003, 2004).
Among the most interesting of the research findings are the following skills: critical and creative thinking in planning the future skills; info-search skills; future jobs demand skills; problem solving skills; learning management skills; effective communicational skills; and skills linked to student's interests; and, in the aspect of curriculum: the future communication system; future planning competency-based curriculum; student's online interests and competency-based curriculum; and problem solving-based curriculum. Most probably these skills and types of curriculum will become future mass attractions. Certainly, training patterns of these kinds are indisputably required by trainee teachers as well as in-service teachers. The current Malaysian teacher education curriculum should be restructured in line with the study findings so that it is compatible with present and future worldwide multifarious advanced development. Meanwhile, the MEM should also offer new fields (such as future communication systems and future planning) in training trainee teachers, as well as improve learning infrastructure and school facilities, including bringing the learning place into compliance with future international education standards, such as operating mobile, wireless, or virtual teaching-learning environments.
One of the most important factors is the student. As discussed above, all the new environments, new fields such as info-search competency and the future communication system, and new learning styles such as collaborative and interactive learning will certainly have positive and negative impacts on students. The most practical long-range and even short-range solution, as the study findings also verify, is a curriculum that emphasizes religious education and moral as one shape of future necessity. Significantly, this study succeeded in identifying the future probabilities most likely to occur; according to Saedah Siraj (2008a), it is true that humans create something today and in the future with what is planned, that future planning is arranged based on values and beliefs, and that, certainly, the future begins from today.
All the above discussions reflect the goals of futures studies, including this study on future curriculum, in which the authors provide possible expert projections and future choices on the types of curriculums and curriculum contents that can be applied at secondary schools in Malaysia, in a way that assists policy-makers and curriculum-designers in planning a better future for our children. Table 1 shows the summary of the Delphi Round 2 data analysis on the types of future curriculum to be applied at secondary schools in Malaysia, where all items attained consensus among the panel of experts. The median score of each type of future curriculum is either 4 or 5, and the IQR for all ten types is 1, which means that consensus (the second highest level) was attained among the principals on all of the following types of future curriculum to be applied at secondary schools in Malaysia: additional interdisciplinarity in the subjects of sciences, mathematics, and technology; education technology-based curriculum; agriculture and biotech curriculum; alternative energy curriculum; the concept of a future communication system; problem solving-based curriculum; future planning competency-based curriculum; student's online interests and competency-based curriculum; school-based curriculum; and home-schooling curriculum. Note. * = IQR = Q3-Q1.
Table 2 shows the summary of the Delphi Round 2 data analysis on the projection of the types of future curriculum contents to be applied at secondary schools in Malaysia. Item 1 (curriculum content containing technology education) attained the highest consensus (IQR of 0) among the panel of experts, and only one item (item 15: less emphasis on religious education and moral) failed to attain consensus, while the IQR score of the remaining 16 items is 1. Hence, the types of future curriculum contents to be applied at secondary schools in Malaysia are as follows: curriculum content containing technology education; curriculum content containing comprehension and computer system application (design and invention); curriculum content containing information technology; curriculum content containing more effective software applications, including tutoring software; curriculum content containing sciences, mathematics, and technology; curriculum content containing alternative energy; curriculum content containing agricultural biotech; critical and creative thinking in planning the future skills; info-search skills; future jobs demand skills; problem solving skills; learning management skills; effective communicational skills; linked to student's interests and skills; in future, the field of arts and humanities will get less attention; a more collaborative and interactive learning student; and English language customized in all subjects. Note. * = number is based on the results of Delphi Round 1.
Table 3 shows that the median for items 1 and 2 is 5, which means these types of future Science and Technology curriculum are projected to be applied at secondary schools in Malaysia within 1 to 5 years, whereas item 7 (agriculture and biotech curriculum) will be applied within the next 6 to 10 years. Item 9 (alternative energy curriculum) is the last to be applied, in the next 11 to 15 years. The curriculum types of item 1 (additional interdisciplinarity in the subjects of science, mathematics, and technology) and item 2 (education technology-based curriculum) attained the highest consensus, with an IQR of 0, meaning there were no differences of view among the experts. Note. * = number is based on the results of Delphi Round 1.
Table 4 shows that all items attained high consensus, with IQR scores of either 0 or 1. Item 5 (the concept of a future communication system) attained the highest consensus, with an IQR score of 0; thus, there were no differences of view among the experts. The median score is 4 for items 5, 6, and 7. Correspondingly, these curriculums (item 5: the concept of a future communication system; item 6: problem solving-based curriculum; and item 7: future planning competency-based curriculum) will be applied within 6 to 10 years, while item 8 (student's online interests and competency-based curriculum) will be applied at secondary schools in Malaysia in 11 to 15 years from now. Overall, all items attained the experts' consensus. Based on Table 5, the median score for items 9 (school-based curriculum) and 10 (home-schooling curriculum) is 3, while the IQR is 1. This demonstrates that both formats attained consensus among the experts, and correspondingly both future formats will be applied at secondary schools in Malaysia in 11 to 15 years from today. Table 6 shows that the median for items 3, 4, and 5 is 4.5 to 5, the highest median scores in the table, which also demonstrates that all experts attained consensus. Hence, these curriculum contents (item 3: curriculum content containing information technology; item 4: curriculum content containing more effective software applications, including tutoring software; and item 5: curriculum content containing sciences, mathematics, and technology) can be applied at secondary schools in Malaysia in 1 to 5 years from today.
The median for items 1, 2, 6, and 7 is 4, which means these curriculum contents (item 1: curriculum content containing technology education; item 2: curriculum content containing comprehension and computer system application (design and invention); item 6: curriculum content containing alternative energy; and item 7: curriculum content containing agricultural biotech) can only be applied at secondary schools in Malaysia in the next 6 to 10 years, while three curriculum contents attained the highest consensus, each with an IQR score of 0: curriculum content containing more effective software applications, including tutoring software (item 4); curriculum content containing sciences, mathematics, and technology (item 5); and curriculum content containing technology education (item 1). In general, all items attained high consensus, with IQR scores of either 0 or 1. Note. * = number is based on the results of Delphi Round 1.
Table 7 shows the median and IQR for the 7 skill items. The median for all these types of curriculum contents is either 4.5 or 5, except item 14 (linked to student's interests and skills), whose median is 4. Item 8 (critical and creative thinking in planning the future skills), item 9 (info-search skills), item 10 (future jobs demand skills), item 11 (problem solving skills), item 12 (learning management skills), and item 13 (effective communicational skills) can be applied at secondary schools in Malaysia in the next 1 to 5 years, while item 14 (linked to student's interests and skills) can only be applied 6 to 10 years ahead. Item 9 (info-search skills) and item 10 (future jobs demand skills) attained the highest consensus, each with a median of 5 and an IQR score of 0, indicating an absence of differences of view among the experts. The overall analysis demonstrates that all items attained high consensus, with IQR scores of either 0 or 1. Table 8 shows the 2 items of the types of future humanity curriculum contents. Item 15 (less emphasis on religious education and moral) did not attain consensus among the experts, its IQR being 3 (high). Apparently, the experts rejected the idea that the future humanity curriculum contents at secondary schools in Malaysia should place less emphasis on religious education and moral. However, item 16 (the field of arts and humanities will be given less attention in future) attained consensus among the experts, with a median of 1, meaning this type of curriculum content will only be applied after 20 years from today. Item 17 (a more collaborative and interactive learning student): median 5, years of projection 1-5, IQR 0. Note. * = number is based on the results of Delphi Round 1.
Table 9 shows the median and IQR for item 17. The median for item 17 is 5 (the highest median score). All experts attained consensus on this item, with an IQR score of 0, meaning there are no differences of view among the experts. This finding indicates that collaborative and interactive learning is expected to be practiced by students within the next 1 to 5 years.
Table 10. The principals' projection on the occurrence years of the types of future language curriculum contents at secondary schools in Malaysia*

Item no. | Type of future language curriculum content | Median | Years of projection | IQR
18 | English language is customized in all subjects | 2.5 | 11-15 | 1

Note. * = number is based on the results of Delphi Round 1.
Table 10 shows that the median for item 18 (English language is customized in all subjects) is 2.5, and the experts attained consensus, as its IQR is 1. This curriculum can only be applied 11 to 15 years from today.
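A minimal sketch of the median/IQR consensus analysis used in these tables is shown below (Python). The expert ratings are invented for demonstration, and the mapping from median score to projection window (5 means 1-5 years, 4 means 6-10, 3 means 11-15, 2 means 16-20, 1 means after 20 years) is inferred from the tables above rather than stated explicitly in the study.

```python
import numpy as np

# Inferred mapping from median rating to projected occurrence window
PROJECTION = {5: "1-5 years", 4: "6-10 years", 3: "11-15 years",
              2: "16-20 years", 1: "after 20 years"}

def delphi_summary(ratings):
    """Return median, IQR, consensus flag and projected window for one item."""
    r = np.asarray(ratings, dtype=float)
    median = np.median(r)
    q1, q3 = np.percentile(r, [25, 75])
    iqr = q3 - q1
    consensus = iqr <= 1.0          # the study treats an IQR of 0 or 1 as consensus
    window = PROJECTION.get(int(round(median)), "n/a")
    return median, iqr, consensus, window

# Example: hypothetical ratings for one item from nine expert principals
print(delphi_summary([5, 5, 4, 5, 5, 4, 5, 5, 5]))   # (5.0, 0.0, True, '1-5 years')
```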
[Figure: Four Quadrants Model of Wilber. Quadrants: Intentional (Interior, the Subjective World) and Behavioral (Exterior, the Objective World) for the individual; Cultural and Social for the collective. The learning cycle runs through four strands. Injunctive Strand (Step 1): the disciplinary matrix, "Do this," "Perform this behavior"; for example, follow the instructions to learn how to perform an experiment, meditate, write, cook, or play music. Intuitive Strand (Step 2): subjective experience of the injunctions; perform the behavior, have the experience, encounter and confront reality, gain implicit knowledge of the data. Step 3: interpret the data from a personal, scientific, or cultural perspective and review the findings. Validative Strand (Step 4): communal verifications and peer-group confirmations; evaluate criticisms, gather feedback, discuss with teachers and peers, present findings, publish papers, integrate feedback and restart the cycle.]
4.1.1 Analysis on the projections of the types of future curriculums at secondary schools in Malaysia

Sciences and technology: 1. Additional interdisciplinary in the subjects of sciences, mathematics and technology. 2. Education technology-based curriculum. 3. Agriculture and biotech curriculum. 4. Alternative energy curriculum.

Skill: 5. The concept of future communication system. 6. Problem solving-based curriculum. 7. Future planning competent-based curriculum. 8. Student's online interests and competent-based curriculum.

Format: 9. Non-centralized curriculum or non-federal curriculum or school-based curriculum. 10. Home-schooling curriculum.
Figure 1. Four Quadrants Model of Wilber

Table 2. Summarization of Delphi Round 2 data analysis: the principals' projections on the types of future curriculum contents that will be applied at secondary schools in Malaysia

Table 3. The principals' projections on the occurrence years of future science and technology curriculums at secondary schools in Malaysia

Table 4. The principals' projections on the occurrence years of future skill curriculum at secondary schools in Malaysia

Table 5. The principals' projections on the occurrence years of the types of future format curriculums at secondary schools in Malaysia

Table 6. Principals' projections on the occurrence years of future science and technology curriculum contents at secondary schools in Malaysia

Table 7. The principals' projections on the types of future skill curriculum contents at secondary schools in Malaysia

Table 8. The principals' projections on the occurrence years of the types of future humanity curriculum contents at secondary schools in Malaysia. Note. * = number is based on the results of Delphi Round 1.
Table 9. The principals' projection on the occurrence years of the types of future value curriculum contents at secondary schools in Malaysia | 2017-09-07T20:22:36.135Z | 2008-11-01T00:00:00.000 | {
"year": 2008,
"sha1": "8112e512a9a4bd3b227fb90bac9eef261d61c2f4",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/ies/article/download/622/598",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8112e512a9a4bd3b227fb90bac9eef261d61c2f4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
548619 | pes2o/s2orc | v3-fos-license | The Detection and Characterization of Extrasolar Planets
We have now confirmed the existence of > 1800 planets orbiting stars other than the Sun, known as extrasolar planets or exoplanets. The different methods for detecting such planets are sensitive to different regions of parameter space, and so, we are discovering a wide diversity of exoplanets and exoplanetary systems. Characterizing such planets is difficult, but we are starting to be able to determine something of their internal composition and are beginning to be able to probe their atmospheres, the first step towards the detection of bio-signatures and, hence, determining if a planet could be habitable or not. Here, I will review how we detect exoplanets, how we characterize exoplanetary systems and the exoplanets themselves, where we stand with respect to potentially habitable planets and how we are progressing towards being able to actually determine if a planet could host life or not.
Introduction
Just over 20 years ago, we were completely unaware of the existence of any planets outside our own solar system. Known as extrasolar planets, or exoplanets, the first were detected in 1992 [1], but rather than orbiting a Sun-like star, these two exoplanets were detected in orbit around a pulsar: the remnant core of a massive star. The first extrasolar planet detected around a Sun-like star was discovered in 1995 [2]. What was remarkable about this planet is that it was estimated to have a mass similar to that of Jupiter, but was orbiting its parent star every 4.2 days, meaning that it is closer to its star than Mercury is to the Sun.
Since 1995, we have confirmed the existence of more than 1800 exoplanets, in orbit around just over 1100 stars. The properties of many of these extrasolar planetary systems are quite different to what we might have expected based on our own solar system. Our Solar System has eight planets (four inner rocky/terrestrial planets and four outer gas/ice giants) in orbits that lie in approximately the same plane and that are close to being circular. As already mentioned, the first exoplanet discovered around a Sun-like star was Jupiter-like, but orbiting extremely close to its parent star. Many of these close-in exoplanets, known as 'hot' Jupiters, have since been discovered. Additionally, many exoplanets have highly eccentric (non-circular) orbits [3], unlike the planets in our own solar system.
In the last few years, we have also started observing massive, gas giant (Jupiter-like) planets at large distances from their parent stars [4]; more distant than the outermost planets in our own solar system. Essentially, we now have a large sample of exoplanets with a wide range of properties and characteristics, some of which are unexpected and surprising. We are also starting to be able to directly image some exoplanets [5] and are beginning to be able to determine the spectra of exoplanet atmospheres, the first step towards determining the presence of biosignatures [6] and whether or not an exoplanet could be habitable.
In this paper, I will review the different exoplanet detection methods, describe what we currently know about exoplanet properties and characteristics and discuss what we might learn in the coming years.
Exoplanet Detection
Given that planets are typically close to a star that is much more luminous than the planet, direct detection of exoplanets is extremely difficult. This means that most confirmed exoplanets have been detected indirectly. There are a number of different indirect methods. One, known as the 'Doppler wobble', or radial velocity, method, measures the change in the radial velocity of the host star as it orbits the common centre of mass. Another method, known as the transit method, looks for dips in the brightness of the host star as a planet passes across the face of the star. Possibly the most exotic method is to use gravitational lensing, a consequence of Einstein's theory of General Relativity. In the following sections, we describe the different detection methods and what they can tell us about the characteristics of the exoplanets that they can detect.
The Radial Velocity Method
In planetary systems, the planets and the host star all orbit the system's common centre of mass. Since the star is typically, by far, the most massive object in the system, the centre of mass is normally close to, or even inside, the host star. Directly observing this motion is very difficult. However, using high-precision spectrometers, such as HARPS [7] and HARPS-N [8], one can use the Doppler effect to determine the radial (line-of-sight) velocity of these stars. As the star orbits the common centre of mass, its spectral lines will shift slightly as it moves towards and away from the observer. This Doppler shift in the spectral lines can then be used to determine the radial velocity of the star, and, if this shows periodicity, can be used to then infer that something must be in orbit about this star. It could be a stellar companion; however, the magnitude of this variation gives an indication of the mass of the companion. If this turns out to be less than 13 Jupiter masses, it would be regarded as a planet.
Figure 1 shows the radial velocity variation for 51 Pegasi, the very first Sun-like star known to host a planetary-mass companion [2]. The radial velocity variation allows us to infer a planetary companion with a mass of just over 0.4 Jupiter masses. One caveat, however, is that this method only determines the line-of-sight velocity of the host star, which means that the actual inclination of the orbital plane of the planet, relative to the Earth, is not known. Consequently, the actual orbital velocity of the star could be greater than the measured radial velocity, and hence, the actual mass of the planet could be greater than that determined from the host star's radial velocity. The dependence of the mass on the inclination is, however, quite weak, and the orbit would need to be quite strongly inclined before the estimated mass differed substantially from the actual mass. Within a few years of the detection of 51 Pegasi b, the sample of radial-velocity-detected exoplanets had also grown substantially. Given that we would expect the orbits to be randomly orientated with respect to our line of sight, very few would then have masses substantially different from that inferred from the radial velocity measurements. In addition to the mass of the planet, the radial velocity of the star can be used to determine the period of the orbit and, hence, the distance of the planet from the star. The radial velocity curve can also be used to determine the eccentricity of the planet's orbit. In the case of 51 Pegasi b, shown in Figure 1, the radial velocity curve is almost perfectly sinusoidal, indicating that the planet's orbit has a very low eccentricity: the orbit is circular. If, however, the radial velocity curve is asymmetric, this tells us that the radial velocity varies throughout the orbit and that the orbit is, consequently, eccentric. We can, therefore, use the shape of the radial velocity curve to determine this eccentricity. Radial velocity measurements can also be used to infer the presence of multiple planets, and indeed, many such systems have been detected [9].
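To make the mass estimate concrete, the sketch below (Python) inverts the standard radial velocity relation for the minimum planet mass, Mp sin i ≈ K (P M*² / 2πG)^(1/3) √(1 − e²), valid when Mp ≪ M*. The numerical inputs for 51 Pegasi b (K, P and the stellar mass) are approximate literature values assumed for illustration, not numbers quoted in this text.

```python
import math

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30       # solar mass [kg]
M_JUP = 1.898e27       # Jupiter mass [kg]

def min_planet_mass(K, P_days, M_star, e=0.0):
    """Minimum mass (M_p sin i) [kg] from RV semi-amplitude K [m/s],
    period P [days] and stellar mass [kg]; assumes M_p << M_star."""
    P = P_days * 86400.0
    return K * (P * M_star**2 / (2.0 * math.pi * G))**(1.0 / 3.0) * math.sqrt(1.0 - e**2)

# Approximate 51 Pegasi b parameters: K ~ 56 m/s, P ~ 4.23 d, M* ~ 1.05 M_sun
m = min_planet_mass(56.0, 4.23, 1.05 * M_SUN)
print(f"M_p sin i ~ {m / M_JUP:.2f} Jupiter masses")   # ~0.46, matching 'just over 0.4'
```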
The Transit Method
The transit method is probably the most obvious of the indirect exoplanet detection methods. It involves simply observing stars and waiting for small dips in their brightness that repeat periodically. One might expect this to be a fairly straightforward method, but it turns out that there are many complications. In particular, false positives are very common and quite difficult to identify [10]. Such false positives can include grazing eclipses from a stellar companion, transits of sub-stellar objects with radii similar to that of a giant planet and distant binary star systems whose angular separation is small enough that it blends with the target. In most cases, one needs to do follow-up radial velocity observations to establish if there is a companion or not and, if it is a companion, to determine the mass and, hence, whether it is a planet or not.
As discussed in the previous section, the first exoplanets found around Sun-like stars were detected using the radial velocity method and were, typically, in orbits very close to their parent stars. Given that we would expect the inclination of the orbits to be randomly oriented, relative to our line-of-sight, once there were 10 such planets, it became quite likely that one would transit its host star. Two groups [11,12] carried out a survey of the known exoplanetary systems and, indeed, observed exoplanet HD209458b transiting its host star. The transit light curve is shown in Figure 2 [11] and illustrates that a Jupiter-like planet will typically reduce the brightness of the host star by about 1%.
This led to increased interest in the method, and until quite recently, the most successful transit project was the Wide-Angle Search for Planets (WASP) [13], which, to date, has found 102 confirmed exoplanets. This was a ground-based project that used high-quality camera lenses to do a wide-angle survey of a large number of stars. Consequently, most of the exoplanets found were quite massive (Jupiter-like) and close to their parent stars ('hot' Jupiters). Recently, however, NASA's Kepler satellite [14] has become, by far, the most successful exoplanet detection project. Being a space mission, not only can Kepler detect smaller planets, the data is also so exquisite that many false positives can be eliminated without radial velocity follow-ups or spectroscopic analysis [15].
The Kepler mission has also detected numerous multiple-planet systems, including one with five planets [16]. Since Kepler is a transit mission, this is interesting because it tells us that, like the Solar System planets, the orbits of these exoplanets must be co-planar. In addition, it is possible to use variations in the timings of the transit events in multi-planet systems [17] to estimate the masses of the planets in the system. It is therefore possible to confirm the planetary nature of the objects in these multiple systems without using follow-up observations. Consequently, a recent analysis of multi-planet Kepler systems confirmed the existence of 851 new exoplanets, in 340 different planetary systems [18]; almost doubling the number of known exoplanets. However, as will be mentioned later, we now have evidence that not all exoplanetary systems are aligned as might be expected, giving us some indication that planet formation and evolution is a complex and dynamic process.
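Since a transit depth of about 1% directly constrains the planet-to-star radius ratio, a minimal sketch of the inversion is shown below (Python). The 1% depth and the solar-sized host star are illustrative assumptions, not measured values from the text.

```python
import math

R_SUN = 6.957e8   # solar radius [m]
R_JUP = 7.149e7   # Jupiter radius [m]

def planet_radius(depth, R_star):
    """Planet radius from transit depth, using depth ~ (R_p / R_star)^2
    (ignores limb darkening and grazing geometries)."""
    return R_star * math.sqrt(depth)

# A ~1% dip in a Sun-like star implies a roughly Jupiter-sized planet
Rp = planet_radius(0.01, R_SUN)
print(f"R_p ~ {Rp / R_JUP:.2f} Jupiter radii")   # ~0.97
```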
Gravitational Microlensing
Possibly the most exotic of the planet detection methods is to utilise Einstein's Theory of General Relativity. That space curves in the presence of mass means that light can be deflected (lensed) by a massive body. When the lens is extremely massive, such as a galaxy cluster, its mass can act to produce multiple, distorted images of even more distant galaxies [19]. What is of interest here, however, is when a star in our own galaxy acts to lens the light from a more distant star that, from our perspective, passes behind the lens star. What we observe, in this case, is an apparent increase in brightness that can last for tens of days.
If the lens star has a companion planet and it happens to be in the right position, it can act as an additional lens and can produce an additional change in brightness, that is of a much shorter duration than the overall event. This method is, however, degenerate in that it is sensitive to the planet-to-star mass ratio and to the angular separation between the planet and its host star. If we do not know the mass of the star, or its distance, then typically, it is assumed that the host star is an M-dwarf (a common type of star with a mass 10%-60% that of the Sun) and that it is about halfway between the Sun and the centre of our galaxy. Therefore, this method cannot always provide a definitive mass or semimajor axis estimate for the planet, but it can place very useful constraints on the planet's properties.
An example of a microlensing event is shown in Figure 3 [20]. The overall event lasts 50 days, while the planetary deviation is indicative of an approximately 5.5 Earth mass planet orbiting at about 2.6 AU (1 AU = average distance from the Sun to the Earth) from the lens star (which probably has a mass of around 0.2-0.3 solar masses).
Figure 3. The light curve from a lensing event. The magnitude initially increases, peaks after about 20 days and then starts decreasing; the whole event lasting ∼ 50 days. In the absence of the influence of a companion, it would be symmetrical. In this case, however, a ∼ 5.5 Earth mass companion to the lens star, orbiting at ∼ 2.6 AU, causes an additional amplification (shown in the insert) that can be analysed to determine the mass and orbital distance of the companion. (Figure from [20]).
Given the distribution of the stars in our galaxy, such microlensing events are more likely to be seen if we are looking towards the densely-populated galactic bulge (center). The background star will typically be in the bulge of the galaxy, and the lens star will be about halfway between the Solar System and the galactic centre. Given this geometry, this method is typically sensitive to planets with orbital radii between 1 and 10 AU, but is able to detect planets with masses as low as that of the Earth. It is also largely insensitive to the properties of the host star and, so, can probe a wide range of both planet masses and host star masses [21].
Gravitational microlensing, therefore, has the advantage of probing a region of parameter space that is largely inaccessible to other methods. However, a microlensing event will almost certainly never repeat, and the host star is typically so distant that any kind of follow-up observations are essentially impossible. To date, the microlensing method has detected 29 planets in 27 different planetary systems, with masses that range from almost eight Jupiter masses down to only a few Earth masses. Future ground-based and space-based surveys could, however, significantly increase the sample of microlensing-detected exoplanets [22,23].
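Because the light curve constrains the planet-to-star mass ratio q rather than the planet mass itself, the inferred planet mass depends on an assumed lens mass. The sketch below (Python) shows how that assumption propagates; the mass ratio and lens-mass range are hypothetical numbers chosen to roughly match the ∼5.5 Earth-mass event discussed above, not values quoted in the text.

```python
M_SUN_IN_EARTH = 332946.0   # solar mass expressed in Earth masses

def planet_mass_from_ratio(q, M_lens_solar):
    """Planet mass [Earth masses] from microlensing mass ratio q,
    given an assumed lens (host star) mass in solar masses."""
    return q * M_lens_solar * M_SUN_IN_EARTH

q = 7e-5  # hypothetical planet-to-star mass ratio from light-curve fitting
for M_lens in (0.2, 0.25, 0.3):   # plausible M-dwarf lens masses
    print(f"M_lens = {M_lens} M_sun -> M_p ~ {planet_mass_from_ratio(q, M_lens):.1f} M_earth")
# The same q gives ~4.7-7.0 Earth masses across this lens-mass range,
# illustrating why the host-star assumption matters.
```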
Direct Imaging
Directly imaging extrasolar planets is very difficult, especially as the planet is typically close to a host star that is much brighter than the planet (by a factor of almost a billion in the case of low-mass, Earth-like planets). We are, however, now starting to be able to directly detect exoplanets. The first was a three-planet system (now four [24]) around the star HR8799 [4]. Because it is very challenging to directly image exoplanets, all have been massive (>1 Jupiter mass), tend to be quite young (so are still bright) and are typically at large distances from their host star. To date, we have detected about 20 such objects with one, β Pic b, shown in Figure 4 [25], having a mass of ∼9 Jupiter masses, and orbiting at a distance of only ∼ 10 AU from its host star. One reason why direct imaging is exciting is that some of the properties of these objects can be determined from direct observations, rather than by being inferred indirectly. For example, β Pic b, shown in Figure 4, is known to be spinning faster than any planet in our own solar system [26]. Given that these are all giant planets, they are unlikely to have conditions suitable for life. However, being able to directly observe and characterize such planets is a step towards being able to do so for less-massive, closer-in planets that may have conditions that make life possible.
Additionally, these planets are interesting, as they are both more massive than any of the Solar System planets and, in most cases, further from their parent stars than the Solar System planets are from the Sun. This presents a problem for standard planet formation scenarios and suggests that there may be more than one planet formation mechanism [27]. Surveys for such planets [28] will add to the sample and will help to constrain theories addressing how such planets might form [29].
Basic Properties of Exoplanet Systems
The different exoplanet detection methods are able to probe different regions of parameter space and determine different properties of the exoplanets and their orbits. Figure 5 shows exoplanet mass plotted against orbital period (in days). Since most of the host stars will have masses similar to the Sun, an orbital period of ∼ 350 days corresponds to a semi-major axis of ∼ 1 AU. The different colours in Figure 5 are for the different detection methods. Transits (green) tend to be found close to their host stars. Microlensing (blue) tends to find exoplanets that are at moderate distances from their host star, but can detect exoplanets with a wide range of masses. Directly-imaged exoplanets (pink) are at large distances and are also massive. Exoplanets detected via the radial velocity method (red) can be found at a wide range of distances, but this method can only detect lower-mass planets close to their parent star, hence the apparent diagonal boundary that illustrates the sensitivity limit for this method. The figure also shows exoplanets found via two methods that have not been discussed in detail here. One is looking for timing variations in rapidly spinning stellar remnants, such as white dwarfs and pulsars (orange). Another is looking for modulation of the brightness of a star due to distortions in its shape from an exoplanet on a close-in orbit (two purple dots with periods just over 0.2 days).
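The period-to-separation conversion quoted above follows from Kepler's third law, a = (G M* P² / 4π²)^(1/3). A minimal check for a Sun-like host (Python, using the ∼350-day period from the text):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
AU = 1.496e11        # astronomical unit [m]

def semi_major_axis(P_days, M_star=M_SUN):
    """Semi-major axis [m] from Kepler's third law (planet mass neglected)."""
    P = P_days * 86400.0
    return (G * M_star * P**2 / (4.0 * math.pi**2))**(1.0 / 3.0)

print(f"a ~ {semi_major_axis(350.0) / AU:.2f} AU")   # ~0.97 AU for a solar-mass star
```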
What Figure 5 shows is that we find exoplanets in virtually all parts of parameter space where we have sufficient sensitivity. We have planets extremely close to their parent stars; much closer than planets in our own solar system. Some of these close-in planets have masses only slightly greater than that of the Earth (super-Earths); some have masses similar to that of Neptune (mini-Neptunes), and there are also massive gas-giants, called 'hot' Jupiters. This is unlike our own solar system, where massive planets are only found in the outer parts. Similarly, Figure 6 shows the orbital eccentricities, e, plotted against orbital period. Most of the planets in our own solar system have orbits that are almost circular (e ∼ 0). Exoplanets, however, show a wide range of eccentricities, with some having e > 0.9. The empty region in the top-left of Figure 6 is either because such planets would be so eccentric that they would collide with their parent star or they come sufficiently close that they undergo a tidal interaction with their parent star and their orbit shrinks and becomes circular [30]. Therefore, again, we detect exoplanets in all regions of parameter space where it is possible for them to exist.
Planet Formation and Evolution
A great deal of our initial understanding of planet formation came from the properties of our own solar system. The planets in the Solar System all lie in the same plane, indicating formation in a disc-like structure around the young Sun. This is consistent with our understanding of star formation, in which conservation of angular momentum means that the initially slowly rotating cloud of gas and dust collapses into a low-mass protostar surrounded by a circumstellar disc [31], through which the mass is later transported via viscous processes [32] and in which the planets form.
The temperature in the disc also decreases with increasing radius, and there is a distance at which it drops sufficiently for ice to form on the dust grains. This is known as the snowline and occurs at ∼ 2.7 AU around a Solar-mass protostar [33]. Inside the snowline is where we would expect rocky, terrestrial planets to form. Just outside the snowline, there is actually more solid material than just inside the snowline, and this is where we would expect gas/ice giants to form. This is because observations suggest that circumstellar gas discs have lifetimes of only ∼ 5 Myr [34]. Since gas giant planets must form before the gas disappears, beyond the snowline is where there is sufficient solid material (ice and dust) to allow the cores of the giant planets, which need to be reasonably massive, to form before the gas dissipates [35,36].
Disc-Planet Interactions
Given our knowledge of our own solar system, it was somewhat of a surprise to discover that massive exoplanets could orbit very close to their parent stars, and that many exoplanets have much higher eccentricities than planets in our own system. In a simple sense, the reason is largely because planet formation is complex and dynamic. Planets are able to interact and exchange angular momentum with the surrounding disc, causing the planet to move from its formation radius. In the case of massive planets, they can open a gap in the disc. This is illustrated in Figure 7, which shows a snapshot from a simulation of a gas disc with an embedded Jupiter-mass planet. In such a scenario, the planet moves inwards with the inflowing gas [37], which means that gas giant planets that form beyond the snowline can end up much closer to their parent star than where they formed, in some cases forming a 'hot' Jupiter.
Lower-mass planets do not open gaps, but can still migrate [38]. In fact, initial calculations suggested that for low-mass planets, this should occur so fast that the cores of giant planets should migrate into the parent star before the planet can become massive enough to open a gap and slow its migration. This issue is still not completely resolved, but considering the three-dimensional nature of these discs [39] and including more detailed thermodynamics [40] result in slower migration rates. Similarly, turbulence in the disc [41] and considering the torque from material that orbits at a similar radius to the planet [42] can introduce a random element to the migration and can lead to outward, as well as inward, migration. It has also been suggested that there is still a signature of the snowline in the known exoplanet population [43]. This would be difficult to explain if the migration were very rapid, or random, so as to remove any signature of the initial distribution.
Intermediate-mass planets (approximately Saturn mass) can open partial gaps and may undergo a phase of runaway migration [44]. In this scenario, a feedback mechanism operates so that the planet migrates ever faster as it moves through the disc. Essentially, though, it is expected that planets will move from where they form, and such a process is thought to provide a mechanism for the migration of gas giant planets, which typically form beyond the snowline, into orbits very close to their parent star.
Dynamical Interactions
Figure 6 also shows that exoplanets have a wide range of eccentricities; much greater than we see in our own solar system. Although it is possible for disc-planet interactions to drive eccentricity growth [38], it is more likely that such interactions will dampen the planet's eccentricity and force eccentricities to remain small [45]. It is therefore thought that the exoplanet eccentricity distribution is not driven by disc-planet interactions, but is more likely a consequence of dynamical interactions in multi-body systems [46]. This process can also lead to the ejection of some bodies from the system, producing free-floating, planetary-mass bodies [47,48]. Such interactions also, typically, increase the eccentricities of the planets that remain in the system. In some cases, this can lead to some of the remaining bodies having orbits with periastra very close to their parent star. If this does happen, the planet can then be tidally circularised into a very close orbit, again producing a 'hot' Jupiter.
Therefore, along with disc-planet interactions, dynamical interactions can also sculpt the distribution of exoplanets. Evidence for this as an alternative mechanism was enhanced with the discovery of planets that orbit in a plane that is inclined with respect to the spin of the host star [49,50]. Such misalignments are difficult to explain through disc-planet interactions alone and are thought to be a consequence of Kozai-Lidov cycles [51,52]. Kozai-Lidov cycles occur when a companion (probably stellar, rather than planetary) on an outer, inclined orbit perturbs the inner planet, so that both its eccentricity and inclination oscillate. If the planet passes sufficiently close to its parent star, its orbit can be circularised through tidal interactions with its parent star, and it may then end up in a close-in, inclined orbit [53].
Outer Planets
The directly imaged planets, which tend to be massive and at large distances from their host star, present something of a puzzle for theories of planet formation. One possibility is that, rather than forming via core growth followed by gas accretion, such planets form directly via a gravitational instability [29] that may be present when the disc is young and massive [54]. It has been suggested that such a process could explain some of the closer exoplanets [55], but more recent work has shown that the inner parts of the disc are unlikely to be susceptible to the growth of such an instability [56].
The outer parts of very young, circumstellar discs do, however, have conditions that make this instability viable [57,58], and so, it is a possible formation scenario for these directly imaged planets [59]. There are even observations of a very young system that appears to show evidence for a bound object forming in the outer parts of the circumstellar disc. Figure 8 shows a radio image (left-hand panel) of the HL Tau system with excess emission coming from what might be a protoplanet at ∼ 65 AU (upper right quadrant of the left-hand images). The right-hand panel is a numerical simulation of how a disc in such a system may evolve and indicates that it is susceptible to the growth of planetary-mass bodies through direct gravitational collapse [60]. A recent suggestion is that some of these objects that form in the outer parts of circumstellar discs could migrate inwards, lose mass through tidal stripping and form objects with properties similar to those of closer-in exoplanets [61]. It does, however, appear that such a process is likely to be rare [62], and so, it is not clear that any of the known closer-in exoplanets could have formed in this way.
Composition
In the previous sections, we discussed the properties of the exoplanetary systems, rather than specific characteristics of the exoplanets themselves. It is, however, also possible to determine something about the characteristics of the actual exoplanets. Transit measurements give the radius of an exoplanet. Combining this with radial velocity measurements, which give an estimate of the planet's mass, allows one to determine the planet's mean density. Figure 9, taken from [63], shows a plot of radius against mass for exoplanets that are well characterized. The left-hand panel shows all such exoplanets, while the right-hand panel shows those that are similar in mass and radius to the Earth (i.e., R_pl < 2.5 R_Earth and M_pl < 10 M_Earth).
There are a couple of interesting effects illustrated in Figure 9. The left-hand panel shows mainly Jupiter-like exoplanets, and yet, quite a large number of these have radii quite a bit bigger than is expected based on standard models (curve labelled hydrogen in the left-hand panel of Figure 9) [64]. In some cases, the radius is almost twice as large as models would indicate, giving these planets densities as low as ∼ 0.1 g cm⁻³ [65]. Given that the exoplanets in Figure 9 have both transit and radial velocity measurements, they are all quite close to their parent stars. One possible explanation for their radii being inflated is that they are strongly irradiated [66]. An alternative [67] is that these planets are heated through tidal interactions with their parent stars.
The right-hand panel in Figure 9 shows planets with radii and masses similar to that of the Earth. One thing this illustrates is the mass-radius degeneracy; a planet of a given radius can have a range of possible masses that depends on its composition. This is one reason why follow-up observations are typically required to determine the nature of transiting objects. What is also illustrated is that we have found exoplanets with a wide range of different compositions. Some have a higher fraction of rock than the Earth and, hence, have a lower density. Some have a higher fraction of iron and, hence, have Mercury-like densities. There are also some that appear to be predominantly water. We have recently detected a 17 Earth-mass exoplanet with a rock-like composition [68], which, because of the high mass, has a density significantly higher than that of the Earth.
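A short sketch of the mean-density calculation underlying such plots is given below (Python). The radius of ∼2.35 Earth radii used for the 17 Earth-mass example is an assumed, plausible value for a massive rocky planet, not one quoted in the text.

```python
import math

M_EARTH = 5.972e24   # Earth mass [kg]
R_EARTH = 6.371e6    # Earth radius [m]

def bulk_density(mass_me, radius_re):
    """Mean density [g cm^-3] from mass [Earth masses] and radius [Earth radii]."""
    m = mass_me * M_EARTH                       # kg
    r = radius_re * R_EARTH                     # m
    rho = m / (4.0 / 3.0 * math.pi * r**3)      # kg m^-3
    return rho / 1000.0                         # convert to g cm^-3

print(f"Earth:             {bulk_density(1.0, 1.0):.1f} g/cm^3")    # ~5.5
print(f"17 M_E at 2.35 R_E: {bulk_density(17.0, 2.35):.1f} g/cm^3")  # ~7.2, denser than Earth
```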
In fact, it is actually more complicated than Figure 9 indicates. A planet with a reasonably substantial atmosphere (1%-10% of its total mass) can have the same mass and radius as a planet that has substantial water content [69]. Therefore, there are cases where even accurate mass and radius measurements cannot break the degeneracy.
We are, also, starting to identify some exoplanets that are quite similar to the Earth. In Figure 9, the black filled circle with the dotted red ellipse illustrates the parameters of Kepler-78b [63,70], possibly the most Earth-like, in terms of size and composition, exoplanet found to date. All of the exoplanets in the right-hand panel of Figure 9 are, however, close to their parent stars and are therefore too hot to have conditions suitable for life.
Atmospheres
In addition to wanting to understand more about the composition of exoplanets, we would also like to be able to characterize their atmospheres in more detail. As already discussed, most exoplanets are detected indirectly. We have, however, recently started directly observing giant exoplanets at large distances from their host star. In such circumstances, we are receiving photons directly from these exoplanets, so it should be possible to spectroscopically analyse their atmospheres. However, even this is very difficult, and one of the most studied objects is actually the nearest brown dwarf [71], an object more massive than 13 Jupiter masses, but not massive enough to ignite hydrogen burning in its core (< 80 Jupiter masses) and become a star.
Figure 10 shows the surface maps of this brown dwarf, Luhman 16B [71]. These are produced using a technique called Doppler imaging and show variations in brightness that move across the image as the brown dwarf rotates. These are indicative of large-scale cloud inhomogeneities and indicate that such objects have a form of weather. As already mentioned, Luhman 16B is not an exoplanet, but a slightly more massive object, called a brown dwarf. We do, however, have some information about some of the directly imaged exoplanets. One of the planets in the HR8799 system (HR8799b) is redder than expected, suggesting the presence of dust clouds in its atmosphere [72]. We also have a measurement of the carbon-to-oxygen ratio in the atmosphere of another of the planets in the same system (HR8799c), which may give a hint as to how such planets might form [73]. New instruments, such as SPHERE [74] and the Gemini Planet Finder (GPI) [75], will, however, potentially allow us to substantially improve our understanding of the character of giant exoplanets at large orbital distances; another step towards being able to characterize closer-in and lower-mass exoplanets and, eventually, exoplanets that might have conditions suitable for life.
Transit and Secondary Eclipse Spectra
Even though most exoplanets have been detected indirectly, we are still, in some cases, able to infer some things about their atmospheres. One method is to use transit spectroscopy. During a transit, some of the light from the host star will pass through the atmosphere of the planet. Transit spectroscopy involves observing the stellar spectrum both during a transit and outside of a transit. Subtracting these two spectra should then give the spectrum of the planet's atmosphere [6]. HD209458b, for example, has been studied in great detail. Early observations suggested the presence of sodium in its atmosphere [11], later confirmed using high-resolution spectroscopy [76], and the non-detection of CO [77] is indicative of the presence of clouds.
A way to do transit spectroscopy is to simply determine the depth of the transit at different wavelengths. This would indicate that the planet has a wavelength-dependent effective radius and, hence, indicates something about the composition of the atmosphere. Figure 11 illustrates such a spectrum for the hot Jupiter, WASP-12b [78]. It shows how the transit depth varies with wavelength and shows some theoretical spectra; a carbon-rich atmosphere with scattering, an oxygen-rich atmosphere with scattering and one dominated by Rayleigh scattering. This probably illustrates how difficult such work is, as all of the spectra produce reasonable fits, but some form of scattering does seem to be required, and the analysis does rule out a featureless, pure hydrogen atmosphere.
Figure 11. A transit spectrum for WASP-12b, showing how the transit depth varies with wavelength. The figure also shows some models based on different atmospheric compositions. The actual composition is not clear, but there appears to be some scattering, and the models rule out a featureless, pure hydrogen atmosphere. (Figure from [78]).
The closest 'hot' Jupiter to us is HD189733b, and consequently, it is one of the most well-studied. Both sodium [79] and water vapour [80] have been detected. It has also been possible to make albedo measurements of HD189733b [81], which suggest that it has optically thick reflective clouds on the dayside hemisphere and that it would appear a deep blue colour at visible wavelengths. Recently, we have also managed to do transit spectroscopy measurements of the super-Earth, GJ 1214b [82]. The results tend to be somewhat inconclusive, as far as the actual composition of the atmospheres is concerned, but do often indicate the presence of clouds in the atmospheres of such planets [83].
A similar technique is to consider how the observed spectrum changes as the planet moves behind its parent star; known as the secondary eclipse. This can give some indication of the actual spectrum of the planet and can be used to determine its temperature and also something of its composition. Again, this is a very difficult measurement, and it is typically only possible for 'hot' Jupiters [84], such as HD189733b [85].
Phase Variations
In addition to transit spectroscopy, another way to investigate the properties of transiting 'hot' Jupiters is to do infrared observations of the system. 'Hot' Jupiters will be tidally locked, with one side always facing the parent star, and therefore, we would expect the dayside to be much hotter than the nightside. Hence, as the planet orbits the star, we would expect to see the infrared flux increasing as the planet moves away from us, peaking just before it passes behind its parent star. Similarly, it should decrease as it moves out from behind its parent star and starts moving back towards us.
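For context, the dayside temperatures probed by such infrared phase curves are often compared with the simple equilibrium temperature, T_eq = T* sqrt(R*/2a) (1 − A)^(1/4). The sketch below (Python) evaluates this for an assumed, illustrative hot-Jupiter configuration, not for any specific planet discussed in the text.

```python
def equilibrium_temperature(T_star, R_star_over_a, albedo=0.0):
    """Planet equilibrium temperature [K] assuming uniform heat
    redistribution: T_eq = T_star * sqrt(R_star / 2a) * (1 - A)^(1/4)."""
    return T_star * (R_star_over_a / 2.0) ** 0.5 * (1.0 - albedo) ** 0.25

# Hypothetical hot Jupiter: Sun-like star (5800 K), a = 0.05 AU, R* = 1 R_sun
R_star_over_a = 6.957e8 / (0.05 * 1.496e11)   # ~0.093
print(f"T_eq ~ {equilibrium_temperature(5800.0, R_star_over_a):.0f} K")  # ~1250 K
```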
This infrared phase variation allows us to determine something of the temperature structure of the planet [86] and can be combined with secondary eclipse mapping [87] to tell us something of both the latitudinal and longitudinal temperature structure. It is also possible to detect phase variations in the optical, which has been used to infer the presence of inhomogeneous clouds in the atmosphere of Kepler-7b [88].
High Resolution Spectroscopy
Since the planet moves much faster in its orbit than the star, the Doppler shift of the planet's spectral lines will be much greater than that of the stellar lines. In fact, a 'hot' Jupiter will move fast enough (> 100 km s⁻¹) for this spectral line shift to be resolved by high-resolution spectroscopic instruments. This allows the planet's spectral lines to be distinguished from the stationary telluric lines in our atmosphere and the very slowly moving stellar lines [89]. This is a new, but powerful, technique that has even been used to detect water vapour and carbon monoxide in the atmosphere of a non-transiting exoplanet [90] and will probably be the best method for characterizing the atmosphere of small, rocky worlds [89].
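A quick check of why such orbital velocities are resolvable (Python; the 150 km/s velocity, the 2.3 μm CO band and the resolving power R = 100,000 are illustrative assumptions typical of this technique, not values from the text):

```python
C = 2.998e5  # speed of light [km/s]

v_planet = 150.0        # assumed orbital velocity of a hot Jupiter [km/s]
wavelength_nm = 2300.0  # CO band-head region, ~2.3 micron
R = 100_000             # assumed resolving power of a high-resolution spectrograph

doppler_shift = wavelength_nm * v_planet / C    # [nm]
resolution_element = wavelength_nm / R          # [nm]

print(f"line shift ~ {doppler_shift:.2f} nm vs. resolution ~ {resolution_element:.3f} nm")
# The planet's lines sweep across ~50 resolution elements over an orbit,
# letting them be separated from (quasi-)static stellar and telluric lines.
```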
Essentially, we are now in a position where we are starting to be able to characterize the atmospheres of giant planets. Future missions, such as JWST (the James Webb Space Telescope), will allow us to further investigate the atmospheres and characteristics of exoplanets. Being able to do so for lower-mass, terrestrial/rocky planets is, however, still a challenge, but high-resolution spectroscopy with the European Extremely Large Telescope (E-ELT) may make this possible [89].
Habitability
Understanding the properties and characteristics of exoplanets is of course of intrinsic interest, as it allows us to understand their formation and evolution. An ultimate goal is, however, to eventually find exoplanets that are potentially habitable and to determine if any of these exoplanets actually have conditions suitable for life. We do not know all the possible conditions that might be suitable for life, and hence, we typically define a habitable zone around a star as being the region where water can exist in liquid form. There are many other factors that can influence the size of the habitable zone (greenhouse effect, planet's albedo, carbon cycle, orbital properties), but around a Sun-like star, it is thought to be between ∼ 0.75 AU and ∼ 1.4 AU [91].
There are, however, numerous factors that can influence the habitable zone. Planets that are very dry compared to the Earth could actually have a much wider habitable zone than planets that are more Earth-like in terms of their water content [92,93]. Super-Earth planets with thick hydrogen-helium atmospheres could, potentially, be warm enough to host life, even as far as 10 AU from their host star, well outside the traditional habitable zone [94]. In fact, it has even been suggested that free-floating planets [48] with thick hydrogen-helium atmospheres may have conditions suitable for life even in the absence of stellar irradiation [95].
Another factor that will affect habitability is the size/mass of the planet. If the planet is very massive, it will likely have a dense gaseous atmosphere, making life very unlikely. If its mass is too low, it will have insufficient gravity to hold on to any substantial atmosphere, again making life unlikely. We might, therefore, expect that planets with masses and radii similar to that of the Earth would be the most likely to be able to support life. However, precisely what conditions make a planet suitable for life is not really known. A crude way to estimate habitability is to simply consider factors like mass, radius, escape velocity and temperature [96]. This is not a rigorous scientific approach, as appearing similar to the Earth does not mean that a planet will be habitable. Similarly, there may be planets that are habitable that we would regard as not being similar to the Earth.
Based on this approach, however, we now have about 22 exoplanets that are regarded as being similar to the Earth. These are all planets with radii less than 2.5-times that of the Earth, masses less than about 20 Earth masses and surface temperatures that would possibly allow water to exist in liquid form. However, all of these planets are more massive than the Earth (super-Earths) and orbit stars that are less massive than the Sun (called M- or K-dwarfs). The host star properties play an important role in potential habitability. Stars less massive than the Sun are cooler and less luminous. As illustrated in Figure 12, the habitable zone (blue band in Figure 12) varies with host star mass, which means that the habitable zone moves closer to the star as the star mass decreases [97,98]. Since the habitable zone depends on the star mass, it becomes easier to detect planets similar to the Earth around stars less massive than the Sun than it is to detect such planets around Sun-like stars. Since the star is less massive than the Sun and the planet is closer to its host star than the Earth is to the Sun, the reflex motion of the star is greater and, hence, easier to detect. Similarly, the ratio of the planet's radius to the star's radius increases with decreasing star mass, and so, it is easier to find planetary transits. The main reason, therefore, that most of the planets that are regarded as similar to the Earth are around stars less massive than the Sun is simply because they are easier to find. Similarly, in the coming years, it is likely that our search for potentially habitable exoplanets will focus on these lower-mass stars where such planets are easier to find and, according to analysis of Kepler data, are quite common [99].
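A common first-order way to express this scaling is to move the Sun's habitable-zone boundaries with the square root of stellar luminosity, d ≈ sqrt(L/L_sun) × d_sun. A minimal sketch follows (Python); the Sun's 0.75-1.4 AU boundaries are taken from the text, while the example luminosities for the lower-mass stars are assumed, illustrative values.

```python
import math

HZ_SUN = (0.75, 1.4)  # habitable-zone boundaries for the Sun [AU], as quoted above

def habitable_zone(L_over_Lsun):
    """Scale the solar habitable zone by sqrt(L/L_sun) (constant-flux approximation)."""
    s = math.sqrt(L_over_Lsun)
    return HZ_SUN[0] * s, HZ_SUN[1] * s

for name, L in [("Sun-like", 1.0), ("K-dwarf (~0.3 L_sun)", 0.3),
                ("M-dwarf (~0.02 L_sun)", 0.02)]:
    inner, outer = habitable_zone(L)
    print(f"{name}: {inner:.2f}-{outer:.2f} AU")
# The M-dwarf zone sits at ~0.1-0.2 AU, so habitable-zone planets there
# orbit in days to weeks and produce larger, more frequent transit and RV signals.
```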
Although detecting potentially habitable planets around M- and K-dwarfs is easier than around Sun-like stars, there are some complications to habitability on such planets. These stars are cooler than the Sun and, hence, emit most of their energy in the infrared, rather than in the visible. This has implications for processes like photosynthesis, which may need to operate via a process that uses more photons than is the case on the Earth [100]. Planets around these lower-mass stars are also close enough that they would be expected to be tidally locked; one side of the planet will always face the star. This has implications for atmospheric stability [101] and can induce rapid climate shifts that might make habitability unlikely [102].
Exomoons and Binary Systems
Although the obvious targets for potential habitability may well be planets with masses and radii similar to that of the Earth, orbiting stars in a region where liquid water can exist, that does not preclude the possibility that more exotic systems could be habitable. For example, we now have planets that orbit both stars in a binary (two-star) system [103]. It is possible to study the habitability of planets in such systems [104], and it appears that one of the known circumbinary exoplanets (Kepler-47c) is indeed in its star's habitable zone. It is, however, likely too massive to host life. Nevertheless, the existence of massive planets in the habitable zone introduces the possibility of life on a moon, known as an exomoon, in orbit about such a planet [105]. It is actually possible to detect exomoons through their influence on the timing of their parent planet's transit across the face of the host star [106]. However, no such objects have yet been detected [107]. Additionally, the most massive moon in our own Solar System is Jupiter's moon Ganymede, which is significantly less massive than the Earth. Hence, we do not know if moons with sufficient mass to retain an atmosphere suitable for life can, or do, actually exist. On the other hand, some of the Jovian moons, such as Europa, may have a tidally heated and potentially habitable ocean below the surface ice [108]. The European Space Agency's JUICE (Jupiter Icy Moons Explorer) mission may provide information about the potential habitability of Europa's subsurface ocean [109], but doing so for a potentially habitable exomoon may be beyond our capabilities for the foreseeable future [110].
Conclusions
We now have a large (>1800) sample of extrasolar planets, detected via a variety of different methods. Most methods do not directly detect the planet itself, but infer its presence from observations of the host star. We are, however, now starting to be able to directly detect massive planets at large distances from their host stars, and recently installed instruments, such as the Gemini Planet Finder (GPI) [75] and SPHERE [74], will allow us to directly detect massive exoplanets closer to their host stars than we can do today.
The different detection methods typically sample different regions of parameter space, and what we are discovering is that we are finding planets wherever we have sensitivity to do so. This includes massive planets very close to their parent stars ('hot' Jupiters) and planets with much more eccentric orbits than those of our own solar system planets. This points to the highly dynamical and complex nature of planet formation and evolution, and we can now largely explain the properties of exoplanetary systems and why they differ so much from our own solar system. Of course, this does beg the question as to whether or not our solar system is special [111] and whether or not the properties of our solar system are relevant with respect to habitability. We do not know the answer to this, but it is guiding our search for potentially habitable exoplanets.
An important step towards determining if a planet is habitable or not is characterizing its atmosphere and trying to identify biosignatures [6]. We are starting to be able to probe the atmospheres of giant planets, and forthcoming projects, like the James Webb Space Telescope (JWST) and the European Extremely Large Telescope (E-ELT), will advance this knowledge. However, it seems unlikely that anything currently planned will allow us to properly characterize the atmospheres of potentially habitable terrestrial planets. We do, however, now have a small sample of planets that are regarded as being similar to the Earth in terms of mass, radius and temperature, and we will likely add to this sample in the coming years. At the moment, these are primarily orbiting stars less massive than the Sun (as such planets are easier to find than those in the habitable zone around a Sun-like star), but there is a good chance that we will soon detect a true Earth analogue.
Therefore, we continue to progress in terms of improving our understanding of exoplanets, their formation and evolution and their characteristics and properties. Ultimately, one of the main goals is to find and characterize a potentially habitable exoplanet. Although we may be some way away from being able to actually determine if a planet is habitable, we will likely soon have a large sample of potentially habitable planets on which to focus our attention.
Figure 1. Figure showing the radial (line-of-sight) velocity of the star, 51 Pegasi, determined by measuring shifts in the star's spectral lines. The periodic nature of the star's radial velocity indicates the presence of a companion with a mass of about 0.4 Jupiter masses and an orbital period of only 4.2 days. (Figure from [2]).

Figure 2. The light curve of HD209458, which shows a small dip in brightness when its companion planet transits across the face of the star. This measurement can be used to infer the radius of the planet. (Figure from [11]).

Figure 4. Infrared images of β Pictoris taken in November, 2003, and again in late 2009, showing a companion at ∼ 10 AU that has clearly moved substantially between 2003 and 2009. These observations suggest that the object has a mass of about nine Jupiter masses. (Figure from [25]).

Figure 5. Figure showing the masses of the known exoplanets plotted against their orbital periods. The different colours represent the different detection methods. There is more detail about this figure in the text, but essentially, we are detecting exoplanets in all parts of parameter space where we have sensitivity to do so (credit: this research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program).

Figure 6. Figure showing exoplanet eccentricity plotted against orbital period. Unlike Solar System planets, whose orbits are mainly close to being circular, exoplanets have a wide range of eccentricities (credit: this research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program).

Figure 7. Image from a simulation of a planetary-mass body embedded in a circumstellar disc. For planets that are sufficiently massive, the planet can open a gap in the gas disc and drive waves into the surrounding disc. The planet then migrates inwards with the surrounding disc and can be stranded in a close orbit, forming what is called a 'hot' Jupiter.

Figure 8. (Left) A radio image of the HL Tau system showing excess emission at ∼ 65 AU (upper right quadrant of the left-hand images), which could be a protoplanet in formation. (Right) A simulation showing how such an object could indeed form, through direct gravitational collapse, in the outer parts of a disc like that in the HL Tau system. (Figures 1 and 2 from [60]).

Figure 9. Mass-radius relationship for all well-characterized exoplanets. Triangles are for Solar System planets. The left-hand panel shows the full range of exoplanet radii and masses and illustrates that many close-in gas giants have inflated radii. The right-hand panel is for exoplanets with masses and radii similar to that of the Earth. It illustrates a degeneracy in that planets of different masses and compositions can have similar radii. It also illustrates that we have found exoplanets with a wide range of different compositions. (Figure from [63]).

Figure 10. Surface maps of the brown dwarf, Luhman 16B, showing darker and brighter regions that are indicative of large-scale cloud inhomogeneities. (Figure from [71]).

Figure 12. Figure showing how the habitable zone varies with star mass. Stars with masses less than that of the Sun are less luminous and cooler, and hence, the habitable zone moves closer to the star as the star mass decreases. (Figure from [98]). | 2015-09-18T23:22:04.000Z | 2014-09-19T00:00:00.000 | {
"year": 2014,
"sha1": "c13d72c3f6a2c64b5c7c7220e9131513d8776947",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2078-1547/5/2/296/pdf?version=1411393004",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "c13d72c3f6a2c64b5c7c7220e9131513d8776947",
"s2fieldsofstudy": [
"Geology",
"Physics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
232081800 | pes2o/s2orc | v3-fos-license | Free Energy Landscape of RNA Binding Dynamics in Start Codon Recognition by Eukaryotic Ribosomal Pre-Initiation Complex
Specific interaction between the start codon, 5’-AUG-3’, and the anticodon, 5’-CAU-3’, ensures accurate initiation of translation. Recent studies show that several near-cognate start codons (e.g. GUG and CUG) can play a role in initiating translation in eukaryotes. However, the mechanism allowing initiation through mismatched base-pairs at the ribosomal decoding site is still unclear at an atomic level. In this work, we propose an extended simulation-based method to evaluate free energy profiles, through computing the distance between each base-pair of the triplet interactions (d1, d2 and d3) involved in recognition of start codons in the eukaryotic translation pre-initiation complex. Our method provides not only the free energy penalty (ΔΔG) for mismatched start codons relative to the AUG start codon, but also the preferred pathways of transitions between bound and unbound states, which have not been described by previous studies. To verify the method, the binding dynamics of the cognate (AUG) and near-cognate start codons (CUG and GUG) were simulated. The evaluated free energy profiles agree with experimentally observed changes in initiation frequencies from the respective codons. This work proposes for the first time how a G:U mismatch at the first position of the codon (GUG)-anticodon base-pairs destabilizes the accommodation in the initiating eukaryotic ribosome and how initiation at a CUG codon is nearly as strong as, or sometimes stronger than, that at a GUG codon. Our method is expected to be applied to study the affinity changes for various mismatched base-pairs.
Introduction
The translation reaction, or mRNA-dependent protein synthesis, is catalyzed by the ribosome, the macromolecular ribonucleoprotein complex [1,2]. During eukaryotic initiation, the ribosome dissociates into the large (60S) and small (40S) subunits, and the latter binds the methionyl initiator tRNA (Met-tRNAi^Met) and mRNA with the help of eukaryotic initiation factors (eIFs) [3,4]. Met-tRNAi^Met is recruited by eIF2, a heterotrimeric factor that binds the tRNA in a manner dependent on GTP binding. The resulting ternary complex (TC) binds the 40S subunit in the context of multifactor complex (MFC) with eIFs 1, 3 and 5, forming the 43S preinitiation complex (PIC) [5]. The mRNA is bound by eIF4F through its 5' cap, which then is recruited to the 40S subunit through eIF3 in mammals and eIF5 in yeast [6]. The 48S pre-initiation complex (PIC) thus formed scans for the start codon in the process called scanning. The PIC is primed for scanning by the eIF5-catalyzed GTP hydrolysis for eIF2; the products, GDP and Pi, stay bound to eIF2 during scanning. The start codon base-pairing with the Met-tRNAi^Met anticodon in the P-site allows the 40S subunit to stall at the start codon and then join the 60S subunit after most bound eIFs are released in conjunction with Pi release from eIF2 [7,8]. The resulting 80S initiation complex accepts an amino-acyl tRNA in the A-site to begin the translation elongation cycle.
The fidelity of start codon recognition is regulated by eIF1A and eIF1, which bind the 40S subunit A-site and P-site, respectively [9,10]. Essentially, these factors regulate the conformational changes of the PIC. Thus, the N-terminal tail of eIF1A interacts with the codon-anticodon base-pairs to stabilize the closed conformation for initiation [11]. By contrast, eIF1 stabilizes the open, scanning-competent conformation by physically impeding the P-site accommodation of mismatched codon-anticodon base-pairs [5]. Upon start codon selection by the PIC, eIF1 is released to permanently stabilize the closed state [12,13]. The PIC-bound eIFs directly interact with eIF1 to control the balance of its binding and release, thereby keeping the level of initiation accuracy appropriate for eukaryotic cell function [6,14-16]; for a review, see [17].
Despite the aforementioned mechanisms ensuring the accurate initiation of translation, most eukaryotes tested allow translation initiation from near-cognate start codons (start codons with a one-base substitution in the AUG codon), for example CUG and GUG, at a low frequency [18-21]. However, not all near-cognate start codons can initiate translation at an equal rate. Among the most puzzling is the observation that GUG serves as a poor initiation site, even though the G residue in the 1st position can potentially wobble-base-pair with the U residue in the anticodon. In agreement with wobble base-pairing, GUG serves as a normal initiation site in prokaryotes (Bacteria and Archaea), which possess fewer initiation factors than eukaryotes [17]. Moreover, CUG is considered the strongest near-cognate start codon in eukaryotes, even though it is not a start codon in prokaryotes [17]. To solve these conundrums, it is crucial to probe the stability of codon-anticodon interactions in the P-site. As it is difficult to experimentally measure these interactions at atomic resolution, computational analysis offers an effective solution [22,23]. Specifically, molecular interactions are represented by free energy profiles, which in this case depend on the nucleotide constitutions of codon and anticodon, and thereby determine the frequency of the bound state. Thus, computational estimation of these free energy profiles provides insight into the mechanism of translation initiation by various start codons.
Previous molecular dynamics (MD) simulation studies estimated the energy gap between AUG and mismatched codons by computing free energy scores of the codon-anticodon interaction for these codons using free energy perturbation (FEP) [24]. The work identified CUG as the strongest near-cognate start codon. Moreover, it demonstrated eIF1's contribution to discriminating against near-cognate start codons. However, there are two main caveats in the FEP approach. First, except for the conformation of AUG found within the PIC structure, the geometry of each near-cognate start codon was conjectured merely by substituting a base of the AUG codon with the corresponding base. Second, information on the binding process was missing. As single-stranded RNA is flexible [25], its structural changes may be diverse. Although the binding free energy is independent of the transition path and thus could be calculated by the FEP method, it is crucial to estimate the structural changes involved in the binding process in order to understand the selective binding mechanism.
In this study, we employed the adaptive biasing force (ABF) method [26-28] in order to overcome these problems. This method explores the codon-anticodon binding free energy by using systematic reaction coordinates. By using the distances of each triplet base-pair as the reaction coordinates, we generated the free energy profiles of base-pairing interactions between the Met-tRNA_i^Met anticodon and cognate (AUG) or near-cognate (CUG or GUG) start codons. Our results provide structural insights related to a strong penalty for placing the GUG codon in the P-site and a permissiveness of CUG as a potential start codon in eukaryotes.
Simulation Procedure
We referred to a previous study of start codon recognition in eukaryotic translation initiation using all-atom molecular dynamics (MD) simulations [24]. To reconstruct the codon-anticodon interaction in solution, we employed the open pre-initiation complex (PIC) structure (PDB ID: 3J81) determined by cryo-EM [11]. To reduce the computational cost of the MD simulations, we extracted the atoms within 25 Å of the N1 atom in the middle base of the anticodon in the tRNA molecule [24]. Then, nucleotides were edited to reconstruct PIC models involving the target codons of our study (Tab. 1). When editing a nucleotide (e.g. AUG → GUG), first, all atoms were deleted except N1 and N9 in the base group, the sugar group, and the phosphate group. Then, the coordinates of the missing atoms were inferred. All histidine residues were configured as ε-protonated. These molecules were soaked into a 36 Å-radius water sphere (Fig. 1), neutralized by K+, and 150 mM KCl was added. The TIP3P water model was employed. VMD [29] was used to infer missing atom coordinates, solvate the model, and visualize structures throughout the study. All simulations were carried out using NAMD (version 2.13, multi-core) [30]. The CHARMM36 force field (July 2019 update) was used [31,32]. Multilevel summation method (MSM) electrostatics [33] was employed. A cutoff at 12 Å (with switching from 10 Å) was applied to non-bonded interactions. Temperature and pressure were set at 310 K and 1 atm, respectively; a Langevin thermostat (damping coefficient: 5/ps) and a Langevin-piston barostat were adopted. All C1' (nucleotide) and Cα (amino acid) atoms farther than 22 Å from the center of the system (i.e. of the water sphere) were restrained at their initial positions, and water molecules crossing the boundary of the water sphere (radius 36 Å) were restrained. Harmonic potential functions with spring constant 10 pN/Å were adopted for the restraints. After energy minimization (10,000 steps), the system was equilibrated for 10 ns, and then simulated for 1 µs (time-step: 2 fs); the biasing force was applied only after 200 samples had been collected in a bin. Each model (Tab. 1) was simulated five times.
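The truncation and restraint bookkeeping described above amount to simple distance cutoffs; the following is a minimal illustrative sketch with hypothetical coordinate arrays, not the actual preparation script (which used VMD):

```python
import numpy as np

rng = np.random.default_rng(3)
coords = rng.uniform(-60.0, 60.0, size=(10000, 3))  # hypothetical atom coordinates (A)
n1_pos = np.zeros(3)                                # stand-in for the anticodon N1 position

# Truncation: keep atoms within 25 A of the N1 atom of the middle anticodon base.
keep = np.linalg.norm(coords - n1_pos, axis=1) <= 25.0
system = coords[keep]

# Restraint bookkeeping: atoms farther than 22 A from the center of the water
# sphere are position-restrained (harmonic springs, 10 pN/A in the text).
restrained = np.linalg.norm(system, axis=1) > 22.0
print(f"{keep.sum()} atoms kept, {restrained.sum()} of them flagged for restraint")
```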
Multi-Dimensional Adaptive Biasing Force (ABF) Method
We performed the adaptive biasing force (ABF) molecular dynamics method [26-28] to evaluate multi-dimensional free energy profiles in terms of d_1, d_2, and d_3, which were defined as the distances of the 1st, 2nd, and 3rd base-pairs in Å, respectively (Tab. 1). Specifically, each d_i was evaluated as the distance between the centers of the hydrogen donor and acceptor atoms of codon and anticodon (Table 1). The free energy profile G(d_1, d_2, d_3) with respect to the three variables d_1 to d_3 was obtained through the analysis of the ABF results. The probability P(d_1, d_2, d_3) of each state obeys the Boltzmann relation P ∝ exp[−G/k_B T] (Eq. 1); P was averaged over the simulation trials for each model, and then G(d_1, d_2, d_3) was evaluated as G = −k_B T ln P, up to an additive constant (Eq. 2). Then, to evaluate the free energy difference between the codon-anticodon bound and unbound states, we defined the free energy scores G_bound and G_unbound, and their gap ∆G_binding := G_bound − G_unbound (Eq. 3). Here we defined the bound and unbound states as ∀i : 4.0 ≤ d_i ≤ 6.0 and ∀i : 7.0 ≤ d_i ≤ 9.0, respectively (Fig. 2). The distance range for the bound state corresponds to the codon-anticodon (AUG-CAU) structure [11], and that for the unbound state is based on a previous study that described the unbound conformation of the complex [34]. G_bound and G_unbound were hence weighted averages of G over the corresponding regions; the projected profiles G(d_1, d_2), G(d_1, d_3), and G(d_2, d_3) were defined in the same way.
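As a concrete illustration of Eqs. 1-3 as reconstructed above, the following minimal sketch (not the authors' analysis code) shows how the free energy profile and the bound/unbound gap could be computed from a binned probability array; the array `prob`, the bin grid, and the temperature are hypothetical stand-ins:

```python
import numpy as np

kB_T = 0.0083145 * 310.0   # k_B T in kJ/mol at 310 K; any consistent unit works

# Hypothetical stand-in: prob[i, j, k] is the normalized probability of finding
# (d1, d2, d3) in bin (i, j, k), averaged over the simulation trials (Eq. 1).
edges = np.arange(3.0, 12.25, 0.25)                  # bin edges in angstroms
centers = 0.5 * (edges[:-1] + edges[1:])
rng = np.random.default_rng(0)
prob = rng.random((centers.size,) * 3) + 1e-9
prob /= prob.sum()

# Eq. 2: Boltzmann inversion, G = -kB*T ln P (defined up to an additive constant).
G = -kB_T * np.log(prob)

# Eq. 3: bound (4 <= d_i <= 6) and unbound (7 <= d_i <= 9) regions, each scored
# as a probability-weighted average of G over the region, as the text describes.
def region_score(lo, hi):
    m = (centers >= lo) & (centers <= hi)
    w = prob[np.ix_(m, m, m)]
    return np.sum(w * G[np.ix_(m, m, m)]) / np.sum(w)

dG_binding = region_score(4.0, 6.0) - region_score(7.0, 9.0)
print(f"Delta G_binding = {dG_binding:.2f} kJ/mol")
```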
Reconstruction of Averaged Structure
Typical structures, i.e., atomic coordinates corresponding to a specific reaction coordinate (d_1, d_2, d_3), were obtained by averaging the sampled atomic coordinates as follows. Here, we assumed that the reaction coordinate falls within the bin of a representative reaction coordinate; then, each atomic coordinate was averaged over all the snapshots (sampled at 10 ps intervals) corresponding to that representative reaction coordinate (d_1, d_2, d_3).

We evaluated the free energy scores of the bound and unbound states (G_bound and G_unbound; see Eq. 3) averaged over five simulation trials (Fig. S1), and present their difference ∆G_binding in Fig. 4. In the case of the cognate start codon AUG, ∆G_binding must be negative to stabilize the initiation of translation, and it was indeed ∼ −4 k_B T. In contrast, the GUG codon, less frequently used as a start codon [17,20,21], showed a positive ∆G_binding of ∼ 2 k_B T. For the CUG codon, which is considered a stronger start codon than GUG [17,20,21], ∆G_binding showed an intermediate value of ∼ 1 k_B T. Thus, ∆G_binding accounts for the observed initiation rates from the respective start codons. Note that ∆G_binding could alternatively be obtained from individual simulation trajectories; treating each trial as a sample, we confirmed the significance of the difference between AUG and GUG by Welch's t-test (p = 0.044). (Figure caption: Conformational changes inferred from the free energy landscape (Fig. 5). The transition paths R_n^• (• = AUG or GUG) are shown by black arrows.)
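The averaged-structure reconstruction and the per-trial significance test can be outlined as follows; `traj_coords` and `traj_rc` are hypothetical arrays (snapshot coordinates and their reaction-coordinate values), and the per-trial ∆G values are placeholders, so this is an illustrative sketch rather than the authors' pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_snap, n_atoms = 5000, 300                            # hypothetical trajectory size
traj_coords = rng.normal(size=(n_snap, n_atoms, 3))    # stand-in snapshot coordinates
traj_rc = rng.uniform(3.0, 12.0, size=(n_snap, 3))     # (d1, d2, d3) per snapshot

# Average all snapshots whose (d1, d2, d3) falls in the representative bin,
# here the bound-state range 4 A <= d_i <= 6 A; in a real analysis the
# snapshots would first be aligned to a common reference frame.
mask = np.all((traj_rc >= 4.0) & (traj_rc <= 6.0), axis=1)
avg_structure = traj_coords[mask].mean(axis=0)         # shape (n_atoms, 3)
print(f"{mask.sum()} snapshots averaged")

# Welch's t-test on per-trial Delta G_binding values (five trials per model),
# as used for the AUG-vs-GUG comparison; the numbers below are placeholders.
dG_AUG = np.array([-4.1, -3.6, -4.4, -3.9, -4.2])
dG_GUG = np.array([2.3, 1.5, 2.8, 1.1, 2.0])
t, p = stats.ttest_ind(dG_AUG, dG_GUG, equal_var=False)
print(f"Welch's t-test: p = {p:.3g}")
```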
Free Energy Landscape for the Binding Process
To analyze the base-pair binding dynamics in detail, we constructed the projected profiles G(d_1, d_2), G(d_1, d_3), and G(d_2, d_3) from G(d_1, d_2, d_3) (see Materials and Methods), as shown in Fig. 5. We inferred the transition dynamics from these free energy profiles; Fig. 6 shows the suggested paths and their schematics.
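Such projections can be obtained by marginalizing the probability over the remaining coordinate and inverting again; a minimal sketch with a hypothetical probability array:

```python
import numpy as np

kB_T = 0.0083145 * 310.0
rng = np.random.default_rng(0)
prob = rng.random((36, 36, 36)) + 1e-9    # hypothetical P(d1, d2, d3) on a bin grid
prob /= prob.sum()

# Marginalize over d3: P(d1, d2) = sum_k P(d1, d2, d3_k),
# then invert: G(d1, d2) = -kB*T ln P(d1, d2).
P12 = prob.sum(axis=2)
G12 = -kB_T * np.log(P12)
G12 -= G12.min()                          # shift the global minimum to zero
print(G12.shape, round(float(G12.max()), 2))
```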
In the case of the AUG start codon, the following transitions were expected in the AUG-CAU dynamics in equilibrium (Fig. 6). Starting from the bound state, d_3 shows large fluctuations while d_1 and d_2 show small ones (R_3^AUG). d_1 and d_2 are bistable (bound and unbound), and once the 3rd G:C base-pair is broken (large d_3), the 2nd U:A base-pair may become unbound (R_2^AUG, to the large-d_2 state). Only after that does the 1st A:U base-pair dissociate (R_1^AUG, to the large-d_1 state); this is expected to occur less frequently due to a higher barrier than those for R_2^AUG and R_3^AUG.
Starting from the unbound state and reversing the process above, the AUG codon should bind to the CAU anticodon from the side of the 1st A:U base-pair, followed by the 2nd and then 3rd base-pairs (Fig. 6). This result suggests that the recognition of the 1st A:U base-pair is very important for the accurate start codon recognition, in agreement with the role and location of eIF1 in the P-site [11,12] (see Fig. 8 below).
In the case of the GUG codon, d_1 and d_2 show a transition (R_1^GUG) between two distinct (metastable) states (both bound and both almost unbound), as shown by G(d_1, d_2) in Fig. 6, while d_3 remains mostly high. Binding of the 3rd base-pair (transition to lower d_3) is possible but less frequent, and simultaneous binding of the 1st and 3rd base-pairs is rare (Fig. 5). As expected, the affinity of the 1st base-pair (wobble G:U) is lower than in the case of AUG. This result is consistent with the infrequent GUG initiation observed in previous works [17,20,21].
In the case of the CUG codon, many metastable states were observed, as shown in Fig. 5. Transition paths appear more complicated than in the AUG and GUG cases. Although concurrent binding of the 2nd and 3rd base-pairs (lower d_2 and d_3) is possible, the 1st base-pair cannot form simultaneously with these other base-pairs (Fig. 5), which makes the CUG pairing unstable compared to AUG base-pairing. Overall, however, the binding free energy ∆G_binding is lower for CUG than for GUG (Fig. 4) (see below). Note that, technically, the rugged free energy landscape (Fig. 5) demanded more computational cost for the ABF sampling, as suggested by the slow convergence shown in Fig. 3.

To visualize the bound structures and consider the mechanism underlying the abovementioned results (Figs. 5 and 6), we evaluated the averaged structure of the bound state for each model (see Eq. 6 in Materials and Methods). The averaged structures and schematics of the codon and anticodon are shown in Fig. 7.
Binding Dynamics from the Structural Views
In the case of AUG, the averaged bound-state structure is ordered and tightly bound. It is reasonable, as it is the correct start codon, and the binding free energy is negative (Fig. 4). Note that eIF1 and eIF1A molecules (shown in red and blue in Fig. 7, respectively) are present near the AUG-CAU base-pair. It was experimentally suggested that these proteins contribute to the accurate start codon recognition [5,[9][10][11][12][13] (see Fig. 8).
By contrast, the structure of the GUG-CAU base-pair is disordered (Fig. 7). The mismatched bases (the 1st G:U) avoid each other (rather than forming a wobble base-pair) and the uracil in tRNA tilts toward the 2nd U:A base-pair. The directions of the 2nd and 3rd base-pairs were consequently affected, resulting in the unstable bound state (Fig. 4). Although the projected free energy profile G(d 1 , d 2 ) (Fig. 5) suggests cooperative binding of the 1st and 2nd base-pairs (Fig. 7), the 3rd base-pair is mostly separate, which may prevent the recognition of the GUG start codon.
In the case of CUG, the structure is relatively ordered (Fig. 7). Although the 1st C:U base-pair is mismatched, cytosine is smaller than guanine and adenine (purine bases), which may mitigate steric hindrance at the 1st position. As shown in Fig. 5, many meta-stable conformations are possible, which we propose to be attributed to combinations of bound and unbound conformations of the base-pairs. It is therefore reasoned that some near-bound states can occasionally allow translation initiation at this codon.
Discussion
The binding free energy ∆G_binding evaluated by our method was qualitatively consistent with experimental observation, as shown in Fig. 4. We compared the AUG start codon and two near-cognate start codons (Tab. 1). Assuming that G_unbound is common to all the models, i.e. that the free energy of the unbound state is independent of the codon, G_bound and ∆G_binding are equivalent. The difference, or penalty, in binding free energy (∆∆G) induced by the AUG → GUG and AUG → CUG substitutions was ≃ 6 k_B T (3.6 kcal/mol) and ≃ 5 k_B T (3.0 kcal/mol), respectively. This result is largely consistent with another computational approach using free energy perturbation (FEP) [24].
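The unit conversions quoted here are easy to verify at the simulation temperature of 310 K; a quick sketch:

```python
kB = 0.0019872041          # Boltzmann constant in kcal/(mol*K)
T = 310.0                  # simulation temperature in K
kBT = kB * T               # ~0.616 kcal/mol
print(f"6 kBT = {6 * kBT:.2f} kcal/mol")   # ~3.7, matching the quoted ~3.6
print(f"5 kBT = {5 * kBT:.2f} kcal/mol")   # ~3.1, matching the quoted ~3.0
```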
In contrast to the previous work, however, our ABF-based approach provided not merely the binding free energy but also information on the nucleic acid binding dynamics, represented by the free energy landscape (Figs. 5 and 6). The free energy profiles shown in Fig. 5 suggested an unexpected stability of the 1st A:U base-pair compared to the 3rd G:C base-pair. According to the free energy profile of AUG binding, dissociation of the triplet base-pairs starts at the 3rd G:C (Fig. 6, right column). In the open PIC model, which is suggested to occur during the scanning process prior to start codon recognition [11], the tRNA is not perpendicularly attached to the mRNA, in contrast to the P-site tRNA positioning during the elongation phase. This conformation appears to allow the 5'-side (i.e. cytosine side) of the anticodon to curve away from the start codon, suggesting a stretching force towards the tRNA side (Fig. 8(a)). We propose that this stretching decreases the affinity of the 3rd G:C base-pair during the scanning process (Fig. 8(a)). In contrast, the affinity of the 1st A:U base-pair is likely to be increased by interaction with eIF1, as is that of the 2nd U:A base-pair by eIF1A, as proposed previously [5,9-13] (Fig. 8(a)). In strong agreement with the role of eIF1 in stabilizing the 1st A:U base-pair, our averaged simulation structure indeed positions Asn-34 and Arg-36 in its proximity (Fig. 8(b)). In fact, the residues Asn-34:Gly-35:Arg-36, termed β-hairpin loop 1, are absolutely conserved from yeast to human. Mutations altering Asn-34 and Arg-36 display a significant increase in UUG initiation [35], in agreement with their crucial role in maintaining the open, scanning-competent PIC conformation.
The free energy landscape of the GUG-anticodon base-pairs (Fig. 5) and its averaged simulation structure in the P-site (Figs. 7 and 8(b)) also suggest that the same structural restriction in turn prevents G:U pairing at the 1st position, which otherwise occurs frequently in its free form. The disordered 3rd G:C base-pair seen in the GUG structure appears to be consistent with this idea (Fig. 7). Since we did not observe a strong disorder in the CUG-anticodon structure (Fig. 7), we propose that the near-cognate start codon usage characteristic of eukaryotic initiation is mostly explained by a strong perturbation on GUG accommodation in the P-site due to the steric restriction imposed by the eIF1 β-hairpin loop. In agreement with this thesis, the level of CUG initiation is essentially equivalent to that of GUG initiation in the yeast S. cerevisiae [19] (and our personal observations), although the former is significantly stronger than the latter in various distinct contexts in human cells [21].
The adaptive biasing force (ABF) method adopted in this study was previously used to evaluate free energy profiles of molecular dynamics in other biomolecular systems [26][27][28]. Among many extended ensemble simulation methods, we chose ABF to mitigate the difficulties in studying nucleic acid (DNA and RNA) interactions. An example of such difficulty is encountered when high temperature is applied in replica exchange molecular dynamics [36] that causes complete separation of nucleic acid strands, which cannot revert to the original structure within the simulation timescale.
To overcome this problem, extended simulation methods without destroying the structure should be sought, and the ABF can offer a solution. We hope that further development of this approach can also contribute to the improvement of analysis and prediction of RNA structural dynamics [37,38].
Conclusion
In this study, we proposed a computational method to obtain multi-dimensional free energy profiles for codon-anticodon base-pairing by adaptive biasing force (ABF) molecular dynamics [26-28]. This reaction-coordinate-based analysis method provided the equilibrium profiles of base-pair binding dynamics as they depend on the ribonucleotide sequence (here, start codon-anticodon base-pairing). Our method successfully detected the changes in the free energy landscape (Figs. 5 and 6) induced by site-specific nucleotide substitutions in the start codon (e.g. AUG → GUG) and offered a mechanistic explanation of how such changes lead to perturbations in the initiation frequency from the altered start codons.
Author Contributions
TK, KA, and YT conceived and designed the research, analyzed the data, and wrote the paper. TK and YT performed the numerical simulations. | 2021-03-02T14:22:15.830Z | 2021-02-25T00:00:00.000 | {
"year": 2021,
"sha1": "59a314c2e3654ddf8d63645cd56eb1274323ec7a",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/02/25/2021.02.24.432637.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "59a314c2e3654ddf8d63645cd56eb1274323ec7a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Computer Science",
"Medicine"
]
} |
253179171 | pes2o/s2orc | v3-fos-license | Isometries on almost Ricci–Yamabe solitons
The purpose of the present paper is to examine the isometries of almost Ricci–Yamabe solitons. Firstly, the conditions under which a compact gradient almost Ricci–Yamabe soliton is isometric to the Euclidean sphere S^n(r) are obtained. Moreover, we show that the potential f of a compact gradient almost Ricci–Yamabe soliton agrees with the Hodge–de Rham potential h. Next, we study a complete gradient almost Ricci–Yamabe soliton with α ≠ 0 and a non-trivial conformal vector field with non-negative scalar curvature, and prove that it is either isometric to the Euclidean space E^n or to the Euclidean sphere S^n. Also, solenoidal and torse-forming vector fields are considered. Lastly, some non-trivial examples are constructed to verify the obtained results.
Introduction
One of the most significant approaches to understanding geometric structure in Riemannian geometry is to study the theory of geometric flows. The Ricci flow is a well-known geometric flow introduced by Hamilton [15], who used it to prove a three-dimensional sphere theorem [14]. The Ricci flow contributed to the proof of Thurston's conjecture, including, as a special case, the Poincaré conjecture. A Ricci soliton on a Riemannian manifold (M, g) is a self-similar solution to the Ricci flow and is defined by Ric + (1/2) L_V g = λ g, (1.1) where L_V g denotes the Lie derivative of g along the potential vector field V, Ric is the Ricci curvature of M, and λ is a real constant. When the vector field V is the gradient of a smooth function f on M, that is, V = ∇f, we say that the Ricci soliton is gradient (for details see [9,20]). According to Petersen and Wylie [20], a gradient Ricci soliton is rigid if it is a flat N × R^k, where N is Einstein; they also gave a corresponding classification. The notion of an almost Ricci soliton was introduced by Pigola et al. [21] by taking λ to be a smooth function in the definition (1.1) of a Ricci soliton. The authors in [2] studied the rigidity of gradient almost Ricci solitons and showed that they are isometric to the Euclidean space R^n or the sphere S^n. Barros et al. [3], Yang and Zhang [28], and Cao et al. [8] obtained several rigidity results.
To tackle the Yamabe problem on manifolds of positive conformal Yamabe invariant, Hamilton introduced the geometric flow known as the Yamabe flow. The Yamabe soliton is a self-similar solution to the Yamabe flow. On a Riemannian manifold (M, g), a Yamabe soliton is given by (1/2) L_V g = (R − λ) g, (1.2) where R is the scalar curvature of the manifold and λ is a real constant. Even though the Ricci and Yamabe solitons are similar in dimension n = 2, the solitons behave differently in dimension n > 2, as the Yamabe soliton preserves the conformal class of the metric but the Ricci soliton does not in general. If λ is a smooth function in (1.2), then it is called an almost Yamabe soliton. Alkhaldi et al. [1] gave a characterization of almost Yamabe solitons with a conformal vector field. Barbosa and Ribeiro [4] gave some rigidity results for Yamabe almost solitons. Güler and Crasmareanu [13], in 2019, introduced the notion of the Ricci-Yamabe map, which is a scalar combination of the Ricci and Yamabe flows. In [13], the authors define the following: Definition 1.1 [13] The map RY_(α,β,g) : I → T_2^s(M) given by RY_(α,β,g) := ∂_t g + 2α Ric + β R g is called the (α, β)-Ricci-Yamabe map of the Riemannian flow (M, g). If RY_(α,β,g) ≡ 0, then g(·) will be called an (α, β)-Ricci-Yamabe flow. The Ricci-Yamabe flow can be a Riemannian, semi-Riemannian, or singular Riemannian flow owing to the involvement of the scalars α and β; such different choices can be useful in some physical models, for instance in relativity theory. The Ricci-Yamabe soliton emerges as the limit of the solution of the Ricci-Yamabe flow. Definition 1.2 A Riemannian manifold (M^n, g), n > 2, is said to admit an almost Ricci-Yamabe soliton (g, V, λ, α, β) if there exists a smooth function λ such that the soliton equation (1.3) holds, where α, β ∈ R (a commonly used form of (1.3) is displayed below). Almost Ricci-Yamabe solitons are of particular interest as they generalize a large group of well-known solitons, such as: • Ricci almost soliton (α = 1, β = 0).
If V is the gradient of some smooth function f on M, then the above notion is called a gradient almost Ricci-Yamabe soliton, and (1.3) reduces to its gradient form (1.4), where ∇^2 f is the Hessian of f; see the display below. The almost Ricci-Yamabe soliton (ARYS) is said to be expanding, shrinking, or steady if λ < 0, λ > 0, or λ = 0, respectively. In particular, if λ is constant, then the ARYS reduces to a Ricci-Yamabe soliton. Many geometers, such as the authors of [10,11,22], have analyzed Ricci-Yamabe solitons. In [23,26], the authors studied Ricci-Yamabe solitons in different spacetimes. Singh and Khatri [16,25] studied ARYS in almost contact manifolds. Siddiqi et al. [24] considered ARYS on static spacetimes.
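For the reader's convenience, the defining equations can be written out in the form most commonly used in this literature, consistent with the special case α = 1, β = 0 listed above; sign and normalization conventions for λ and β vary between papers, so the display below is a reconstruction of (1.3)-(1.4) rather than a verbatim copy:

```latex
% Almost Ricci-Yamabe soliton (ARYS), common form of (1.3):
\mathcal{L}_V g + 2\alpha \operatorname{Ric} = (2\lambda - \beta R)\, g,
\qquad \alpha, \beta \in \mathbb{R}, \quad \lambda \in C^{\infty}(M).

% Gradient case (1.4), with V = \nabla f and \mathcal{L}_{\nabla f} g = 2\nabla^2 f:
\nabla^2 f + \alpha \operatorname{Ric} = \Bigl(\lambda - \tfrac{\beta}{2} R\Bigr) g.

% For \alpha = 1, \beta = 0 this reduces to the (almost) Ricci soliton
% \mathcal{L}_V g + 2 \operatorname{Ric} = 2\lambda g.
```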
Motivated by the above studies, we investigate ARYS under certain conditions. The present paper is organized as follows. In Sect. 2, several rigidity results are obtained by following the methods of Barros and Ribeiro [5] for compact almost Ricci solitons. Also, we obtain the conditions under which a compact gradient ARYS is isometric to the Euclidean sphere S^n(r). In Sect. 3, ARYS with conformal, solenoidal, and torse-forming vector fields are considered. We show that a complete ARYS with α ≠ 0 and potential vector field a conformal vector field is either isometric to the Euclidean space E^n or to the Euclidean sphere S^n(r). Also, complete gradient ARYS with a conformal vector field are investigated. Lastly, ARYS with solenoidal and torse-forming vector fields are considered, and several rigidity results are obtained, which are verified by constructing non-trivial examples.
Some rigidity results on ARYS
Before proceeding to the main results of this paper, we obtain several lemmas on ARYS and gradient ARYS which will be used later.
Lemma 2.1
For a gradient ARYS (M^n, g, ∇f, λ), the following formulae hold: Proof Equation (1) is directly obtained by taking the trace of the soliton equation. For Eq. (2), we consider Schur's Lemma (n > 2); using the Ricci identity in the resulting expression and then invoking equation (1) yields the claim.
Petersen and Wylie [20] obtained the following Bochner formula for Killing and gradient fields:
Lemma 2.2 Given a vector field X on a Riemannian manifold
When X = ∇f is a gradient field and Z is any vector field, we have the corresponding identity. Taking the inner product of Eq. (2) in Lemma 2.1 with an arbitrary vector field Z gives the analogous relation. In particular,
Lemma 2.3 For an ARYS, the following identity holds:
Making use of Schur's Lemma, Lemma 2.2, (2.4) and (2.5), we get the required results. This completes the proof.
Moreover, from (1.3) we have a further identity; in consequence of this, Lemma 2.3 gives the following statement: under the stated condition, X is Killing and M^n is an RYS.
Proof Since M^n is compact, integrating Lemma 2.3 gives (2.7). In view of our hypothesis and (2.7), we get ∇X = 0, which implies L_X g = 0, i.e., X is a Killing vector field. In this case, the ARYS is simply an RYS, since M^n is an Einstein manifold, which implies that λ is constant. This completes the proof.
then X is Killing.
Theorem 2.7 Let
Combining the second argument of Lemma 2.1 with (2.8), and then taking the divergence of the obtained expression, yields (2.9). Now, using commuting covariant derivatives and the Ricci identity, we have an auxiliary expression; making use of it in (2.9), we get (2.10). Combining the first argument of Lemma 2.1, (2.2), and (2.10), we obtain a further relation. Making use of this fact and of the hypothesis, since M^n is compact, we get (2.14). Combining (2.2) with (2.14) proves the second part, provided α + (n − 1)β ≠ 0.
Now, using the foregoing equation in (2.14) yields the stated conclusion. With regard to Theorem 2.7, Corollary 2.8, and Tashiro's result [27], which states that a compact Riemannian manifold (M^n, g) is conformally equivalent to S^n(r) provided there exists a non-trivial function f : M^n → R such that ∇^2 f = (Δf/n) g, we obtain the following result, which is a generalization of Corollary 1 of [5] and Corollary 1.10 of [12].
A compact gradient ARYS (M^n, g, ∇f, λ, α, β) is isometric to a Euclidean sphere S^n(r) if one of the following conditions holds:
(1) M n has constant scalar curvature.
(2) M^n is a homogeneous manifold. The Hodge-de Rham decomposition theorem states that we may decompose a vector field X over a compact oriented Riemannian manifold as the sum of the gradient of a function h and a divergence-free vector field Y, i.e., X = ∇h + Y, where div Y = 0. (2.16) Taking the divergence of (2.16) gives div X = Δh. From the fundamental equation, we have 2 div X + (2α + nβ)R = 2nλ. Therefore, combining both equations results in the following expression for Δh (displayed below). On the other hand, if (M^n, g, ∇f, λ) is also a compact gradient ARYS, then from equation (1) of Lemma 2.1 the same expression holds with Δh replaced by Δf; hence Δ(f − h) = 0 and, by Hopf's theorem, the potential f agrees with the Hodge-de Rham potential h up to an additive constant.
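The computation sketched in this paragraph can be displayed as follows; this is assembled from the relations stated above (with Δ the Laplace-Beltrami operator), not the paper's numbered display:

```latex
% Hodge-de Rham decomposition (2.16) and its divergence:
X = \nabla h + Y, \qquad \operatorname{div} Y = 0
\quad \Longrightarrow \quad \operatorname{div} X = \Delta h.

% Tracing the soliton equation gives 2\operatorname{div} X + (2\alpha + n\beta) R = 2n\lambda,
% hence
\Delta h = n\lambda - \frac{2\alpha + n\beta}{2}\, R.
```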
ARYS with certain conditions on the potential vector field
In this section, we consider ARYS whose potential vector field satisfies certain conditions, such as conformal, solenoidal, and torse-forming vector fields. First we recall the definition of a conformal vector field. A smooth vector field X on a Riemannian manifold is said to be a conformal vector field if there exists a smooth function ψ on M that satisfies L_X g = 2ψ g. We say that X is non-trivial if X is not Killing, that is, ψ ≠ 0. Conformal vector fields under almost Ricci solitons and almost Ricci-Bourguignon solitons were considered by the authors in [5,6], who obtained interesting results. Now, we state and prove the following lemma.
Lemma 3.1 Let (M^n, g, X, λ) (n ≥ 3) be an ARYS with α ≠ 0. If X is a conformal vector field with potential function ψ, then R and λ − ψ are constants.
Proof Since X is a conformal vector field, we have L_X g = 2ψg. Making use of this in the soliton equation gives (3.2). Making use of Schur's Lemma in (3.3) and inserting it in the covariant derivative of (3.2) results in (n − 2)α∇R = 0. As α ≠ 0, R is constant, which then implies from (3.2) that λ − ψ is also constant. This completes the proof.
Theorem 3.2 Let (M^n, g, X, λ) (n ≥ 3) be a compact ARYS with α ≠ 0. If X is a non-trivial conformal vector field, then M^n is isometric to a Euclidean sphere S^n(r).
Proof In regard of Lemma 3.1, we know that R and λ − ψ are constants. Moreover, using Lemma 2.3 of [29], we conclude that R ≠ 0; otherwise ψ = 0, a contradiction, as ψ ≠ 0. Taking the Lie derivative of (3.1) and using the fact that R and λ − ψ are constants gives the required relation. Now, applying Theorem 4.2 of [29], we conclude that M^n is isometric to a Euclidean sphere S^n(r). This completes the proof. Now, we look at gradient ARYS admitting a conformal vector field, for which we state and prove the following:
Theorem 3.3 Let (M^n, g, ∇f, λ) (n ≥ 3) be a complete gradient ARYS with α ≠ 0. If ∇f is a non-trivial conformal vector field and the scalar curvature is non-negative, then either
(1) M^n is isometric to a Euclidean space E^n, or (2) M^n is isometric to a Euclidean sphere S^n. Moreover, ψ is a first eigenfunction of the Laplacian and λ = ((2α + nβ)/(2n))R − (λ_1/n) f + k, where k is a constant. Proof Since ∇f is a non-trivial conformal vector field, we have L_∇f g = 2ψg with ψ ≠ 0. Now, in consequence of argument (1) of Lemma 2.1, we get ψ = Δf/n ≠ 0. Moreover, from Lemma 3.1, we know that R and λ − ψ are constants. Suppose R = 0; then M^n is Ricci flat, and by using Tashiro's theorem [27] in the fundamental equation, we conclude that M^n is isometric to a Euclidean space E^n. On the other hand, suppose R ≠ 0. Then, making use of Lemma 2.1 in ψ = Δf/n gives λ = ψ + ((2α + nβ)/(2n))R. As a consequence, (3.1) becomes Ric = (R/n) g for α ≠ 0. Therefore, by invoking a theorem of Nagano and Yano [18], we can conclude that M^n is isometric to a Euclidean sphere S^n. Furthermore, taking into account the fact that Ric = (R/n) g, we can use Lichnerowicz's theorem [17]: the first eigenvalue of the Laplacian of M^n is λ_1 = R/(n − 1). Now, we make use of a well-known formula of Obata and Yano [19], which gives the relation (3.4). In view of (3.4), one easily obtains Δψ = −λ_1 ψ, that is, ψ is a first eigenfunction of the Laplacian. Also, we get Δ(Δf + λ_1 f) = 0. Then, by Hopf's theorem, we obtain Δf + λ_1 f = c, where c is a constant. Combining the last expression with Lemma 2.1 gives the required expression for λ. This completes the proof.
In [6], the authors considered almost Ricci-Bourguignon solitons and almost η-Ricci-Bourguignon solitons with solenoidal and torse-forming vector fields and obtained several rigidity results. Following similar methods, we examine ARYS (M^n, g, ξ, λ) with solenoidal and torse-forming vector fields. | 2022-10-28T15:22:51.939Z | 2022-10-26T00:00:00.000 | {
"year": 2022,
"sha1": "11f71e946cefff149f46713128e019376f008891",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40065-022-00404-x.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "31cbed4d8c80c23463346a7d2cd516bc971c7056",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
254471485 | pes2o/s2orc | v3-fos-license | On the Kirchhoff-Love Hypothesis (Revised and Vindicated)
The Kirchhoff-Love hypothesis expresses a kinematic constraint that is assumed to be valid for the deformations of a three-dimensional body when one of its dimensions is much smaller than the other two, as is the case for plates. This hypothesis has a long history checkered with the vicissitudes of life: even its paternity has been questioned, and recent rigorous dimension-reduction tools (based on standard Γ-convergence) have proven to be incompatible with it. We find that an appropriately revised version of the Kirchhoff-Love hypothesis is a valuable means to derive a two-dimensional variational model for elastic plates from a three-dimensional nonlinear free-energy functional. The bending energies thus obtained for a number of materials are also shown to contain measures of stretching of the plate's mid surface (alongside the expected measures of bending). The incompatibility with standard Γ-convergence also appears to be removed in the cases where contact between that method and ours can be made.
1. Points of the plate lying initially on a normal to the middle plane of the plate remain on the normal to the middle surface of the plate after bending. 2. The distance of every point of the plate from the middle surface remains unchanged by the deformation.
For mysterious reasons, these statements have come to be known as the Kirchhoff-Love hypothesis; we shall stick to this not fully justified tradition. This hypothesis has been variously criticized in the literature, mainly for the inconsistencies that it may cause with the distribution of stresses that (depending on the specific constitutive law) should sustain the assumed deformation. Podio-Guidugli [63,64] overcame this criticism by taking the view that the Kirchhoff-Love hypothesis is a constraint on the admissible deformations, which is sustained by an appropriate reactive stress, specified in a class admitted by symmetry and determined by the equilibrium equations of three-dimensional elasticity.
More recently, the rigorous analytical tool of Γ-convergence has been employed to derive plate (and shell) theories from three-dimensional elasticity. Admittedly, the problem with this method is that in general, apart from noticeable exceptions [2,7], it only affords the derivation of single powers of the energy expansion in the thickness 2h. Said differently, we can obtain from a three-dimensional constitutive model either a "membrane-dominated model" or a "flexural-dominated model" (in the words of [14]), meaning that we can isolate two-dimensional energies either linear or cubic in h, respectively. For example, models in the former category have been derived in [6] for linear plates and in [44] for nonlinear ones (as well as in [45] for nonlinear shells). Models in the latter category have been derived in [22,24] for nonlinear plates (as well as in [23] for nonlinear shells). These higher-order Γ-limits, however, need to be evaluated on the class of deformations that minimize the lower order. In other words, we may only recover the h-cubic bending energy on the minimizers of the h-linear stretching energy. 1 Sadly, the analytic notion of Γ-limit has not yet fully evolved into that of Γ-expansion, and thus it does not yet serve the purpose of deriving blended stretching and bending energies, free to conspire together in a thin sheet of a (possibly activable) elastic material, which is our objective here.
In the engineering literature, the role of Γ -convergence as a method to validate reduced elastic theories for structure mechanics has further flourished in recent years. As lucidly reviewed in [65] and [66, § 2], Γ -convergence has been employed to justify both the Kirchhoff-Love and Reissner-Mindlin theories for linearly elastic plates, in a variety of ways, which have been classified under two general categories, standard [60,61] and improved [59]. 2 For nonlinear elastic plates, standard Γ -convergence clashed with the Kirchhoff-Love hypothesis. It was proven in [22] that the rigorous bending energy (in the flexural-dominated model) is incompatible with the deformation field assumed by the Kirchhoff-Love hypothesis. Here, we try and remedy this clash by revising (and salvaging) the classical hypothesis. In particular, we shall see that the incompatibility pointed out in [22] is resolved by our revised hypothesis.
The paper is organized as follows. In Sect. 2, we recall the basic kinematics of plates and present our revision of the classical Kirchhoff-Love hypothesis. In Sect. 3, we introduce a description of the deformation of a smooth surface that relies on a notion of Cartesian connectors, which avoid the use of coordinates and Christoffel symbols. Sections 4 and 5 are devoted to the dimension reduction afforded by our revised kinematic hypothesis in two distinct constitutive classes, one for incompressible plates and the other for compressible ones. We consider a number of special nonlinear elastic models for the application of our method; for each of them we derive stretching and bending energies. The feature that these results have in common is the presence in the bending energy of stretching measures of the mid surface of the plate (alongside the expected measures of bending). In Sect. 6, we summarize our conclusions and comment on some possible avenues along which this work could be extended. The paper is closed by the Appendix, where we give explicit formulae for the mean and Gaussian curvatures of the deformed mid surface in terms of the mapping that describes it.
Kinematics of Plates
Here we wish to describe the deformation of an elastic plate with a uniform width (and a planar reference configuration). The deformation will be split into two components, a planar one, which maps the reference plane mid surface onto a deformed mid surface, and an axial one, which maps vectors normal to the mid reference surface on vectors normal to the deformed mid surface. The classical Kirchhoff-Love hypothesis consists in assuming that the second mapping is an isometry (see, for example, [76, p. 551] and [12, p. 156]). Because of this isometry, assuming regularity for the planar mapping is enough to ensure an admissible deformation for sufficiently thin plates. In the following, the latter assumption will be made precise and the isometric constraint along normals will be relaxed. In this framework, an approximate right Cauchy-Green tensor will be constructed and its invariants computed.
Kinematic Preliminaries
Let S be a bounded, two-dimensional flat domain immersed in three-dimensional Euclidean space E, and let h > 0 be a real constant. We call y : S → E an injective C^3-immersion of S and we denote its image by S := y(S). Pursuing our aim of extending the Kirchhoff-Love hypothesis, we interpret the closed set S := S × [−h, h] ⊂ E as the reference configuration of an elastic plate whose mid surface is S. As we focus on the case 2h ≪ diam S, we set diam S = 1 for simplicity, meaning that we shall rescale all lengths to diam S.
We define the mapping f : S × [−h, h] → E by f(x, x_3) := y(x) + φ(x, x_3) ν(x), (1) where ν is the unit normal vector to the deformed mid surface S and φ : S × [−h, h] → R is a C^2-function which describes how normals to S deform into normals to S. It follows from (1) that the deformation gradient reads as F = ∇f = ∇y + φ∇ν + ν ⊗ ∇φ + φ'ν ⊗ e_3, (2) where ∇ denotes the gradient in x, a prime denotes differentiation with respect to x_3, and e_3 is the unit normal to S. 3 Furthermore, φ is assumed to obey φ(x, 0) = 0 and φ'(x, 0) > 0 for all x ∈ S, (3) which is justified by the requirement that f be orientation-preserving at least for x_3 = 0, as there, by (2), det F(x, 0) = φ'(x, 0) |(∇y)e_1 × (∇y)e_2| > 0. (4) The classical Kirchhoff-Love hypothesis just requires that φ ≡ x_3, and so it trivially complies with (3). In our approach, φ will rather remain free and be either determined to enforce the constraint of bulk incompressibility or used to minimize the elastic energy stored across the thickness of the deformed plate. We now see how (3) can ensure that f is an orientation-preserving C^2-diffeomorphism onto its image, for appropriately small values of h.
Proof In the following two steps, we adapt Theorem 4.1-1 of [12, p. 157] to our setting.
1. By the continuity of F, inequality (4) extends to a corresponding bound. Since S is compact, η attains its minimum in S. Moreover, the minimum of η over S must be strictly positive, otherwise y would fail to be an injective immersion. Thus, h can be chosen so that (5) holds, which is where we shall hereafter take it to be.
In Sects. 4 and 5, we shall use a polynomial approximation for φ in computing the invariants of the right Cauchy-Green tensor C f associated with f . Now, we justify this approximation and lay down a number of preliminary formulae for the invariants of C f .
Invariants of C f
We learned in Proposition 1 how to choose h > 0 sufficiently small so that the mapping f is a C^2-diffeomorphism. Hypothesis (3) also implies that ∇φ(x, 0) = 0 ∀ x ∈ S; thus, it is also possible to choose h so small that (6) holds. This inequality will be assumed to be valid in the following, and h will be taken to comply with both (5) and (6). Within the approximation stated in (6), F in (2) will be written as in (7).

Definition 1 The corresponding right Cauchy-Green tensor C_f associated with f is given by (8a)-(8c), where C := (∇y)^T (∇y) and B := (∇y)(∇y)^T. Here C is the right Cauchy-Green tensor associated with the deformation y, while B is the left Cauchy-Green tensor associated with the same deformation. The reader should heed that all tensors C, C_1, and C_2 act on the two-dimensional space V_3 := {v ∈ V : v · e_3 = 0}, where V is the translation space associated with three-dimensional Euclidean space E. B(x), however, at the place y(x) ∈ S, acts on the two-dimensional space V_ν := {v ∈ V : v · ν = 0}.
Remark 1
The curvature tensor ∇_s ν of S, where ∇_s denotes the surface gradient on S, is a symmetric tensor on V_ν (see, for example, [27]). We easily see that both tensors C_1 and C_2 can be expressed in terms of ∇_s ν. As ∇_s ν = (∇ν)(∇y)^{-1}, it readily follows from (8c) and the symmetry of ∇_s ν that a corresponding identity holds. We now compute the principal invariants of C_f.
Proposition 2
The first invariant, I_1 := tr C_f, can be given the form (11), where H := (1/2) tr(∇_s ν) and K := det(∇_s ν) (12) are the mean and Gaussian curvatures of S, respectively.
Proof We see from equations (8a)-(8c) and Remark 1 that the stated form results. The desired conclusion follows from the identity, which we now proceed to prove. First, we represent the curvature tensor ∇_s ν of S locally as ∇_s ν = κ_1 n_1 ⊗ n_1 + κ_2 n_2 ⊗ n_2, (15) where κ_1, κ_2 are the principal curvatures of S and n_1, n_2, orthogonal unit vectors of V_ν, are the corresponding principal directions of curvature, so that, at each place on S, ∇_s ν is a symmetric tensor acting on V_ν. Since B is also a symmetric tensor acting on V_ν, it can be represented in the frame (n_1, n_2) as B = B_11 n_1 ⊗ n_1 + B_22 n_2 ⊗ n_2 + B_12 (n_1 ⊗ n_2 + n_2 ⊗ n_1).
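The representation (15) and the definitions (12) can be checked numerically: build ∇_s ν from sample principal curvatures and directions and verify that the trace and determinant return H = (κ_1 + κ_2)/2 and K = κ_1 κ_2. A short sketch with arbitrary values:

```python
import numpy as np

kappa1, kappa2 = 0.7, -0.3                  # sample principal curvatures
theta = 0.4                                 # arbitrary orientation of the eigenframe
n1 = np.array([np.cos(theta), np.sin(theta), 0.0])
n2 = np.array([-np.sin(theta), np.cos(theta), 0.0])

# Curvature tensor on the tangent plane, as in the local representation (15):
grad_nu = kappa1 * np.outer(n1, n1) + kappa2 * np.outer(n2, n2)

H = 0.5 * np.trace(grad_nu)                 # mean curvature, cf. (12)
# Gaussian curvature: determinant of the 2x2 restriction to span{n1, n2}:
K = np.linalg.det(np.array([[n1 @ grad_nu @ n1, n1 @ grad_nu @ n2],
                            [n2 @ grad_nu @ n1, n2 @ grad_nu @ n2]]))
assert np.isclose(H, 0.5 * (kappa1 + kappa2))
assert np.isclose(K, kappa1 * kappa2)
```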
Equation (11) shows that I_1 involves at most quadratic terms in φ and φ'. As a consequence of the orthogonal decomposition in (8a), the second and third invariants of C_f, I_2 and I_3, will also involve higher powers of φ, but not of φ'. To justify the power expansion of the stored elastic energy considered in the following, we need only retain in I_2 and I_3 the terms at most quadratic in φ; all higher powers of φ will be neglected.
Proposition 3
The third invariant, I_3 := det C_f, is expressed by (18). Proof First, we note that, by (8a), (19) holds. Then we consider two elementary identities valid for any second-order tensor A on V_3: for any orthonormal basis (e_1, e_2) of V_3, det A = Ae_1 × Ae_2 · e_3 and tr A = (Ae_1 × e_2 + e_1 × Ae_2) · e_3.
Making use of these identities, we readily obtain the corresponding expressions from (8b). Basic properties of the trace and determinant then ensure (21) and (22a)-(22c). Moreover, the Cayley-Hamilton theorem yields one further identity, which together with (19), (21), and (22a)-(22c) leads us to (18).
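The two elementary identities invoked in this proof are easy to verify numerically for a tensor acting on V_3, the plane orthogonal to e_3; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
e1, e2, e3 = np.eye(3)
A2 = rng.normal(size=(2, 2))              # arbitrary 2x2 block
A = np.zeros((3, 3))
A[:2, :2] = A2                            # tensor on V_3 = {v : v . e3 = 0}

det_id = np.cross(A @ e1, A @ e2) @ e3    # det A = A e1 x A e2 . e3
tr_id = (np.cross(A @ e1, e2) + np.cross(e1, A @ e2)) @ e3
assert np.isclose(det_id, np.linalg.det(A2))
assert np.isclose(tr_id, np.trace(A2))
```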
Proposition 4 The second invariant I 2 of C f is given by
Proof It follows from (8a) that (24) holds. Since C_φ is a tensor on V_3, again by the Cayley-Hamilton theorem, it satisfies (25), where I_2 is the identity on V_3. Taking the trace of both sides of (25), we obtain from (24) the corresponding trace relation. The desired conclusion then follows from (21), (22a)-(22c), and (11), which supply the traces involved.

In Sect. 4, we shall consider materials that obey the incompressibility constraint, I_3 = 1. There, (18) will turn into a differential equation that determines φ.
In preparation for this, in the following section we refresh the preliminaries of differential geometry of surfaces in a way that avoids local charts of coordinates, but resorts instead to a number of vector fields, which describe the correspondence between local movable frames in the reference and current configurations of a material surface.
Cartesian Connectors
Here, we introduce the notion of Cartesian connectors, which in our view constitute a viable alternative to Christoffel symbols. In terms of these connectors, we reformulate the classical theorema egregium of Gauss and the Codazzi-Mainardi compatibility conditions.
We reformulate the essentials of the differential geometry of smooth surfaces embedded in three-dimensional space. For definiteness, we shall assume that the mapping y that deforms S into S is of class C 3 .
Letting (r_1, r_2) be the right principal directions, that is, the (normalized) eigenvectors of C, and (l_1, l_2) the left principal directions, that is, the (normalized) eigenvectors of B, with corresponding principal stretches (common to both tensors) λ_1 > 0 and λ_2 > 0, we may represent ∇y, C, and B as follows (see, for example, [28, p. 74]): ∇y = λ_1 l_1 ⊗ r_1 + λ_2 l_2 ⊗ r_2 (27a), C = λ_1^2 r_1 ⊗ r_1 + λ_2^2 r_2 ⊗ r_2 (27b), and B = λ_1^2 l_1 ⊗ l_1 + λ_2^2 l_2 ⊗ l_2 (27c). We shall assume that the Cartesian frames (r_1, r_2, e_3) and (l_1, l_2, ν) are oriented so that e_3 = r_1 × r_2 and ν = l_1 × l_2. It should be kept in mind that both r_1 and r_2 lie in the (x_1, x_2) plane; ∇ denotes the two-dimensional gradient in this plane, whereas ∇_s denotes the surface gradient on S. The connector c is a vector field in the plane such that ∇r_1 = r_2 ⊗ c (28a) and ∇r_2 = −r_1 ⊗ c (28b). The existence of c and the specific form of (28a), (28b) follow from the requirement that the right principal directions (r_1, r_2) be orthonormal everywhere on S. 4 Clearly, if r_1 is known then c is defined as c := (∇r_1)^T r_2; on the other hand, if c is assigned, at least locally, in the class C^1, then r_1 (and r_2) can be determined up to a rigid rotation by solving equations (28a), (28b). To this end, however, c must be compatible; it follows from the symmetry of both ∇^2 r_1 and ∇^2 r_2 that the compatibility condition reads as curl c = 0 (29), which for a simply connected S implies that c = ∇Φ, where Φ is an appropriate scalar potential. Since we assume that both r_1 and r_2 are determined by y, we shall here consider c as known and satisfying (29).
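The defining relation c := (∇r_1)^T r_2 and the potential representation c = ∇Φ can be illustrated with a director field generated by an explicit, arbitrarily chosen potential Φ; a minimal finite-difference sketch:

```python
import numpy as np

# Frame field r1 = (cos F, sin F), r2 = (-sin F, cos F) for a chosen potential F.
F = lambda x1, x2: 0.3 * x1 * x2 + 0.1 * x1 ** 2   # arbitrary scalar potential
h = 1e-5                                           # finite-difference step
x1, x2 = 0.4, -0.7                                 # arbitrary evaluation point

def r1(x1, x2):
    f = F(x1, x2)
    return np.array([np.cos(f), np.sin(f)])

r2v = np.array([-np.sin(F(x1, x2)), np.cos(F(x1, x2))])

# c = (grad r1)^T r2, with grad r1 approximated by central differences.
grad_r1 = np.column_stack([
    (r1(x1 + h, x2) - r1(x1 - h, x2)) / (2 * h),
    (r1(x1, x2 + h) - r1(x1, x2 - h)) / (2 * h),
])
c = grad_r1.T @ r2v

# Compare with grad F computed the same way: c should equal the gradient of F.
grad_F = np.array([(F(x1 + h, x2) - F(x1 - h, x2)) / (2 * h),
                   (F(x1, x2 + h) - F(x1, x2 - h)) / (2 * h)])
assert np.allclose(c, grad_F, atol=1e-6)
```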
In complete analogy to the frame (r_1, r_2, e_3) on S, we describe the corresponding frame (l_1, l_2, ν) as a field of orthonormal directors on S. Equations (28a), (28b) are generalized to ∇l_1 = l_2 ⊗ c* + ν ⊗ d_1*, ∇l_2 = −l_1 ⊗ c* + ν ⊗ d_2*, and ∇ν = −l_1 ⊗ d_1* − l_2 ⊗ d_2*, where the connectors c*, d_1*, and d_2* are planar fields defined on S. A number of consequences for these fields follow from the integrability condition that requires the second gradients of y, l_1, l_2, and ν to be symmetric; they are listed below.
1. For the symmetry of ∇^2 y (in its last two legs), certain second-order tensors must be symmetric; thus, for the symmetry of ∇^2 y, conditions (32a)-(32c) must hold. In particular, (32a) and (32b) can be combined to yield (33). By recalling (27b), it becomes apparent from (33) that c* is completely determined by c and C.
2. Similarly, for the symmetry of ∇^2 l_1, condition (34a) must hold, which can also be written in the equivalent form (34b).
3. For the symmetry of ∇^2 l_2, (34a) is supplemented by a further condition, or its equivalent form (36).
4. Finally, the symmetry of ∇^2 ν is guaranteed by (34b) and (36).
The connectors d_1* and d_2* can be given a geometric interpretation by computing the curvature tensor ∇_s ν of S. It readily follows from (27a) that ∇ν can be expressed along the left frame. Letting (39) define the components of d_1* and d_2* in this frame, from (38) we arrive at (40), which is duly symmetric, as by (39) equation (32c) reduces to a single scalar condition. Both the mean curvature H and the Gaussian curvature K of S can easily be derived from (40); they are given by (42) and (43). An important conclusion follows by combining (34a) and (43) with the aid of (39), namely (44). Since, as shown by (33), the left-hand side of (44) is determined by C (alongside its first and second spatial derivatives), so is K. In other words, the metric on S determines the Gaussian curvature of S. This is the manifestation in our setting of the celebrated theorema egregium of Gauss. Similarly, equations (34b) and (36) are related to the Codazzi-Mainardi equations (see, for example, [72, p. 144]). In the special case where both principal stretches (but not necessarily the principal directions of stretching) are uniform in space, equation (33) reduces to (45), where c_1 := c · r_1 and c_2 := c · r_2. It follows from (29), (45), and elementary identities that a formula for K in terms of the connector results, where we have set c_12 := r_1 · (∇c)r_2. A comparison with (44) readily helps us to conclude that (48) holds. It is not difficult to check that (48) agrees completely with equation (22) of [52], which was deduced with the more traditional use of coordinates and Christoffel symbols. As appealing as formulae (42) and (43) may be, they are not especially expedient for computing H and K for a given deformation y, as the link between the latter and the connectors is rather intricate. In the Appendix, we shall give other formulae for H and K, valid for area-preserving deformations y; they are more accessible to direct computation and also show the role played by the second gradient ∇^2 y in determining the principal curvatures of S.
Incompressible Elastomer Plates
In this section, we consider an incompressible elastomer plate, for which we assume that the mid surface S is inextensible and the whole body S is incompressible. That is, we assume that det C = 1 and det C_f = 1. (49) Here we consider the former constraint as a remnant of the latter, the one that survives when, in the limit as h → 0, only stretching energy is associated with the membrane S by an appropriate dimension reduction of the elastic energy stored in the three-dimensional body S. 5
Polynomial Approximation
It is our desire to compute averages of the elastic energy stored across the (small) thickness of the plate. To this end, it will suffice to represent φ as a polynomial in x 3 . Using the expressions for the invariants of C f presented in Sect. 2, in the following proposition we shall identify this polynomial.
Proposition 5 If we let
which complies with (3) for α > 0, then the constraints (49) determine the coefficients in (50) as in (51). Proof By Proposition 3, the constraints in (49) reduce to the equation (52), where φ is as in (50). The desired result then follows by identifying in (52) the coefficients of equal powers of x_3 up to x_3^3, and recalling that α > 0.
Remark 2
The asymptotic expansion for φ presented in Proposition 5 is consistent with a C 3 -regularity for φ in x 3 . As this hypothesis requires more regularity than that envisaged in Sect. 2, this is clearly a particular case of the framework described there.
Remark 3
The classical Kirchhoff-Love hypothesis, which requires φ ≡ x_3, can thus be envisaged as the lowest-order approximation to φ in (50). The higher-order approximation represented by (51) entails a local dependence of the thickness 2h* of the deformed plate on the invariant measures of curvature for S. Explicitly, this coupling is given by (53).
Gent's Material
The elastic energy stored in a plate made of Gent's material is given by W_G := −(μ/2) J_m ln(1 − (I_1 − 3)/J_m), (54) where I_1 = tr C_f is the first invariant of C_f, and μ and J_m are positive material constants, which can be identified with a shear modulus and a stiffening parameter, respectively. The role of the latter is illuminated by the request that I_1 − 3 < J_m, (55) to which I_1 must be subjected for W_G to be meaningful. Gent's constitutive law (54) was first proposed in [25]; it represents the simplest mathematical model for rubber elasticity that accounts for the limited extensibility of the polymeric chains constituting these materials. There is a vast literature on microscopic and phenomenological theories for rubber-like materials based on limited molecular extensibility, for which Beatty [5] coined the name of limited or restricted elastic models; we refer the reader to the emphatic review [31], which focuses on Gent's material.
Taking the limit as J_m → ∞ in (54), we give W_G the form W_nH := (μ/2)(I_1 − 3), (56) which is the celebrated neo-Hookean stored energy density, a special case of the Mooney-Rivlin formula W_MR := (μ/2)[χ(I_1 − 3) + (1 − χ)(I_2 − 3)], (57) where 0 < χ ≤ 1 is a dimensionless parameter. 6 Both W_nH and W_MR are incapable of describing the severe stiffening that occurs even at moderate stretches for soft biological membranes [30], whereas (54) is capable.
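Since (54) and (56) are explicit, the neo-Hookean limit J_m → ∞ can be checked numerically; a short sketch with μ = 1 and arbitrary sample values of I_1:

```python
import numpy as np

mu = 1.0

def W_gent(I1, Jm):
    # Gent's energy (54); only defined for I1 - 3 < Jm, cf. (55).
    assert np.all(I1 - 3.0 < Jm)
    return -0.5 * mu * Jm * np.log(1.0 - (I1 - 3.0) / Jm)

def W_neo_hookean(I1):
    return 0.5 * mu * (I1 - 3.0)           # the limit (56)

I1 = np.linspace(3.0, 8.0, 6)
for Jm in (10.0, 100.0, 1e4):
    gap = np.max(np.abs(W_gent(I1, Jm) - W_neo_hookean(I1)))
    print(f"Jm = {Jm:g}: max |W_G - W_nH| = {gap:.4f}")
# The gap shrinks as Jm grows, recovering (56) in the limit Jm -> infinity.
```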
The theory proposed in [20] extended the long-standing tradition of statistical theories for ideal molecules constituted by freely jointed rigid links subject to a non-Gaussian distribution for the end-to-end distance. 7 Building on this work, Beatty [4] motivated a constitutive law for rubber elasticity depending only on I_1 and incorporating the stiffening phenomena associated with the limited extensibility of the constituting chains. It was shown in [32] that (54), which has a genuine phenomenological origin, is a very accurate approximation to Beatty's molecular-based constitutive law; it retraces all qualitative features of the latter and reproduces its quantitative predictions, with the advantage of being mathematically simpler, even amenable to explicit, closed-form solutions. Moreover, μ and J_m are related to the molecular model by (58), where n is the number density (per unit volume) of molecular chains, N is the number of links in each chain, k is the Boltzmann constant, and T the absolute temperature.
Here, we wish to show how stretching and bending energies are blended together in a thin sheet of Gent's rubber-like material complying with (49) and (55). This result will be achieved in Proposition 6 by integrating W G across the thickness of the plate using the polynomial expression for φ found in Proposition 5.
Proposition 6 Let φ be of class C^3 in x_3, so that it can be expressed as in Proposition 5, and let the constraints (49) be enforced. Then, the following expression (59a) is valid for all x ∈ S:
where w_s and w_b are given by (59b) and (59c), respectively. (Footnote 6: Clearly, (57) reduces to (56) for χ = 1.)
Proof By inserting (11) into (54) and making use of (50) and (51), we expand W_G in powers of x_3 up to x_3^2; standard computations lead to an expression in which the dependence on x has been omitted to avoid clutter. Integrating this expression for W_G across the thickness of the plate, we reach our desired conclusion.
Proposition 6 is the main result of this section. The quantities h w_s and h^3 w_b introduced in (59a) are interpreted as the stretching and bending elastic energy densities (per unit area) of Gent's plates. Following [18], we shall call w_s and w_b the stretching and bending contents of Gent's elastic energy, respectively.
Remark 4
There are similarities between the expression for the plate's surface energy density arrived at in Proposition 6 and the elastic energy densities posited in geometric elasticity (see, for example, [3,[17][18][19]). Geometric elasticity of plates (and shells) does blend together stretching and bending energies, which scale with different powers of h; the former, like w s in (59b), is of a pure metric nature, while in the latter, as in (59c), metric and curvature measures are combined together in an invariant way. Interesting remarks on this contamination of stretching and bending measures in w b are offered in [82].
In the vanishing thickness limit, that is, as h → 0, if both w s and w b stay bounded, the stretching energy prevails over the bending energy and provides the leading deformation mechanism; for h sufficiently small, we may consider the bending energy as a perturbation to the stretching energy.
Remark 5
Among all tensors C on V₃ such that det C = 1, the stretching content w_s in (59b) attains its minimum at C = I₂, which is its unique minimizer. Indeed, letting λ₂ = 1/λ₁ in (27b), we can write w_s as a function of λ₁ alone, which attains its unique minimum at λ₁ = 1, where w_s vanishes. Thus, in the absence of obstructive boundary conditions and external forces, the surfaces S that minimize w_s are isometric immersions of S in three-dimensional space. By (48), all such surfaces have K = 0. Further minimizing w_b on these immersions amounts to minimizing an expression quadratic in the mean curvature, which is the form (for K = 0) of the energy density featuring in Helfrich's functional for flexible vesicles [29].
The two-step minimization outlined in Remark 5 is clearly highly hypothetical, for at least two reasons. First, boundary conditions and external forces are always present and are responsible for shaping the equilibrium configurations of plates (especially, elastomer plates, which are more responsive to mechanical stimuli). Second, the representations of stretching and bending contents in (61) and (62) miss the main points of the full-blown representations in (59b) and (59c), that is, that the measures of stretch influence the bending content as well and that both contents are blended together in (59a) in a way that depends on h and may give rise to interesting instability scenarios driven by the plate's thickness. Limiting forms of the stretching and bending contents such as w s and w b remain however indicative and will also be used in the following section to establish contact with a branch of literature in this field, where those limits have been established differently for a number of models.
Remark 6
When stretching and bending energies compete with one another at equilibrium, the way the surface is stretched affects its response to bending. The coupling between the two energies is not conveyed through tr C, the sum of the squared principal stretches, alone; it also involves the relative orientation of the eigenframes of the curvature tensor ∇_sν and the left Cauchy-Green tensor B.
Letting the former be represented as in (15) and the latter as in (27c), with λ₂ = 1/λ₁, we can single out the ϕ-dependent contribution w_ϕ to w_b, where ϕ is the angle that l₁ makes with n₁; this is the only contribution to w_b that depends on ϕ. Even for given λ₁, κ₁, and κ₂, minimizing w_ϕ is not trivial. While w_ϕ is independent of ϕ in the special cases λ₁ = 1 or κ₁ = κ₂, in general it always has two stationary points, at ϕ = 0 and ϕ = π/2, which are somewhat expected, as there B and ∇_sν share the same eigenframe. However, another pair of stationary points may arise, characterized by condition (64), provided that λ₁, κ₁, and κ₂ make the right-hand side of (64) positive. These extra stationary points, when they exist, make w_ϕ vanish, so that it attains its infimum. This shows that at equilibrium the relative orientation of B and ∇_sν may give rise to interesting patterns on S.
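The qualitative picture can be reproduced on a toy coupling with the structure just described (the actual w_ϕ of the paper is elided here; the quadratic form below, with a, b, g standing for combinations of λ₁, κ₁, κ₂, is our illustrative stand-in): w_toy(ϕ) = (a cos²ϕ + b sin²ϕ − g)² is stationary at ϕ = 0 and ϕ = π/2 and, whenever (g − b)/(a − b) ∈ (0, 1), also at a pair of angles where it vanishes.

import numpy as np

def w_phi(phi, a, b, g):
    # Illustrative stand-in for the phi-dependent part of the bending content.
    return (a * np.cos(phi)**2 + b * np.sin(phi)**2 - g)**2

# Parameters chosen (arbitrarily) so that (g - b)/(a - b) lies in (0, 1):
a, b, g = 2.0, 0.5, 1.0
phi = np.linspace(0.0, np.pi / 2, 100001)
w = w_phi(phi, a, b, g)

i = np.argmin(w)
phi_star = np.arccos(np.sqrt((g - b) / (a - b)))  # predicted extra zero
print(f"numerical minimizer: {phi[i]:.4f} rad (w = {w[i]:.2e})")
print(f"predicted stationary zero: {phi_star:.4f} rad")

The intermediate minimizer ϕ* ≈ 0.955 rad found numerically matches the closed-form prediction, illustrating how non-trivial relative orientations of B and ∇_sν can be energetically preferred.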
Remark 7
We have assumed at the start of this section that S is inextensible, and thus C is subject to det C = 1. This constraint can easily be relaxed, while still enforcing det C_f = 1. A few changes occur in our analysis, which otherwise proceeds unaltered; we record these changes here for the interested reader. The polynomial representation formula for φ in Proposition 5 becomes (65), while the expressions for w_s and w_b in (59b) and (59c) are to be replaced by (66) and (67), respectively. It is a simple matter to check that for det C = 1, equations (65), (66), and (67) reproduce the corresponding formulae derived above.
The great advantage offered by the incompressibility constraint det C_f = 1 (and amply exploited in this section) is that it determines φ directly on kinematic grounds, as shown in Proposition 5. For compressible materials, this advantage is lost, and we need a different criterion to determine φ. In the following section, we shall show that such a criterion can be found in minimizing the elastic energy stored in the plate, for a given deformation y of the mid surface S.
Compressible Plates
In this section, we apply the method presented in Sect. 2 to compressible materials. We shall show how the modified Kirchhoff-Love hypothesis put forward in this paper actually entails non-trivial normal strains for a compressible plate. Our analysis, which again is not confined to small strains, will lead to a blending of stretching and bending energies. To ease the comparison between these energies and those already proposed in the literature (mostly for small strains), we shall also consider the small-strain limit for both examples treated in detail below. In one case, we shall derive a Koiter-like potential [37–40] for the Ciarlet-Geymonat material [13]; this potential is also shown to agree with that recently derived in [14] for the same material. In the other case, we recover the bending energy derived in [22] as a rigorous Γ-limit on isometries for a variant of the Saint-Venant-Kirchhoff material.
The Ciarlet-Geymonat Material
Ciarlet and Geymonat [13] introduced a general class of hyperelastic potentials intended to provide an extension to compressible materials of the Mooney-Rivlin stored energy (see, for example, p. 189 of [10]). Here we shall consider a special example of this general class of materials, for which the stored elastic energy is given by (68), where a > 0, b > 0, c > 0, and d are material constants. Letting E_f := ½(C_f − I) denote the Green-Saint-Venant strain tensor (69) (see, for example, [28, p. 70]), we now show that, for strains of sufficiently small norm |E_f|, W_CG can be given the classical form of the stored elastic energy for isotropic materials, with Lamé coefficients, λ and μ, appropriately related to the material constants in (68).
Proposition 7
In the limit of small strains, the energy W_CG in (68) can be given the form

W = (λ/2)(tr E_f)² + μ tr(E_f²), (70)

provided we set λ and μ as in (71).

Proof It suffices to make use in (68) of the small-strain expansions of the invariants of C_f in terms of E_f.

Here, we continue to represent the function φ as in (50). However, no kinematic constraint will determine the functions α(x) and β(x); we need an alternative criterion, which we identify in minimizing separately the two lowest orders in h of the elastic energy integrated across the plate's thickness, for a given deformation y of the mid surface S. Hereafter, to improve clarity, the dependence on the in-plane variable x will be omitted.
Proposition 8 Let φ be given as in (50), with α > 0 to ensure local orientability of f in (1). For W_CG as in (68), the minimum energy density (per unit area) that can be ascribed to S is represented as in (73), with w₁ and w₃ given by (74a) and (74b), where b₁ = tr(B∇_sν). Correspondingly, α and β are determined as in (74c) and (74d).

Proof By (11) and (18), we can write I₁ and I₃ as in (75) and (76). Making use of both these equations in (68), we readily arrive at (77), which does not depend on either β or γ and, for given tr C and det C, is minimized for positive α at the value in (74c). Choosing α as in (74c), we similarly compute (78), which is independent of γ and is minimized for β as in (74d). Inserting (74c) and (74d) in (77) and (78), respectively, we conclude the proof.
Remark 8
Although γ features in both I₁ and I₃ as expressed in (75) and (76), it affects neither w₁ in (77) nor w₃, as long as α is chosen so as to minimize w₁. Our minimization criterion leaves γ undetermined. To determine it, we should expand the energy density w_CG further, so as to include terms of order h⁵, which we refrain from doing here. Both w₁ and w₃ would, however, remain unaffected by the value of γ.
Remark 9 Equation (74c) shows clearly how, in the compressible case, our method differs even more markedly from the classical Kirchhoff-Love hypothesis, as α = 1 only for det C = 1. Moreover, as for incompressible materials, the bending content w₃ also depends on the relative orientation of the eigenframes of B and ∇_sν via b₁.
Remark 10
It is perhaps interesting to express both w₁ and w₃ in (74a) and (74b) in terms of the Lamé coefficients, λ and μ, associated with W_CG in the linearized limit (70). By use of (71), we obtain the expressions (79a) and (79b).

Remark 11 In the vanishing thickness limit introduced in Sect. 4, we easily find that w₁ is minimized by C = I₂, so that, correspondingly, again by Gauss' theorema egregium (which requires K = 0), w₃ takes its limiting form on isometric deformations. It is thus useful to consider the form acquired by w_CG in (73) when C is close to I₂ and, correspondingly, ∇_sν is close to 0.

Proposition 9 Let E := ½(C − I₂). The asymptotic representations (81a) and (81b) are valid for w₁ and w₃ in (79a) and (79b). Correspondingly, α and β in (74c) and (74d) become as in (81c) and (81d).

Proof To prove (81a) and (81b), it suffices to make use of the simple estimates (82a)–(82e). Similarly, (81c) and (81d) follow from inserting (71) in (74c) and (74d) and then using again (82a)–(82e).
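Estimates of the kind invoked in this proof (second-order expansions of the invariants of C = I + 2E for small E) can be verified symbolically. The sketch below is ours and is independent of the paper's labels (82a)–(82e); it checks the expansion of det(I + 2E) for a generic symmetric E:

import sympy as sp

eps = sp.symbols('epsilon')
e = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'e{min(i, j)}{max(i, j)}'))  # symmetric E
C = sp.eye(3) + 2 * eps * e

trE = e.trace()
trE2 = (e * e).trace()

# Claim: det(I + 2*eps*E) = 1 + 2*eps*tr E + 2*eps^2*((tr E)^2 - tr E^2) + O(eps^3)
det_truncated = sp.series(C.det(), eps, 0, 3).removeO()
target = 1 + 2 * eps * trE + 2 * eps**2 * (trE**2 - trE2)
print(sp.simplify(sp.expand(det_truncated - target)))  # prints 0

The same pattern can be used to verify the companion expansions of the other invariants that enter the asymptotic representations.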
Remark 12
Both expressions for φ and w_CG provided by Proposition 9 are precisely the same as those obtained in Theorem 5.2 of [14].⁸ This ensures well-posedness of the minimum energy problem in the limit of small strains. In particular, w_CG with w₁ and w₃ as in (81a) and (81b) is the form appropriate to a plate of the elastic energy density envisaged in Koiter's theory for shells [39, 40], in which stretching and bending energies are blended together, but are kept in a quadratic form. Equations (79a) and (79b) above provide instead the stretching and bending contents for a fully nonlinear theory of plates made of the Ciarlet-Geymonat material.
A Variant of the Saint-Venant-Kirchhoff Material
Here, to provide a further application of the method proposed in this paper, we consider a variant of the classical Saint-Venant-Kirchhoff material studied in [22].⁹ The stored energy

⁸ It is perhaps worth recalling that (81b) is just the same as the classical formula for the strain energy stored in a moderately bent plate comprised of a linearly isotropic elastic material, see [47, p. 133], where the Lamé coefficients, λ and μ, are replaced by Young's modulus E and Poisson's ratio σ (see, for example, [47, p. 126]). Similarly, apart from a numerical prefactor due to a difference in scaling the plate's thickness, (81b) is also the same as equation (6.4) of [22], which expresses the Γ-limit on isometries of the elastic free energy of an isotropic nonlinear material; see also footnote 10 below.

⁹ As shown, for example, in [10, p. 155], the classical Saint-Venant-Kirchhoff material is characterized by the stored energy function W(E_f) = (λ/2)(tr E_f)² + μ tr(E_f²).
density (per unit volume) of this material is given by (83), where λ and μ are material constants, which can be identified with the Lamé coefficients of this material, as shown by the following small-strain approximation to W_SVK.
Proposition 10 Letting E_f be defined as in (69), we can give W_SVK the same approximate small-strain form (70), valid for all isotropic materials.

Proof The desired conclusion follows easily by expanding W_SVK to second order in E_f.

A rigorous method was devised in [22] to determine the bending content w₃ of a plate on all isometric embeddings y of S in E. There, w₃ is obtained as a Γ-limit on the class of deformations that minimize the stretching energy. It was also proved in [22] that for all isotropic materials w₃ reads as the leading term in (81b) and the normal deformation φ has a quadratic representation with coefficients¹⁰ in accord with the leading terms in (81c) and (81d). For isometric embeddings y, we can easily relax the polynomial approximation for φ. Although this refinement makes our kinematic description more accurate, the bending content w₃ is not affected, as shown below for the material with stored energy density W_SVK.
Proposition 11
Let y be such that C = I 2 . Let φ in (1) be a function of class C 2 in x 3 that obeys (3). The minimum surface energy is 3 16 3 (where K = 0, since y is an isometry), which is attained for Proof W SVK can readily be rewritten as which has the same small-strain limit as (83) (see Proposition 10).
Conclusion
We have revised the classical Kirchhoff-Love hypothesis, making it more apt to derive the blending of stretching and bending energies of a plate from the free-energy functional of three-dimensional nonlinear elasticity. In summary, we have achieved two main results: (i) we have shown that measures of stretching enter the bending energy (in addition to the expected measures of bending); (ii) we have reconciled the Kirchhoff-Love hypothesis with standard Γ-convergence results for nonlinear plates on the ground where these can be compared with ours.
We have been concerned with developing a general method to obtain two-dimensional energies from three-dimensional ones, and we tested it in a number of cases, thus reviving a good practice which Truesdell [75] lamented to be forgotten: "In mathematical practice today it is, unfortunately, often forgotten that to derive basic equations is just as much a mathematician's duty as to study their properties." Of course, there is much room for improvement and further extension of the proposed method.
First, the function φ introduced in (1) was almost invariably taken to be polynomial in x₃. One wonders whether φ could be chosen in a more general class of functions without jeopardizing our conclusions. The only exploration we did along these lines was in Proposition 11, but for isometric embeddings of S; this did not affect the bending content w₃, but had an effect on α, which changed at order O(h²), see (95). The question is then whether we can expect that, as a rule, the bending content is not affected by letting φ vary in a wider class of functions.
Second, and more importantly, the representation of the deformation f in (1) is not the most general possible. It would be interesting to replace (1) by

f(x, x₃) = y(x) + φ(x, x₃) d(x), (96)

where the unit vector d is a director field on S, which contributes to the deformation of the whole plate S on the same footing as y, representing strains across the plate's thickness.
Were we able to retrace our entire method starting from (96) instead of (1), the surface energy density w resulting from a parent volume density W would be a function of d and ∇d, as well as of y and ∇y.
Letting d · ν > 0 throughout the deformed surface S, we find ourselves in the midst of the Cosserat director-theory for plates (and shells). This theory, which goes back to the pioneering works of the Cosserat brothers [15, 16], is admirably rephrased in modern terms in the book [1] (see, in particular, Chap. XIV). A full analysis of strain and equilibrium equations was first neatly developed in [21]. In connection with this theory, the classical Kirchhoff-Love hypothesis was also used in [56], always assuming d ≡ ν. A more general thermodynamic treatment of one-director surfaces was presented in [55]. As we also learn in Sect. 1.9 of [11], this theory is intimately related to the Reissner-Mindlin theory of plates [48, 68, 69], which indeed allows for the normals to the mid surface in the undeformed configuration not to remain normal to the deformed mid surface (as also illustrated in Sect. 5.2 of [33]).
All this body of knowledge suggests taking (96) as a general representation of the deformation field within a plate and using it to perform a dimension reduction of the three-dimensional stored energy, so as to derive a genuine two-dimensional energy functional; a similar pursuit was undertaken in [14] (see, in particular, Sect. 6.2).¹¹ It remains to face the difficulties entailed by assuming (96) in our entire development, a task which, if not easy, might be desirable to undertake.
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Appendix: Cartesian formulae for H and K
Consider a deformation y of the planar surface S such that det C = 1, which ensures that y preserves the area of any portion of S. Letting (e 1 , e 2 ) be an orthonormal frame in the plane that contains S, we represent the deformation gradient ∇y as ∇y = a 1 ⊗ e 1 + a 2 ⊗ e 2 , where a 1 := (∇y)e 1 and a 2 := (∇y)e 2 . | 2022-12-10T14:53:53.881Z | 2021-02-11T00:00:00.000 | {
"year": 2021,
"sha1": "c60ab05979e71423a08b3adb9ebbc2b0df44fe89",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10659-021-09819-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "c60ab05979e71423a08b3adb9ebbc2b0df44fe89",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
247091999 | pes2o/s2orc | v3-fos-license | Genetic variants of interferon lambda-related genes and chronic kidney disease susceptibility in the Korean population
Background Chronic kidney disease (CKD) is a common condition leading to renal dysfunction and is closely related to increased cardiovascular and mortality risk. CKD is an important public health issue, and recent genetic studies have verified common CKD susceptibility variants. This research examines the interrelationship between polymorphisms of candidate genes in the interferon lambda (IFNL) induction and signaling pathway and CKD. Methods Seventy-five patients with advanced CKD and 312 healthy subjects (as controls) participated in this research. A replication set composed of 172 patients with advanced CKD and 365 controls was used for additional analysis. The genotypes of single nucleotide polymorphisms (SNPs) were determined by the Axiom Genome-Wide Human Assay and SNaPshot assay. Results The SNP of IFNL3 was significantly associated with CKD in the codominant (p = 0.02) and dominant models (p = 0.02). In addition, the SNPs of IFNL2 were significantly associated with CKD in the dominant model (p = 0.03), and the SNP of interferon alpha receptor 2 (IFNAR2) was significantly associated with CKD in the log-additive model (p = 0.03). Concerning rs148543092 in the IFNL3 gene, a significant association was observed after pooling the original and replication sets. Conclusion These results indicate that SNPs in the IFNL induction and signaling pathway may be associated with CKD risk in the Korean population. Finally, our results also show that the IFNL3 gene variant may be associated with CKD risk.
Introduction
Chronic kidney disease (CKD) is a worldwide health problem. The overall prevalence of CKD globally is estimated to be 11% to 13% [1,2], and CKD is a major risk factor for cardiovascular diseases and all-cause mortality [3]. Additionally, CKD has become a socioeconomic and medical issue for global healthcare [4]. Therefore, it is paramount to identify individuals that are at risk for the development and progression of CKD.
A significant association between large numbers of genes, their polymorphisms, and kidney function was observed in genetic studies. Therefore, it can be concluded that a strong genetic component exists in CKD [5,6]. The pathogenesis of CKD is complex and dependent on a broad spectrum of diverse etiologies. A major pathophysiology of CKD is persistent, chronic inflammation [7]. In the active phase of inflammation, immune cells migrate to the injury site, resolve the damage, and initiate the healing process. However, persistent inflammation is problematic, as it can lead to tissue damage and fibrosis. In addition, chronic inflammation is associated with various diseases including CKD [8].
Interferon (IFN), a marker of inflammation, may play a role in CKD development. However, the role of IFN in CKD is not well understood. Type I IFNs are central mediators of antiviral immunity and kidney inflammation [9]. Although type III IFN, known as IFN lambda (IFNL), has several similarities in function with type I IFNs, little is known about the role of IFNL in CKD.
Study subjects
This study enrolled 90 patients with CKD whose samples were distributed by the Keimyung Human Bio-Resource Bank in 2012. In addition, 312 control subjects who participated in health checkup programs at the health promotion center from July to October 2008 were included in this study. The control group was defined as those with no clinical evidence of kidney impairment, cancer, hypertension, diabetes mellitus, dyslipidemia, or cardiovascular diseases. Among the 90 patients with CKD, 75 (83.3%) had an estimated glomerular filtration rate (eGFR) of less than 15 mL/min/1.73 m². Since these patients could not represent the entire CKD group, we excluded patients with eGFR values above 15 mL/min/1.73 m². A replication set consisting of 172 patients with advanced CKD and 365 controls was used for additional analysis.
Samples from 172 patients with advanced CKD were consecutively distributed by the Keimyung Human Bio-Resource Bank in 2018, and the controls were collected at the health promotion center of the Keimyung University Dongsan Medical Center (Daegu, Korea). Written informed consent was obtained from all the subjects. The approved protocol from the Institutional Review Board of the Keimyung University Dongsan Medical Center was used for this study (No. 2018-02-029).
Clinical characteristics and biomedical measurement
Participants' clinical characteristics, such as systolic blood pressure (SBP) and diastolic blood pressure (DBP), were measured. The body mass index (BMI) was calculated as weight divided by the square of height (kg/m²).
Biochemical markers were measured using samples collected in the fasted state. The levels of fasting blood sugar (FBS), triglyceride, total cholesterol (TC), low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, aspartate aminotransferase, alanine aminotransferase, albumin, blood urea nitrogen (BUN), creatinine, and uric acid were measured using an auto-analyzer (ADVIA2400 Chemistry System; Siemens Healthcare Diagnostics Inc., Tarrytown, NY, USA). eGFR was calculated using the simplified prediction equation (Modification of Diet in Renal Disease study equation): eGFR = 175 × (standardized Scr)^-1.154 × age^-0.203 × 0.742 [if female], where GFR is expressed in mL/min/1.73 m² of body surface area and serum creatinine (Scr) is expressed in mg/dL [11]. CKD was defined as an eGFR of <60 mL/min/1.73 m² for 3 months or more.
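For illustration, the reported prediction equation can be implemented directly (a minimal sketch; the function and variable names are ours):

def egfr_mdrd(scr_mg_dl, age_years, female):
    # Simplified estimate (mL/min/1.73 m^2), as given in the text:
    # eGFR = 175 x Scr^-1.154 x age^-0.203 x 0.742 [if female]
    egfr = 175.0 * scr_mg_dl**-1.154 * age_years**-0.203
    return egfr * 0.742 if female else egfr

# Example: serum creatinine 1.2 mg/dL in a 50-year-old male.
gfr = egfr_mdrd(1.2, 50, female=False)
print(f"eGFR = {gfr:.1f} mL/min/1.73 m^2 (CKD if < 60 for 3 months or more)")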
Single nucleotide polymorphism selection and genotyping of interferon lambda-related genes
Seventeen single nucleotide polymorphisms (IFNL3 gene, 2 SNPs; IFNL2 gene, 2 SNPs; IFNAR2 gene, 2 SNPs; TLR9 gene, 2 SNPs; IL-22 gene, 2 SNPs; IL10RB gene, 2 SNPs; IFNAR1 gene, 1 SNP; IRF7 gene, 1 SNP; JAK2 gene, 1 SNP; and STAT3 gene, 1 SNP) of the IFNL-related genes were selected based on database searches (http://ncbi.nlm.nih.gov/SNP). SNPs with <0.05 minor allele frequency, <0.1 heterozygosity, and unknown genotype frequencies in Asian populations were excluded. Human genomic DNA was extracted from peripheral blood samples using the Qiagen DNA Extraction Kit (Qiagen, Tokyo, Japan) and then stored at −20°C. The SNPs of the IFNL3, IFNL2, IFNAR2, TLR9, IL-22, IL-10RB, IFNAR1, IRF7, JAK2, and STAT3 genes were genotyped by direct sequencing. The following primers for the 17 SNPs were used to amplify the genomic DNA (Table 1). Polymerase chain reaction (PCR) conditions included 32 cycles at 92°C for 30 seconds, 60°C for 50 seconds, and 70°C for 40 seconds. PCR products were identified on 1.5% agarose gel by electrophoresis. Furthermore, the PCR products were sequenced by a DNA analyzer (ABI Prism 3730XL; Applied Biosystems, Foster City, CA, USA) to analyze the genotypes of each SNP. Finally, the genotypes were determined using SeqManII software (DNASTAR Inc., Madison, WI, USA).
Genotyping of the replication SNPs was performed using the single-base primer extension assay with the ABI PRISM SNaPshot Multiplex kit (ABI, Foster City, CA, USA) according to the manufacturer's protocol. Analysis was conducted using GeneMapper software (version 4.0; Applied Biosystems).
Statistical analysis
IBM SPSS version 24 (IBM Corp., Armonk, NY, USA) and R version 3.2.2 (R Foundation for Statistical Computing, Vienna, Austria) were used for statistical analysis. The results were considered statistically significant when p < 0.05. Student t test was used for comparisons between the two groups among continuous variables. Additionally, categorical variables were compared with the chi-square test.

Table 1. Polymerase chain reaction primers of the SNPs in the interferon lambda-related genes
Demographic and clinical characteristics of the participants
The demographic characteristics and clinical parameters of the study subjects are summarized in Table 2. The original set of 312 control subjects included 157 males and 155 females with a mean age of 46.7 ± 10.3 years. The CKD group was composed of 75 adults and involved 36 males and 39 females with a mean age of 50.2 ± 12.1 years. In the replication set, the 365 control subjects included 177 males and 188 females with a mean age of 50.6 ± 13.8 years, and the CKD group was composed of 172 adults and included 92 males and 80 females with a mean age of 59.2 ± 15.1 years. In the original and replication sets, the sex distribution of the subjects was not significantly different between the two groups. Additionally, in the original and replication sets, BMI, SBP, DBP, BUN, creatinine, uric acid, FBS, and triglyceride levels in the CKD group were significantly higher compared to the control group. Conversely, eGFR, total protein, albumin, TC, HDL cholesterol, and LDL cholesterol levels in the CKD group were significantly lower compared to the control group. In the control and CKD groups, the genotype distributions of the 17 polymorphic SNPs were in Hardy-Weinberg equilibrium (HWE). The IFNL3, IFNL2, IFNAR2, TLR9, IL-10RB, IL-22, IFNAR1, IRF7, JAK2, and STAT3 polymorphisms were then tested for association with CKD.
Replication of the IFNL3, IFNL2, and IFNAR2 genes' single nucleotide polymorphisms
Comparing genotypic frequencies between cases and controls for all SNPs analyzed, a nominally significant value was achieved for three polymorphisms located in three genetic regions. We attempted to replicate the associations involving IFNL3, IFNL2, and IFNAR2 using a second sample set (Table 4). No significant associations involving IFNL2 and IFNAR2 were observed in the replication set. Regarding rs148543092 in the IFNL3 gene, a significant association was observed after pooling the original and replication sets (p = 0.02, OR = 2.50, 95% CI = 1.14-5.47; p < 0.001, OR = 0.92, 95% CI = 0.89-0.95) (Table 4).
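For reference, odds ratios and Wald-type 95% confidence intervals of the kind reported above can be computed from a 2×2 table of genotype (or allele) counts. The sketch below is ours, and the counts are placeholders chosen only to reproduce the order of magnitude of the pooled OR = 2.50; they are not the study's data:

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a = exposed cases, b = unexposed cases,
    #            c = exposed controls, d = unexposed controls.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print("OR = %.2f (95%% CI, %.2f-%.2f)" % odds_ratio_ci(15, 60, 20, 200))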
Association of IFNL3 single nucleotide polymorphism with clinical characteristics
After adjusting for age, sex, BMI, hypertension, diabetes mellitus, and dyslipidemia as covariates, we examined whether the genotype distribution of the IFNL3 gene polymorphism, rs148543092, is associated with clinical characteristics (creatinine, eGFR, uric acid, total protein, and albumin) in the original and replication sets of the CKD group. In both the original and replication sets, creatinine, eGFR, uric acid, total protein, and albumin levels exhibited no significant difference across genotype distributions (Table 5).
Discussion
This study examines the association between the polymorphisms of IFNL3 (rs148543092 T > C), IFNL2 (rs8103362 A > G), IFNAR2 (rs1051393 G > T), TLR9 (rs187084 T > C), IL-22 (rs2227513 T > C), and CKD development in patients with advanced CKD. The SNPs of IFNAR2 (rs1051393), IFNL2 (rs8103362), and IFNL3 (rs148543092) were significantly associated with CKD. Among them, the frequency of rs148543092 in IFNL3 was significantly higher in the CKD group than the control group in both the original and replication sets. Persistent, low-grade inflammation is considered an essential component of CKD, playing an important role in its pathophysiology [12]. Patients with CKD exhibit elevated cytokine levels and dysregulated cytokine metabolism, leading to increased circulating acute-phase proteins [13]. In addition, IFN, an inflammatory cytokine, may play a regulatory role in the development and progression of CKD. However, the role of IFN in CKD is not well understood, especially that of IFNL.
Additionally, type III IFN (IFNL) belongs to a cytokine family that has several functional similarities with type I IFNs (IFN-α/β). In humans, four IFNL proteins (IFNL1, IFNL2, IFNL3, and IFNL4) and 17 IFN-α/β proteins (13 IFN-α subtypes, IFN-β, IFN-ω, IFN-ε, and IFN-κ) are encoded [14]. Located on human chromosome 19, the genes encoding IFNL share a similar five-exon gene structure with the IL-10 cytokine family [15]. IFNL has several notable biological features, beginning with where it is effective: the efficacy of IFNL is most pronounced in epithelial cells, where it specifically strengthens the immune defenses that protect epithelial surfaces exposed to commensal and pathogenic microorganisms [16]. IFNL is involved in inflammation, one of the main pathophysiologies of CKD, and is thus expected to affect CKD development. To the best of our knowledge, this is the first study to identify an association between IFNL and CKD.
IFNL has emerged as a new immune-regulatory cytokine with a particular function in controlling damage to maintain immune balance and limit immunopathology. In addition, IFNL limits inflammation to prevent damage to the host in chronic illnesses including asthma, autoimmune diseases, and colitis [17]. The genetic association of IFNL gene polymorphisms in humans extends to various illnesses such as allergies, nonalcoholic fatty liver disease, and several viral diseases caused by human immunodeficiency virus and hepatitis C virus infections [18]. Differences in expression levels by IFNL3 genotype have been shown in numerous studies; recent research has verified this result under ex vivo and in vivo conditions. These results demonstrate that differences in IFNL3 expression levels by the alleles at three functional SNPs (rs28416813, rs4803217, and rs59702201) may play a role in disease [19][20][21]. Furthermore, a recent study revealed that genetic variants of IFNL3/4 play an essential role in the development of lupus nephritis and systemic lupus erythematosus in the Taiwanese population [22]. However, little is known about the association between IFNL and CKD. In the present study, we demonstrate that the SNP of IFNL3 (rs148543092) is significantly associated with CKD development in patients with advanced CKD. Furthermore, these results are consistent with the entire CKD cohort (Supplementary Table 2, available online).
Additionally, several researchers have reported on SNPs of IFNAR2 in hepatitis B virus (HBV) infection. Specifically, IFNAR2 polymorphisms may be involved in chronic HBV infection susceptibility in the Thai population [23]. They may also serve as predictive markers of IFN response in HBV infection in the Chinese Han population [24]. Ma et al. [25] reported that the IFNAR2 polymorphism (rs1051393 G > T) is a missense variant changing phenylalanine to valine. This SNP may be important in the risk of HBV infection by influencing the expression of IFNAR2 protein on the cell surface, resulting in impaired signal transduction and antiviral response. Our results also suggest that the IFNAR2 polymorphism (rs1051393 G > T) is associated with CKD. This research found that the T allele of IFNAR2 (rs1051393 G > T) was more frequent in the CKD group compared with the control group. The association of this SNP may reflect a codominant effect, as shown by the inheritance analysis model (major allele homozygotes vs. minor allele homozygotes). Therefore, this study indicates that the mechanism underlying the association between the IFNAR2 SNP (rs1051393 G > T) and CKD may involve the control of IFNAR2 expression, which affects the type I IFN response.
CKD and end-stage kidney disease are characterized by increased proinflammatory cytokine levels and inflammatory markers. Cytokines may modulate the risk of developing kidney disease [13] and induce resident cells to proliferate; they also influence metalloproteinases, bioactive lipids, the expression of adhesion receptors, reactive oxygen/nitrogen species, the procoagulant activity of the endothelium, and aberrant matrix metabolism. In addition, these molecules may mediate the actions of the renin-angiotensin system and hemodynamic factors [26][27][28][29][30][31][32][33]. IL-10, an anti-inflammatory cytokine with numerous functions, is primarily secreted by monocytes and lymphocytes. IL-22, an IL-10-related cytokine, upregulates acute-phase reactants. It also induces JAK/STAT activation in several cell lines, including hepatomas, intestinal epithelial cells, and mesangial cells [34]. Meta-analysis outcomes have shown that the IL-22 gene rs1179251 polymorphism (but not the rs2227485 polymorphism) may be a cancer risk factor [35]. The rs2227485 SNP of IL-22 may be connected with the risk and multifocality of primary thyroid cancers, according to Eun et al. [36]. However, this research did not show an association between the IL-22 gene polymorphisms examined (rs2227513 T > C; rs2227485 G > A) and CKD development.
Furthermore, the second sample set was used to analyze replicated associations involving IFNL3, IFNL2, IFNAR2, TLR9, and IL22. No significant associations involving IFNL2, IFNAR2, TLR9, and IL22 were observed in the replication set, whereas for rs148543092 in the IFNL3 gene, a significant association was observed after pooling the original and replication sets. These results suggest that IFNL3 polymorphisms are associated with CKD. However, there were no significant differences between the clinical characteristics and genotypes of IFNL3.
There are several limitations to this study. First, this was a single-center study, and the sample size was relatively small. However, we performed, for the first time, a genetic analysis of the association between CKD and genes of the IFNL induction and signaling pathway, such as IFNL3, IFNL2, IFNAR2, TLR9, IL-22, and IL-10RB. Second, we analyzed patients with advanced CKD rather than the entire spectrum of CKD patients due to the characteristics of our study cohort. However, even when all CKD patients were analyzed, the same SNP of IFNL3 was associated with CKD. Third, only homozygous genotypes were observed in CKD patients in the replication set, whereas heterozygous genotypes were observed in the original set, which indicated that CKD patients carried IFNL3 polymorphisms.
In conclusion, the outcomes of this study indicate a possible association between polymorphisms of IFNL induction and signal pathway genes and CKD in the Korean population. Furthermore, our results indicate that the IFNL3 gene variant may be associated with CKD risk. Therefore, early interventions in patients with high-risk genotypes may delay CKD progression. However, further large-scale prospective studies are necessary to establish the role of IFNL in CKD. | 2022-02-26T00:06:19.205Z | 2022-02-23T00:00:00.000 | {
"year": 2022,
"sha1": "62246f36c912a94eb355b4c3ef885998dfbb5454",
"oa_license": "CCBYNCND",
"oa_url": "https://www.krcp-ksn.org/upload/pdf/j-krcp-21-075.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0776286748617cebc7180b8596e59ce48be46468",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237633320 | pes2o/s2orc | v3-fos-license | Soil greenhouse gas fluxes from tropical coastal wetlands and alternative agricultural land uses
. Coastal wetlands are essential for regulating the global carbon budget through soil carbon sequestration and greenhouse gas fluxes (GHG: CO 2 , CH 4 and N 2 O). The conversion of coastal wetlands to agricultural land alters the magnitude and direction (uptake/release) of these fluxes. However, the extent and drivers of change of GHG fluxes is still unknown for many tropical regions. We measured soil GHG fluxes from three natural coastal wetlands: mangroves, saltmarsh, and 15 freshwater tidal forests, and two alternative agricultural land use, sugarcane farming and pastures for cattle grazing (ponded and dry conditions). We assessed variations throughout different climatic conditions (dry-cool, dry-hot and wet-hot) within two years of measurements (2018-2020) in tropical Australia. The wet pasture had by far the highest CH 4 emissions with 1,231 ± 386 mg m -2 d -1 , which were 200-fold higher than any other site. Dry pastures and sugarcane were the highest emitters of N 2 O with 55 ± 9 mg m -2 d -1 (wet-hot period) and 11 ± 3 mg m -2 d -1 (hot-dry period, coinciding with fertilisation), respectively. 20 Dry pastures were also the highest emitters of CO 2 with 20 ± 1 g m -2 d -1 (wet-hot period). Comparatively, the three coastal wetlands measured had lower emission, with saltmarsh up taking -0.55 ± 0.23 of N 2 O and -1.19 ± 0.08 g m -2 d -1 of CO 2 during the dry-hot period. During the sampled period, sugarcane and pastures had higher total cumulative soil GHG emissions (CH 4 + N 2 O) of 7,142 and 56,124 CO 2eq kg ha -1 y -1 compared to coastal wetlands with 144 to 884 CO 2-eq kg ha -1 y -1 . Converting unproductive sugarcane land or pastures (especially ponded ones) to coastal wetlands could provide significant GHG 25 mitigation.
will inform emission factors for converting wetlands to agricultural land uses and vice versa, filling in a knowledge gap identified in Australia (Baldock et al., 2012) and tropical regions worldwide (IPCC, 2013). 60
Study sites
The study area is located within the Herbert River catchment in Queensland, northeast Australia (Fig 1a). The region has a tropical climate with a mean monthly minimum temperature from 14 to 23˚C and mean monthly maximum temperature from 25 to 33˚C (Australian Bureau of Meteorology, ABM, 2020;1968 Table S3). The average rainfall is 2,158 mm y -1, with 65 the highest values of 476 mm during February (ABM 2020;1968 Table S3). The Herbert basin covers 9,842 km 2 , from which 56% is grazing, 31% is conserved wetlands and forestry, 8% is sugarcane, and 4% is other land uses (Department of Science and Environment, QLD, DES, WetlandInfo, 2020). Wetlands in this region were heavily deforested in the past century due to rapid agricultural development, primarily for sugarcane farming (Griggs. 2018). Before clearing, the land was mostly covered by rainforest and coastal wetlands, mainly Melaleuca forest, grass and sedge swamps (Johnson, Ebert, & 70 Murray, 1999).
We selected five sites, including three natural coastal wetlands (Fig. 1): a mangrove forest (18º 53' 42ʺ S, 146º 15' 51ʺ E), a saltmarsh (18º 53' 43ʺ S, 146º 15' 52ʺ E) and a freshwater tidal forest (18º 53' 45ʺ S, 146º 15' 52ʺ E), and two common agricultural land use types of the region, a sugarcane farm (18º 53' 44.6ʺ S, 146º 15' 53.2ʺ E) and a pasture for fodder 75 grazing. The pasture had different levels of inundation; some areas were covered with shallow ponds (50-100 cm depth), wet grassy areas (hereafter "wet pasture"; '18º 43' 8ʺ S, 146º 15' 50ʺ E) and drier areas (hereafter, "dry pasture"; 18º 43' 7ʺ S, 146º 15' 50ʺ E). The natural coastal wetlands and the sugarcane site were located within the same property at Insulator Creek, while the ponded pasture was 20 km north at Mungalla Station. The mangroves were dominated by Avicennia marina with few plants of Rhizophora stylosa, and the saltmarsh was dominated by Sueda salsa and Sporobolus spp. Landwards, the 80 freshwater tidal freshwater forest, a wetland commonly known as "tea tree swamp", was dominated by Melaleuca quinquenervia trees. While the mangroves and saltmarsh are directly submerged by tides (5-30 cm), the tidal freshwater forest is indirectly affected by tidal fluctuations, such as during large spring tides, when tidal water can push groundwater above the forest. The coastal wetlands were adjacent to a sugarcane farm with an area of ~110 ha (Fig. 1b). The sugarcane is fertilised once a year with urea at a rate of 135 kg N ha -1 and harvested during May-June, while the foliage is left on the soil surface 85 (trash blanket) after harvest. The ponded pastures in Mungalla Station extend over 2,500 ha and support ~900 cattle throughout the year by providing fodder to cattle during dry periods. The selected ponded pastures were covered by Eichhornia crassipes (water hyacinth) and Hymenachnae amplexicaulis (Fig. 1g-h). Each of the five sites was sampled during three periods drycool (May-September), dry-hot (October-December) and wet-hot (January-April; Table 1). During each time, soil physicochemical properties and GHG fluxes were measured as detailed below. 90 Soil texture analysis (% sand, silt, clay) was done with a simplified method for particle size determination (Kettler et al, 2001). Soil electrical conductivity (EC) and pH were measured using a conductivity meter (WP-84 TPS, Australia) in soil/water slurry at 1:5. Soil subsamples were air-dried, sieved (2mm), ground (Retch™ mill) and analysed for %N and %C using an elemental analyser connected to a gas-isotope ratio mass spectrometer (EA-Delta V Advantage IRMS, Griffith 115 University). Additionally, soil samples from the top 10 cm were collected during each sampling to measure gravimetric soil moisture content and bulk density.
Greenhouse gas fluxes
We measured GHG fluxes (CO2, CH4 and N2O) at each site for three consecutive days during each sampling period except for 120 the dry-cool period of 2018, when mangroves, saltmarsh and sugarcane were surveyed for one day. The sampling was done between 9:00 to 11:00 am, representing the mean daily temperatures, thus, minimising variability of cumulative seasonal fluxes based on intermittent manual flux measurements . Additionally, we assessed the variability of our measurements with tidal inundation in mangroves and saltmarsh, which were regularly inundated (~10-30 cm). For this, we measured GHG emissions during a low (0.7m) and a high tide (2.8m; Lucinda, 18° 31' S; 146° 23 'E) in the dry-cool period 125 of 2019. We found that CH4 fluxes did not significantly vary between the low and high tide within all coastal wetlands.
We used static, manual gas chambers made of high-density, round polyvinyl chloride pipe, which consisted of two units: a base (r =12 cm, h =18 cm) and a detachable collar (h =12 cm;Hutchinson and Mosier, 1981;Kavehei et al, 202). The chambers had lateral holes that could be left covered with rubber bungs at low water levels and left open at high water levels 135 to allow for water movement between sampling events. When the wetlands were inundated for the experiments, we used PVC extensions (h = 18 cm). Five chambers were set ~ 5cm deep in the soil at random locations one day before sampling to minimise the disturbance of installation during the experiment (Rashti et al, 2015). The chambers were selectively located on soil with minimal vegetation, roots, and crab burrows. We were careful not to tramp around the chambers during installation and sampling. The fact that emissions were not significantly different among days (p >0.05) provided us with confidence that 140 disturbance due to installation was not problematic.
At the start of the experiment, gas chambers were closed. A sample was taken at time zero and then after one hour with a 20 ml syringe and transferred to a 12 mL-vacuumed exetainer (Exetainer, Labco Ltd., High Wycombe, UK). During the dry-hot season, linearity tests of GHG fluxes with time were conducted by sampling at 0, 20, 40 and 60 min (Rashti et al, 145 2016). For the rest of the experiments, linearity tests were performed in one of the five chambers at each site; R 2 values were consistently above 0.70. During each experiment, soil temperature was measured next to each chamber. At the end of the experiment, the depth of the base was recorded from five points within each chamber to calculate the headspace volume. The obtained volumetric unit concentrations were converted to mass-based units using the Ideal Gas Law (Hutchinson and Mosier, 1981). 150 The GHG concentrations of all samples were analysed within two weeks of sampling with a gas chromatograph (Shimadzu GC-2010 Plus). For N2O analysis, an electron capture detector was used with helium as the carrier gas, while CH4 was analysed on a flame ionisation detector with nitrogen as the carrier gas. For CO2 determination, the gas chromatograph was equipped with a thermal conductivity detector. Peak areas of the samples were compared against standard curves to 8 determine concentrations (Chen et al, 2012). Seasonal cumulative GHG fluxes were calculated by modifying the equation described by Shaaban et al. (2015;Eq. 2):
Equation 2
where R_i = gas emission rate (mg m-2 hr-1 for CO2 and μg m-2 hr-1 for CH4 and N2O), D_i = number of sampling days in a season, and 17.38 = number of weeks in each period, assuming these conditions were representative of the annual cycle (see Table 1).
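The two-step calculation (headspace concentration change to areal flux via the Ideal Gas Law, then seasonal accumulation) can be sketched as below. The molar-volume conversion is standard, but the weekly scaling follows our reading of the modified Eq. 2, which is not reproduced in full here, so treat that part as an assumption:

R_GAS = 8.314     # J mol^-1 K^-1
P_ATM = 101325.0  # Pa

def chamber_flux(dC_ppm, dt_h, headspace_m3, area_m2, molar_mass_g, temp_c):
    # Flux (mg m^-2 d^-1) from a headspace mixing-ratio change dC_ppm over
    # dt_h hours, using n = PV/RT for the moles of air in the headspace.
    mol_air = P_ATM * headspace_m3 / (R_GAS * (temp_c + 273.15))
    d_mol_gas = dC_ppm * 1e-6 * mol_air
    mg = d_mol_gas * molar_mass_g * 1e3
    return mg / area_m2 / dt_h * 24.0

def seasonal_cumulative(daily_rates, weeks=17.38):
    # Seasonal cumulative flux (mg m^-2), assuming the mean of the sampled
    # daily rates represents the whole period of `weeks` weeks (cf. Eq. 2).
    return sum(daily_rates) / len(daily_rates) * weeks * 7.0

# Example: CH4 (16 g/mol), +0.8 ppm over 1 h, 4.5 L headspace, 0.045 m^2 base.
f = chamber_flux(0.8, 1.0, 4.5e-3, 0.045, 16.0, temp_c=28.0)
print(f"daily flux ~ {f:.2f} mg CH4 m-2 d-1; "
      f"seasonal ~ {seasonal_cumulative([f]):.0f} mg m-2")

CO2-equivalents then follow by applying the gas-specific multipliers given below.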
Annual cumulative soil GHG fluxes (CH4 + N2O) were calculated by integrating cumulative seasonal fluxes. These estimations did not account for soil CO2 values, as our methodology with dark chambers only accounted for emissions from respiration and excluded uptake from primary productivity. The CO2-equivalent (CO2-eq) values were estimated by multiplying CH4 and N2O emissions by 25 and 298, respectively (Solomon, 2007), which represent the radiative balance of these gases (Neubauer, 2021).
Statistical analyses
GHG flux data were tested for normality through Kolmogorov-Smirnov and Shapiro-Wilk tests. The data were then analysed for spatial and temporal differences with one-way Analyses of Variance (ANOVA), where site and season were the predictive factors and the replicate (chamber) was the random factor of the model. When data were not normal, they were transformed (log10 or 1/x) to comply with the assumptions of normality and homogeneity of variances. Some variables were not normally distributed despite transformations and were analysed with the non-parametric Kruskal-Wallis test and Mann-Whitney U test. A Pearson correlation test was run to evaluate the correlation of GHG fluxes with measured environmental factors. Analyses were done with SPSS (v25, IBM, New York, USA), and values are presented as mean ± standard error (SE).
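The described pipeline maps onto standard SciPy calls; the minimal sketch below (ours, with synthetic data, and simplified to a one-way comparison — the paper's mixed model with chamber as a random factor would require a dedicated package) illustrates the normality check, the non-parametric fallback, and the correlation test:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
flux_site_a = rng.lognormal(mean=1.0, sigma=0.6, size=15)  # synthetic CH4 fluxes
flux_site_b = rng.lognormal(mean=1.8, sigma=0.6, size=15)
soil_temp = rng.normal(26.0, 2.0, size=15)

# Shapiro-Wilk normality test; log10-transform if violated, as in the text.
if stats.shapiro(flux_site_a).pvalue < 0.05:
    flux_site_a, flux_site_b = np.log10(flux_site_a), np.log10(flux_site_b)

print(stats.f_oneway(flux_site_a, flux_site_b))   # parametric comparison
print(stats.kruskal(flux_site_a, flux_site_b))    # non-parametric alternative
print(stats.pearsonr(flux_site_a, soil_temp))     # flux vs environmental driver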
Soil physicochemical properties
Soil physical and chemical parameters (mean values, 0-30 cm) varied among sites (Table 2; see full results of statistical analyses in Supplementary Material). As expected, gravimetric moisture content was highest in the coastal wetlands and wet pasture (> 26%) and lowest in the sugarcane and the dry pasture (< 14%). All soils were acidic, especially the freshwater tidal forest and the wet pastures with values < 5 throughout the sediment column; mangroves had the highest pH with 6.0 ± 0.1. The lowest EC was recorded in the pastures (247 ± 38 and 190 ± 39 µS cm-1 for the dry and wet pasture, respectively), and the highest in the three natural coastal wetlands with 1,418 ± 104, 8,049 ± 276 and 8,930 ± 790 µS cm-1 for the freshwater tidal wetland, saltmarsh and mangroves, respectively.
Soil bulk density was highest in sugarcane (1.5 ± 0.1 g cm-3) and lowest in the freshwater tidal wetland (0.6 ± 0.1 g cm-3). For all sites, %C was highest in the top 10 cm of the soil and decreased with depth, with the highest values in the freshwater tidal wetland (5.1 ± 0.6%) and the lowest in the saltmarsh (1.2 ± 0.1%). Soil %N ranged from 0.1 ± 0.0 to 0.4 ± 0.1% at all sites, except in the freshwater tidal wetland, where it reached values of 0.6 ± 0.0% in the top 10 cm (Table 2).
Greenhouse gas fluxes
Soil emissions for CO2 were significantly different among sites and times of the year (t = 155.09, n = 237, p < 0.001; Fig. 2a).
The highest CO2 emissions were measured during the wet-hot period in the dry pasture, where values reached 20.31 ± 1.95 g m-2 d-1, while the lowest values were measured in the saltmarsh, the only site that acted as a sink of CO2 with an uptake rate of -0.59 ± 0.15 g m-2 d-1. In the pastures, CO2 emissions were twice as high when dry, with cumulative annual emissions of 5,748 ± 303 g m-2 y-1, compared to when wet, with 2,163 ± 465 g m-2 y-1. For the coastal wetlands, cumulative annual CO2 emissions were highest in the freshwater tidal forest with 2,213 ± 284 g m-2 y-1, followed by mangroves with 1,493 ± 111 g m-2 y-1, and lowest at the saltmarsh with uptake rates of -264 ± 29 g m-2 y-1.
The wet pasture had the highest total cumulative soil GHG emissions (CH4 + N2O) with 56,124 CO2-eq kg ha-1 y-1, followed by the dry pasture with 23,890 CO2-eq kg ha-1 y-1 and sugarcane with 7,142 CO2-eq kg ha-1 y-1, while the coastal wetlands had the lowest values, with mangroves and freshwater tidal forests emitting the least. Overall, the three coastal wetlands measured in this study had lower total cumulative GHG emissions at 1,263 CO2-eq kg ha-1 y-1 compared to the alternative agricultural land uses, which emitted 87,156 CO2-eq kg ha-1 y-1.
Greenhouse gas emissions and environmental factors
Overall, we found that no single parameter measured in this study could explain GHG fluxes for all sites except land use.

Discussion

In this study, we found that the three coastal tropical wetlands measured (mangroves, saltmarshes and freshwater tidal forests) had significantly lower GHG emissions compared to two alternative land uses common in tropical Australia (sugarcane and grazing pastures). Notably, we found that coastal wetlands had 200 times lower CH4 emissions and seven times lower N2O emissions compared to wet pastures and sugarcane soils, respectively. While future studies should measure GHG from other wetlands and land uses, and within other tropical regions, these results support the idea that converting unused agricultural land to coastal wetlands could result in significant GHG mitigation.
The variability of GHG fluxes was best explained by land use and wetland type; however, some trends with seasons were evident. For instance, CO2 and N2O emissions were lowest during the dry-cool periods. Reduced emissions at low temperatures are expected, as temperature is a main driver of any metabolic process, including respiration and nitrification-denitrification. Mangroves tend to have higher CO2 emissions as temperature increases (Liu and Lai, 2019), and terrestrial forests have significantly higher N2O emissions during warm seasons (Schindlbacher et al., 2004). Emissions of CH4 also tend to increase with temperature, as the activity of soil methane-producing microbes (Ding et al., 2004) and the availability of carbon are higher in warmer conditions (Yvon-Durocher et al., 2011). However, most of the studies on GHG fluxes were conducted in temperate and subtropical locations, where differences in temperature throughout the year are much larger than those in tropical regions. For tropical regions, increased GHG emissions are likely to be strongly affected by the "Birch effect", which refers to a short-term but substantial increase of respiration from soils under the effect of precipitation during the early wet season (Fernandez-Bou et al., 2020).
The main factor associated with GHG fluxes was land use and type of wetland. Notably, the coastal wetlands, even the freshwater tidal forests, had much lower emissions compared to the wet pastures. This large difference could be attributed to the presence of terminal electron acceptors in the soils (e.g. iron, sulphate, manganese) of the coastal wetlands, which could inhibit methanogenesis (Kögel-Knabner et al., 2010; Sahrawat, 2004). Sulphate-reducing bacteria are also likely to outcompete methane-producing bacteria (methanogens) in the presence of high sulphate concentrations in tidal wetlands, resulting in low CH4 production. Competition between methanogens and methanotrophs may result in a net balance of low CH4 production despite freshwater conditions (Maietta et al., 2020). Additionally, microorganisms living within the bark of Melaleuca trees can consume CH4 (Jeffrey et al., 2021), so it is possible that similar bacteria within the soil could reduce CH4 emissions. Interestingly, variability within CH4 fluxes among sites was very high, despite the sites being very close to each other (Fig. 1b).
These differences highlight the importance of land use for GHG fluxes; land use is likely to significantly alter microbial community composition and abundance, which can change rapidly over small spatial scales (Martiny et al., 2006; Drenovsky et al., 2009).
Our results are consistent with other studies, which have shown the importance of land use in GHG emissions. For instance, in a Mediterranean climate, drained agricultural land use types, pasture and corn, were larger CO2 emitters compared to restored wetlands (Knox et al., 2015). Clearing of wetlands for agricultural development, such as the drainage of peatlands, results in very high CO2 emissions (Nieveen et al., 2005; Veenendaal et al., 2007; Hirano et al., 2012), and restoration of these wetlands could decrease these emissions (Cameron et al., 2020). Additionally, some of the wetland types, such as marshes, were occasional sinks of CO2 and CH4, consistent with previous studies where intertidal wetlands were sinks of GHG at least under some conditions or during some times of the year (Knox et al., 2015; Maher et al., 2016).
The fluxes measured in the coastal wetlands of this study (-1,191 to 10,970 mg m-2 d-1 for CO2, -0.2 to 3.9 mg m-2 d-1 for CH4, and -0.2 to 2.8 mg m-2 d-1 for N2O) are within the range of those measured in other wetlands worldwide. For CO2, fluxes can range between -139 and 22,000 mg m-2 d-1 (Stadmark and Leonardson, 2005; Morse et al., 2012); for CH4, from -1 to 418 mg m-2 d-1 (Allen et al., 2007; Mitsch et al., 2013; Cabezas et al., 2018); and for N2O, from -0.3 to 3.9 mg m-2 d-1 (Hernandez and Mitsch, 2006; Morse et al., 2012). Despite being in a tropical region, the fluxes from this study were not notably higher compared to wetlands in other climates. The generally lower nitrogen pollution in Australia's soils and waterways compared to other countries may partially explain the lower emissions. However, the GHG flux measurements from this study did not account for the effects of vegetation, which can alter fluxes. For instance, some plant species of rice paddies (Timilsina et al., 2020) and Miscanthus sinensis (Lenhart et al., 2019) can increase N2O emissions, and some tree species can facilitate CH4 efflux from the soil (Pangala et al., 2013). Finally, changes in emissions between low and high tides were detected for CO2 and N2O. Thus, future studies that include vegetation and changes within tidal cycles will improve GHG flux estimates for coastal wetlands.
Management implications
Under the Paris Agreement, Australia has committed to reducing GHG emissions 26-28% below its 2005 levels by 2030.
With annual emissions of 153 million tonnes of carbon dioxide equivalent (Mt CO2-eq y-1), Queensland is a major GHG emitter in Australia (~28.7% of the total in 2016; DES, 2016). Of these emissions, about 18.3 Mt CO2-eq y-1 (14%) are attributed to agriculture, while land-use change and forestry emit 12.1 Mt CO2-eq y-1 (DES, 2016). Production of CH4 from ruminant animals, primarily cattle, contributes 82% of agriculture-related emissions (DES, 2016). Therefore, any GHG mitigation strategy from land-use change could be important for Australia to achieve its national goals.
This study supports the application of three management actions that could reduce GHG emissions. First, the conversion of ponded pastures to coastal wetlands is likely to reduce soil GHG emissions. Our results showed that wet pastures emit 56 ton CO2-eq ha-1 y-1 of total GHG (CH4 + N2O), compared with 0.2 ton CO2-eq ha-1 y-1, 0.1 ton CO2-eq ha-1 y-1 and 0.9 ton CO2-eq ha-1 y-1 from mangroves, freshwater tidal forest, and saltmarshes, respectively. This implies that about 55 ton CO2-eq ha-1 y-1 of emissions from the soils could potentially be avoided by converting wet pastures to coastal wetlands. The carbon mitigation for GHG emissions from soil alone could provide ~AUD 860 ha-1 y-1, assuming a carbon value of AUD 15.37 per ton of CO2-eq (Australian Government Clean Energy Regulator, 2018). This mitigation could be added to the carbon sequestration through sediment accumulation and tree growth that results from wetland restoration. Legal enablers in Queensland are in place to manage unproductive agricultural land this way (Bell-James and Lovelock, 2019), and this could provide an alternative income source for farmers.
A second management option would be to reduce the time pastures are kept under water. Dry pastures produced significantly less CH4 (~0.005 kg ha⁻¹ d⁻¹) than wet pastures (6 kg ha⁻¹ d⁻¹). For comparison, an average cow produces 141 g CH4 d⁻¹ (McGinn et al., 2004), and our study area supported around 900 cattle over 2,500 ha throughout the year, equivalent to 19 kg ha⁻¹ y⁻¹, compared to 2 kg ha⁻¹ y⁻¹ and 2,090 kg ha⁻¹ y⁻¹ of CH4 from dry and wet pasture, respectively. This implies that nearly 99% of the CH4 emissions came from wet pastures, while dry pasture and grazing cattle had a low share of total CH4 emissions. Therefore, management of the wet pastures used to feed grazing cattle in Queensland may be a significant opportunity to reduce agriculture-related CH4 emissions. Future studies should increase the number of ponded-pasture sites to account for variability in hydrology, fertilisation, and cattle use. However, the very large difference (2-3 orders of magnitude) between dry and ponded pastures provides confidence that pasture management could provide significant GHG mitigation throughout the year.
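The per-hectare CH4 comparison above can be reproduced from the quoted stocking rate and fluxes; a minimal Python sketch, in which all inputs are the values stated in the paragraph above:

# Annual per-hectare CH4 budget of the study area, from the quoted figures.
AREA_HA = 2500
N_CATTLE = 900
CH4_PER_COW_G_PER_DAY = 141         # McGinn et al. (2004)
DRY_PASTURE_KG_HA_DAY = 0.005
WET_PASTURE_KG_HA_DAY = 6.0

cattle_kg_ha_y = N_CATTLE * CH4_PER_COW_G_PER_DAY * 365 / 1000 / AREA_HA
dry_kg_ha_y = DRY_PASTURE_KG_HA_DAY * 365
wet_kg_ha_y = WET_PASTURE_KG_HA_DAY * 365

print(f"cattle:      {cattle_kg_ha_y:6.1f} kg CH4 ha-1 y-1")  # ~18.5, i.e. ~19
print(f"dry pasture: {dry_kg_ha_y:6.1f} kg CH4 ha-1 y-1")     # ~1.8, i.e. ~2
print(f"wet pasture: {wet_kg_ha_y:6.0f} kg CH4 ha-1 y-1")     # ~2190
# The rounded daily flux of 6 kg ha-1 d-1 gives ~2190 kg ha-1 y-1; the 2090
# quoted in the text implies an unrounded daily flux of ~5.7 kg ha-1 d-1.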
Finally, fertiliser management in sugarcane could reduce N2O emissions. Higher N2O emissions of 17.63 mg m⁻² d⁻¹ were measured in sugarcane following fertilisation during the dry-hot season. Comparatively, natural wetlands had low N2O emissions (0.16 to 2.79 mg m⁻² d⁻¹); the saltmarsh was even an occasional sink. Thus, improved management of fertiliser applications could result in GHG emission mitigation. Options include split application of nitrogen fertiliser in combination with low irrigation, reduction in fertiliser application rates, substitution of urea with nitrate-based fertiliser (Rashti et al., 2015), removing the mulch layer before fertiliser application (Pinheiro et al., 2019; Xu et al., 2019; Zaehle and Dalmonech, 2011), or conversion of unproductive sugarcane to coastal wetlands.
Conclusion
The GHG emissions from three coastal wetlands in tropical Australia (mangroves, saltmarsh and freshwater tidal forests) were consistently lower than those from two common agricultural land uses of the region (sugarcane and pastures) across three climatic conditions (dry-cool, dry-hot and wet-hot). Ponded pastures emitted 200 times more CH4, and sugarcane seven times more, than any natural coastal wetland. If these high emissions are persistent in other locations and in other tropical regions, conversion of pastures and sugarcane to similar coastal wetlands could provide significant GHG mitigation. As nations try to reach their emission reduction targets, projects aimed at converting or restoring coastal wetlands can financially benefit farmers and provide additional co-benefits derived from coastal wetland restoration.
Competing interests
The authors declare that they have no conflict of interest. | 2021-09-26T07:19:53.052Z | 2021-09-16T00:00:00.000 | {
"year": 2021,
"sha1": "000f82c2a6b37b8277c7f9939f337b7c019badb2",
"oa_license": "CCBY",
"oa_url": "https://bg.copernicus.org/articles/18/5085/2021/bg-18-5085-2021.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c114c0f73ef71da6b047e606dbddf4306f7a861c",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
225296875 | pes2o/s2orc | v3-fos-license | DSS LANDS: A Decision Support System for Agriculture in Sardinia
Recently, the application of DSSs in the agricultural sector has been increasing strongly, driven by continuous climate change and the need to conduct more productive and sustainable agriculture. In this paper, we describe the prototype agricultural DSS LANDS, developed for monitoring the main crop productions in Sardinia. The DSS collects, organizes, integrates, and analyses several types of data with different mathematical models. In particular, a case study on forecasting potato late blight is presented. We employed the Negative Prognosis model and the Fry model to forecast the periods in which fungicide treatments against the appearance of the pathogen should be carried out. The experiments allowed us to outline the best criteria for local conditions, and the evaluation showed the effectiveness of the approach in a concrete case study.
Introduction
Decision Support Systems have become notable tools for enhancing agricultural production. Agricultural production is highly dependent on weather, climate, and water availability, and is adversely affected by weather- and climate-related disasters [1]. Natural disasters can result in complex issues related to crop production. It is not always possible to prevent the occurrence of these natural events, but proper planning can considerably reduce their effects. So far, farmers have made in-season decisions based on their experience and intuition. Nevertheless, experience alone is insufficient for long-term decision-making that can improve yield productivity and avoid unnecessary costs related to harvesting and the use of pesticides and fertilizers. In addition, according to the Food and Agriculture Organization (FAO), by 2050 climate change is expected to cause water scarcity and serious yield declines for the most important crops in developing countries. This means that agriculture will have to adapt to climate change, but it can also help mitigate the effects of climate change through recent technologies such as Decision Support Systems (DSSs).
A DSS can be defined as a computer-based system that supports decision makers in solving a decision problem [2]. These tools can lead users through clear steps and suggest optimal decision paths, or may act more as information sources to improve the evidence base for decisions [3]. Recently, they have been introduced into agriculture as an indispensable tool to face the growing challenge of conducting sustainable agriculture, which must increase the quantity and quality of agricultural output while using fewer inputs (water, energy, fertilizers, pesticides, etc.). This modern farming approach, based on the use of technologies to sense conditions and decide what is "right", is called Precision Farming (PF) [4,5]. Nowadays, many governments around the world are investing large amounts of money to encourage researchers and companies to develop Decision Support Systems that use agricultural data to support the adoption of Precision Farming.
In this paper, we describe the DSS LANDS (LAORE Architecture Network Development for Sardinia) and the tests conducted with it. LANDS supports Sardinian farmers in decision-making and manages different data in order to forecast and increase yield productivity and decrease the costs of agricultural operations. The DSS has been developed in collaboration with the LAORE Sardinia Agency, which provides advisory, education, training, and assistance services in the regional agricultural sector.
The paper is structured as follows. Section 2 provides background information and outlines the reasons that drove the adoption and non-adoption of DSSs in Europe, especially in Italy. Section 3 describes the architecture and the forecasting models used in the short case study. We conclude the paper in Section 4.
Decision Support Systems
DSSs have evolved significantly since their early development in the 1970s. Over the past three decades, DSSs have taken on both narrower and broader definitions, while other systems have emerged to assist specific types of decision-makers faced with specific kinds of problems [2]. One of the first definitions was given by Keen and Morton [6], who defined decision support systems as computer systems that collect resources and use the capabilities of computers to increase the quality of decisions by focusing on semi-structured problems. More recently, a DSS has been defined as a human-computer system that collects, processes, and provides information [7]. In any case, researchers agree that the main objective of DSSs is to support and improve decision making [8].
A DSS can be composed of four main subsystems: a Data Management subsystem, a Model Management subsystem, a Knowledge-based subsystem, and a User Interface subsystem [8]. The Data Management subsystem manages the data that will be used as information for making decisions in the Knowledge-based subsystem. The Model Management subsystem consists of a variety of models that assist decision makers in decision making. The Knowledge-based subsystem is the heart of the system; it manages the problem-solving process to generate the final solution. The User Interface subsystem enables users to interact with the system and obtain information.
Generally, DSSs have been classified into three categories according to the type of decision problem: structured, semi-structured, and unstructured.
Agricultural Decision Support System
DSSs have been introduced into agriculture as an indispensable tool mainly for two reasons: first, to face the continuous climate changes that cause serious damage to production; second, to conduct a more sustainable agriculture that increases the quantity and quality of agricultural production while using less water, energy, fertilizer, and pesticide, that is, to support Precision Farming technologies.
In the last decade, their applications have increased thanks to the advent of new technologies, such as Cloud Computing, Data Mining, Machine Learning, and Artificial Intelligence, and to major investments by numerous research agencies and governments all over the world.
Agricultural DSSs perform the following activities: (i) they collect, organize, and integrate several types of information required for producing a crop; (ii) they analyse and interpret the information; and (iii) they use the analysis to recommend the most appropriate action or action choices. For example, DSSs can provide farmers with information on plant growth or plant disease risk, useful for scheduling treatments according to the actual needs of the plant [9]. However, designing a DSS is quite complex; it requires knowledge from various disciplines, such as crop agronomy, computer hardware and software, and the mathematics and statistics needed to analyse data. For example, to understand crop growth, it is necessary to know how each variable affects it [10].
At a global level, there is no single agricultural DSS adopted worldwide; rather, over the years several DSSs have been developed for a wide range of cultivation practices concerning crop management and crop irrigation. Many of them have been developed and evaluated with different crops and different climatic conditions. Manos et al. (2004) [11] identified five fields of application: Diagnostic-Forecasting DSSs, Advisory DSSs, Control DSSs, Educational-Informational DSSs, and Operational DSSs. Although the use of DSSs simplifies decision-making in agricultural production and is applied in several sectors, DSSs have not been adopted with great enthusiasm by farm managers; their adoption has been low. Much research has been conducted to understand the reasons for DSS non-adoption in agriculture. These studies identified the following factors influencing the adoption of DSSs by farmers: profitability, user-friendly design, time required for DSS usage, credibility, adaptation of the DSS to the farm situation, information updates, and the level of knowledge of the user [12]. However, many of these barriers have been reduced by the increased availability of personal computers, increased access to the Internet, and increased development of web-based systems [13]. The adoption and development of agricultural DSSs in Europe was faster than in Italy. The factors that have limited their diffusion in Italy were identified in Mipaff (2017) [14], which recognizes as the main cause the difficulty of using precision technologies in a heterogeneous territory with particular characteristics.
In the European context, Holzworth et al. (2015) [15] identify the most relevant DSSs from 2000 to today: DSSAT, APSIM, CropSyst, EPIC, and STICS. The Decision Support System for Agrotechnology Transfer (DSSAT) is a collection of independent programs that operate together. It incorporates models of 16 different crops with software that facilitates the evaluation and application of the crop models for different purposes [16]. The Agricultural Production Systems Simulator (APSIM) contains an array of modules for simulating the growth, development, and yield of crops, pastures, and forests and their interactions with the soil. It has been used in a broad range of applications, including support for on-farm decision making, farming-systems design for production or resource-management objectives, and assessment of the value of seasonal climate forecasting [17]. The cropping systems simulation model (CropSyst) is a suite of programs designed to work cooperatively, providing users with a set of tools to analyse the productivity and the environmental impact of crop rotations and cropping-systems management at various temporal and spatial scales [18]. The Environment Policy Integrated Climate (EPIC) model is able to manage decisions related to drainage, irrigation, water efficiency, erosion (wind and water), atmospheric conditions, fertilizer, the control of pests, sowing dates, tillage, and the management of cultivation waste [19]. The Simulateur mulTIdisciplinaire pour les Cultures Standard (STICS) simulates crop growth as well as soil water and nitrogen balances driven by daily climatic data. It calculates both agricultural variables (yield, input consumption) and environmental variables (water and nitrogen losses) [20].
Whereas in the broader European context several DSSs have been adopted since their appearance in the agricultural sector, in Italy few DSSs have emerged. Recently, their adoption has been intensifying thanks to the increasing use of Precision Farming technologies. The diffusion of these technologies has been slow due to the following factors: heterogeneous environments, territorial characteristics, age and level of education, and company size [14]. Employment and scientific research in this area are incentivised by the Ministry of Agricultural, Food and Forestry Policies, which in Mipaff (2017) [14] emphasizes the importance of developing specific tools for data analysis with DSS functions to tackle the ongoing climate changes that are compromising the main crops of the territory. To date, the DSSs that have emerged in Italy address crop management, mainly for wine and cereal production, and irrigation management. Among the major contributions in the literature are Vite.net for decision support in vineyards, Granoduro.net for decision support in durum wheat cropping, and IRRINET for irrigation decision support. Vite.net was developed for the sustainable management of vineyards and is intended for the vineyard manager. The system provides in real time several kinds of information for each vineyard, such as the defence against fungal diseases and insects, the growth of the plant, thermal and water stresses, and many others [9]. Granoduro.net provides plot-specific and up-to-date decision support about weather, fertilisation, crop growth, weed control, and disease and mycotoxin risk [21]. The IRRINET system provides farmers with day-by-day information on how much and when to irrigate crops, implementing real-time irrigation scheduling [22]; the latter is also used in Sardinia.
The contribution of this paper is the development of an agricultural DSS for monitoring the main crops in Sardinia, where DSS adoption has been slow owing to the conformation and heterogeneity of the territory, which requires the development of specific decision support systems.
DSS LANDS Project
DSS LANDS was developed to help LAORE technicians and Sardinian farmers in decision-making about agricultural management based on the principles of Precision Farming. It was designed mainly to enable data-driven decisions, not to replace the decision maker.
The goals of LANDS are to: (i) optimize resource management through the reduction of certain inputs (e.g., chemicals and natural resources); (ii) predict crop risk situations (e.g., diseases, weather alerts); (iii) increase the quality of decisions for field management; and (iv) reduce environmental impact and production costs. It integrates different, specific modules for monitoring the main crop productions in Sardinia: citrus, artichoke, wheat, corn, olive, potato, peach, tomato, rice, and vine. Currently, the proposed DSS is a prototype being tested for monitoring the potato crop.
Architecture
The agricultural DSS is composed of three components [24]: (i) an integrated system for semi-real-time monitoring of crop components and storage of their data, whose sources include ARPAS (Regional Agency for the Protection of the Sardinian Environment) weather stations, field sensors, and external providers; (ii) a model system that performs, through several mathematical and forecasting models, a cross and dynamic analysis of the different types of data, whose elaboration and interpretation allow us to provide the best strategies to be applied in the field in order to forecast possible risk situations that could damage production [25,26]; and (iii) a cross-platform application used by LAORE technicians and farmers to upload crop data collected during field surveys and to visualize up-to-date information for managing the cultivation in the form of alerts and decision support. The application runs on smartphones, tablets, and personal computers with different operating systems. These features allow farmers to take advantage of the application without worrying about the device in use, to access it from any place (e.g., in the field or at the company), and to simplify and enhance the agricultural management process. All information is presented in a graphic format that uses symbols and colors to advise and inform the user of the status of each crop-management component in an immediate, effective, and unambiguous way. Internet connectivity also allows timely updating of the features as soon as new analysis results are available, without any user intervention.
Figure 1 shows a conceptual diagram of the system, with three main stages. In the first stage, data are collected at fixed intervals from different sources: weather stations, external providers, and uploads to the cross-platform application made by LAORE technicians during field surveys. In the second stage, the data are received by the Data Receiver, which manages and controls the quality of the data and then stores them in Env DB (the environmental database) and Potato DB. The data are then analysed with several agricultural mathematical models.
Finally, in the third stage, the output is stored and sent to the cross-platform application for interpretation by the decision maker. The output is visualized in the application as graphs and guidelines through different, crop-specific dashboards. Each dashboard is a collection of widgets that gives the farmer an overview of the metrics and lets them monitor many metrics at once, so they can quickly check the health of their cultivation.
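To make the second stage of this flow concrete, the following minimal Python sketch mimics a Data Receiver that validates records before storage; the record schema, plausibility ranges, and function names are illustrative assumptions, not the actual LANDS implementation.

from dataclasses import dataclass

@dataclass
class WeatherRecord:
    """One hourly observation from a weather station (illustrative schema)."""
    station_id: str
    hour: int            # hour of day, 0-23
    temperature_c: float
    humidity_pct: float
    rainfall_mm: float

def passes_quality_check(rec: WeatherRecord) -> bool:
    """Reject physically implausible values before storage (stage 2)."""
    return (-20.0 <= rec.temperature_c <= 50.0
            and 0.0 <= rec.humidity_pct <= 100.0
            and rec.rainfall_mm >= 0.0)

def receive(records, env_db):
    """Data Receiver: validate each incoming record, then store it."""
    for rec in records:
        if passes_quality_check(rec):
            env_db.append(rec)  # stand-in for a real database insert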
Case study
LANDS was tested during the 2018 spring season to forecast and tackle the risk of Phytophthora infestans cryptogamic attacks on the potato crop, a disease also known as late blight or potato blight. Potato blight is one of the most devastating diseases of potato worldwide, including in Sardinia. In the region, continuous climate changes, such as closely spaced rains, high humidity, and abrupt temperature changes, are putting potato production at risk. For this reason, the experimentation phase started as a support for the decision-making process for this crop.
The tests were conducted in potato fields monitored and managed by the LAORE Agency. We implemented two disease-prediction models retrieved from the literature: the Negative Prognosis model [23] and the Fry model [27,28]. The joint use of the two algorithms allows forecasting of the periods in which it is opportune to carry out fungicide treatments against the appearance of the pathogen.
The models identify the number of treatments needed during a growing season as a function of time and of meteorological data acquired continuously from ARPAS weather stations.
The Negative Prognosis model predicts the period in which late blight epidemics are not likely to occur and, hence, the timing of the first treatment. In order to achieve an accurate prediction, the system receives, manages, and stores at a fixed frequency the following data: hourly temperature, hourly humidity, hourly winds, degree days calculated with different methodologies, and ETo calculated with different mathematical formulas. From these data, the model takes as input the hourly temperature (°C), relative humidity (%), and rainfall (mm). After the server has received the input parameters, the model calculates the risk values and the accumulated risk values with different formulas. The accumulated risk is the value that allows the date of the first treatment to be determined. Figure 2 shows the trend of the accumulated risk index recorded from 12/03/2018 to 29/04/2018.
Figure 2. Accumulated risk recorded from 12/03/2018 to 29/04/2018
The tests conducted allowed us to identify a local threshold that indicates when the disease is expected to occur. The warning period begins when the accumulated risk value exceeds the threshold of 130, and the first treatment is applied when the accumulated risk reaches 150. The experiments carried out also allowed us to outline the best criteria for local conditions through the Fry model. Treatments after the first are indicated when one of the following cases occurs: (i) the accumulated precipitation is greater than 20 mm; (ii) the risk value of the previous night is 8 and the sum of the blight units exceeds 40, for a susceptible cultivar.
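The underlying risk-index formulas are not reproduced in the paper, but the decision rules stated above can be encoded directly; a minimal Python sketch follows, in which the thresholds are the locally calibrated values of 130 and 150 and all names are ours:

WARNING_THRESHOLD = 130    # accumulated risk at which the warning period starts
TREATMENT_THRESHOLD = 150  # accumulated risk at which the first treatment is due

def first_treatment_day(daily_accumulated_risk):
    """Negative Prognosis rule: given (date, accumulated_risk) pairs in
    chronological order, return the first date on which the treatment
    threshold is reached (None if it is never reached)."""
    for day, risk in daily_accumulated_risk:
        if risk >= TREATMENT_THRESHOLD:
            return day
    return None

def follow_up_treatment_due(accumulated_rain_mm, previous_night_risk,
                            blight_units_sum):
    """Fry-model rules for treatments after the first, for a susceptible
    cultivar, as stated in the text."""
    return (accumulated_rain_mm > 20
            or (previous_night_risk == 8 and blight_units_sum > 40))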
Conclusion
In the present paper we have seen how DSSs are widely used in the agricultural sector. They have become notable and indispensable tools for conducting a more sustainable and productive agriculture, which is difficult to sustain under continuous climate change. Although several DSSs for monitoring various crops have been developed, their adoption has been slow for two reasons: technical limitations of the DSSs and farmer attitudes towards them.
Today, the situation is changing thanks to the increased availability of personal computers, increased access to the Internet, and increased development of web-based systems. Even so, in Italy, and especially in Sardinia, few DSSs have been adopted. The major contribution of this work is the development of the DSS LANDS, in collaboration with the LAORE Sardinia Agency, to monitor the main crops in Sardinia, a place where the adoption and diffusion of DSSs is complicated by the heterogeneity of the territory. Currently, the DSS is a prototype being tested for monitoring the potato crop. In particular, through the Negative Prognosis model and the Fry model, the DSS elaborates weather data from meteorological stations to forecast the periods in which it is opportune to carry out fungicide treatments against late blight outbreaks. The short case study allowed us to adapt, calibrate, and outline the local parameters in order to produce accurate predictions. However, LANDS is at an early stage of the project, and it is still too early to assess the benefits of its use in the field. Future experiments will allow us to validate the predictive dynamical models and to evaluate whether LANDS is a tool able to respond to the challenges emerging in the agricultural field according to Precision Farming methods.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2020-10-28T18:56:41.018Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "7d226da1afcbb6f0a39394f3bf349721414ba5dd",
"oa_license": "CCBY",
"oa_url": "https://hightechjournal.org/index.php/HIJ/article/download/39/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f07b6702279e92a3e83eee2c1dac580870e71fa4",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
231687842 | pes2o/s2orc | v3-fos-license | Color Tuning of Face-Selective Neurons in Macaque Inferior Temporal Cortex
Abstract What role does color play in the neural representation of complex shapes? We approached the question by measuring color responses of face-selective neurons, using fMRI-guided microelectrode recording of the middle and anterior face patches of inferior temporal cortex (IT) in rhesus macaques. Face-selective cells responded weakly to pure color (equiluminant) photographs of faces. But many of the cells nonetheless showed a bias for warm colors when assessed using images that preserved the luminance contrast relationships of the original photographs. This bias was also found for non-face-selective neurons. Fourier analysis uncovered two components: the first harmonic, accounting for most of the tuning, was biased toward reddish colors, corresponding to the L>M pole of the L-M cardinal axis. The second harmonic showed a bias for modulation along a blue-yellow axis, corresponding to the S-cone axis. To test what role face-selective cells play in behavior, we related the information content of the neural population to the distribution of face colors. The analyses show that face-selective cells are not optimally tuned to discriminate face colors, but are consistent with the idea that face-selective cells contribute selectively to processing the green-red contrast of faces. The research supports the hypothesis that color-specific information related to the discrimination of objects, including faces, is handled by neural circuits that are independent of shape-selective cortex, as captured by the multistage parallel-processing framework of IT (Lafer-Sousa and Conway, 2013).
Introduction
What role does color play in the neural representation of objects? Some cells in inferior temporal cortex (IT) are shape selective, and many of these cells are also modulated by color (Komatsu and Ideura, 1993; Edwards et al., 2003). Do shape-selective IT cells play a role in discriminating among the typical colors of the shapes to which they are tuned? We take up this question by measuring the color responses of face-selective neurons in fMRI-identified face patches of the macaque monkey; we evaluate the possible role of the color tuning by relating the information content at the neural population level to the distribution of face colors (Xiao et al., 2017).
The extent to which face-selective neurons in IT are color-tuned is unsettled. A widespread but not universal assumption is that face cells do not carry color information. The assumption is supported by the observation that luminance contrast is by itself sufficient for face recognition (Kemp et al., 1996; Sinha et al., 2006) and the lack of cross face/color adaptation in psychophysical experiments (Yamashita et al., 2005). Consequently, many studies of face perception use exclusively colorless images (Kanwisher et al., 1997; Ohayon et al., 2012). But while color is not essential for determining face identity, color nonetheless relays important information related to social communication, about health, emotion, and sex (Setchell and Wickings, 2005; Changizi et al., 2006; Waitt et al., 2006; Nestor and Tarr, 2008; Gerald et al., 2009; Leopold and Rhodes, 2010; Webster and MacLeod, 2011; Lefevre et al., 2013; Nakajima et al., 2017; Petersdorf et al., 2017). Moreover, faces viewed under low-pressure sodium light (which impairs retinal mechanisms for encoding color) have a paradoxical appearance: they appear green, regardless of race (Hasantash et al., 2019). Such seemingly anti-Bayesian phenomena may arise if neural representations and how they are decoded are optimal with respect to the statistics of the environment (Wei and Stocker, 2015); efficient coding thus predicts a neural population whose tuning curves best discriminate among the most common face colors (Hasantash et al., 2019). One possibility is that this face-relevant color information is encoded by face-selective neurons.
To what extent could face-selective cells discriminate among face colors? Two studies have glanced at the question. One found no sensitivity to color among face-selective neurons (Perrett et al., 1982). The other found that, of 22 cells, some showed higher firing rates for naturally colored face photographs compared with unnaturally colored photographs (Edwards et al., 2003), suggesting that face-selective neurons carry color signals. Quantitative measurements of the color-tuning functions of face-selective neurons have, to our knowledge, not been made, which precludes an answer to the question.
The discrimination potential of a population can be estimated by the Fisher information, which depends on the distribution of tuning peaks, tuning widths, and amplitude modulation across the neural population (Ganguli and Simoncelli, 2010; Fig. 1). If population activity is to be used to discriminate a given attribute, efficient coding predicts that its Fisher information should reflect the distribution of the attribute in the environment (Wei and Stocker, 2015). If face-selective cells were being used to discriminate face color, then the Fisher information across the population should reflect the natural distribution of face colors. Here, we use fMRI-guided electrode recording of neurons in the middle and anterior face patches of macaque monkeys. We discovered that face-selective cells, as a population, were biased for warm colors. The resulting Fisher information shows a striking dip that coincides with the peak in the distribution of face colors documented in a large database of measurements of human face colors (Xiao et al., 2017). The results suggest that face-selective neurons are not optimally distributed to enable the discrimination of face colors. We discuss what role the color responses of face-selective cells may play in visual processing.
Subjects
Three male rhesus macaques (Macaca mulatta), weighing 8-10 kg, were implanted with an MRI-compatible plastic (Delrin) chamber and headpost. The surgical implantation protocol has been described previously (Lafer-Sousa and Conway, 2013). The subjects are designated M1 (monkey 1), M2 (monkey 2), and M3 (monkey 3). M1 and M3 had chambers over the right hemisphere; M2 had a chamber over the left hemisphere. All procedures were approved by the Animal Care and Use Committee of the National Eye Institute and complied with the regulations of the National Institutes of Health.
Functional imaging targeting of face patches
The fMRI procedures we use for localizing face patches have been described (Tsao et al., 2006; Lafer-Sousa and Conway, 2013; Rosenthal et al., 2018). Two of the animals (M1, M2) are the same as the animals used in Lafer-Sousa and Conway (2013); the face-patch data and color-tuning data are the same as in the earlier reports. Here, we present an analysis of the fMRI color-tuning data restricted to the face patches of IT. All monkeys were scanned at the Massachusetts General Hospital Martinos Imaging Center in a Siemens 3T Tim Trio scanner. Magnetic resonance images were acquired with a custom-built four-channel magnetic resonance coil system with AC88 gradient insert, which increases the signal-to-noise ratio by allowing very short echo times, providing 1-mm³ spatial resolution and good coverage of the temporal lobe. We used standard echo planar imaging (repetition time = 2 s, 96 × 96 × 50 matrix, 1-mm³ voxels; echo time = 13 ms). Monkeys were seated in a sphinx position in a custom-made chair placed inside the bore of the scanner, and they received a juice reward for maintaining fixation on a spot presented at the center of the screen at the end of the bore. An infrared eye-tracker (ISCAN) was used to monitor eye movements, and animals were only rewarded by juice for maintaining their gaze within ≈1° of the central fixation target. Magnetic resonance signal contrast was enhanced using a microparticular iron oxide agent, MION (Feraheme, 8-10 mg/kg of body weight, diluted in saline, AMAG Pharmaceuticals), injected intravenously into the femoral vein just before scanning.
Visual stimuli were displayed on a screen subtending 41 by 31 degrees of visual angle (dva), 49 cm in front of the animal, using a JVC-DLA projector (1024 × 768 pixels). The subset of stimuli used here to localize face patches consisted of achromatic square photographs of faces and body parts, presented centrally on a neutral gray screen (≈25 cd/m²) and occupying 6°. They were shown in 16 32-s blocks (16 repetition times per block, repetition time = 2 s, two images per repetition) presented in one run sequence. The images were matched in average luminance to the neutral gray, maintaining roughly constant average luminance (≈25 cd/m²) throughout the stimulus sequence. For face stimuli, we used 16 unique front-facing images of unfamiliar faces (eight human, eight monkey), each repeated twice within a block. The bodies/body-parts block comprised 32 unique images of monkey and human bodies (no heads/faces) and body parts. Face patches were localized by contrasting responses to faces with responses to body parts. A total of 18 runs were used to localize face patches in M1 and M2, and 16 runs in M3.
Physiologic recordings
A plastic grid was fitted to the inside of the recording chamber to enable us to reproducibly target regions within the brain, following details reported previously (Conway et al., 2007). We used sharp epoxy-coated tungsten electrodes (FHC), propelled using a hydraulic manual advancer (Narishige). Voltage traces were digitized and saved with a Plexon MAP system (Plexon Inc.). Spike waveforms were sorted offline with the Plexon Offline Sorter, and single units were defined on the basis of waveform and interspike interval.
Recordings were performed in a light-controlled room, with the animals seated in sphinx position. Animals were acclimatized to head restraint to minimize head movement during recordings. Animals maintained fixation on a spot on a monitor 57 cm away; the monitor was a Barco CRT subtending 40 by 30 dva, operating at 85 Hz and at a resolution of 1024 × 768 pixels. Eye position was monitored throughout the experiments using an infrared eye-tracker (ISCAN). The monitor was color calibrated using a PR-655 SpectraScan spectroradiometer (Photo Research Inc.); we achieved 14-bit resolution for each phosphor channel using Bits++ (Cambridge Research Systems).

Figure 1. Simulated population of 50 cells responding to a circular variable and the corresponding Fisher information. A population with von Mises tuning curves of width t′, amplitude a′, and distribution of peaks p′ yields a Fisher information with one peak. Each of these three parameters can be individually adjusted to create a population with a different Fisher information. Uniformly increasing tuning width (t″) while holding a′ and p′ constant yields a two-peaked Fisher information. Independently adjusting the distribution of tuning-curve amplitudes (a″) or the distribution of the peaks (p″) creates an asymmetric Fisher information.
Stimuli
Screening stimuli consisted of 10 grayscale exemplars of each of the following categories: animals, buildings, human faces (front view), monkey faces (front view), human faces (3/4 view), monkey faces (3/4 view), fruits, furniture, monkey bodies (no face), human bodies (no face), places, technology objects, indoor places, natural scenes, and vehicles (see examples in Fig. 2). All stimuli were presented on a static luminance white-noise background of 7.5 by 7.5° of visual angle.
Stimuli for the color-tuning experiments were exemplars of faces of unfamiliar humans and monkeys, in front or 3/4 view (16 exemplars of the frontal view and 8 of the 3/4 view for each species), fruits known to the monkeys (16 exemplars), and bodies/body parts (without heads) of unfamiliar humans and monkeys (eight exemplars for each subcategory), yielding a total of 96 stimuli. For both sets of experiments using colored stimuli (the main condition, in which the colored stimuli preserved the luminance contrast of the original achromatic images, and the equiluminant condition), we defined 16 target hues equally spaced in CIELUV color space (values provided in Table 1). For the main condition (Fig. 3), each pixel value of the original achromatic image was remapped to the most saturated target color of the same luminance value within the monitor gamut. For the equiluminant condition, each pixel value of the original image was remapped to a pixel on the equiluminant plane, of the same hue as the target but with a saturation determined by the pixel luminance. The determination of equiluminance was Judd-Vos corrected for the underestimation of the contribution of S-cones to the standard luminosity function (Vos, 1978).
Note that one way of making false-colored images that has been used in some studies involves the digital equivalent of superimposing a color filter over a black and white picture. In these images, the white of the original image is replaced with a relatively saturated color. Although easy to generate, these false-colored images are luminance compressed compared with the original image: the black in the colored version remains the same luminance as the original, but the white is now of lower luminance than the white in the original. Moreover, it is possible that the estimation of luminance for a given color is inaccurate (for discussion, see Conway, 2009). Such inaccuracies would introduce variability in the luminance contrast among the different colored images generated using the color-filter method. For example, when an achromatic image is falsely colored by applying a color filter, such that the white in the original image is replaced with a given equiluminant color, the resulting set of differently colored, photometrically equiluminant versions of the image may have luminance-contrast ranges that systematically vary by hue: the red, yellow, and green images may have a higher luminance range than the blue and purple images, because the contribution of S-cones to the luminosity function is underestimated. Thus, despite being ostensibly equiluminant, the brightest blue in the blue image may be of lower luminance than the brightest red in the red image, yet the black will be the same luminance in both images. Variation in the responses to differently colored images created in this way cannot be interpreted as color tuning, because it may reflect variable sensitivity to the range of luminance contrasts in the set of differently colored images. The method used presently mitigates the possible impact of chromatic aberration and variability in macular pigmentation, and gives rise to images that are more naturalistic: the images not only retain the luminance contrast of the original images but also appear differently colored, rather than achromatic but viewed through a colored filter (see Golz and MacLeod, 2002).
Procedure
An experimental session started by targeting a microelectrode to a face patch, recording a single unit (selected online based on waveform), mapping the receptive field by hand, and choosing the visual-field location that gave the highest response to the screening stimuli. We then recorded the neural activity while presenting the battery of screening stimuli. If a cell appeared face selective, we then recorded responses to the colored stimuli, presented in random order. During both the screening and the main experiment, stimuli were presented for 200 ms followed by a 200-ms blank (gray background). The animals were rewarded with juice for maintaining their gaze within ≈1° of the central fixation dot for a specific duration (usually starting at 3 s but decreased during the experiment to adapt to the animal's motivational state). Once a cell was found, recording took place until the animal stopped working, thus yielding a variable number of trials per session, ranging from 369 to 15,355 (interquartile range, IQR = [2945, 6755]). Across all sessions, the number of repetitions for a given stimulus-by-hue combination ranged between 0 and 15. Equiluminant stimuli were presented in a subset of sessions.
Response window
The response of each neuron was quantified within a response window defined using the average response to all stimuli, in 10-ms bins. The baseline firing rate of each cell was defined as the average response from 50 ms before stimulus onset to 10 ms after stimulus onset. The response window for each cell was determined as one continuous time period initiated when, within two consecutive time bins, the neural response increased above 2.5 SDs above the baseline firing rate and terminated either when the response dropped below 2.5 SDs of the baseline firing rate in two consecutive bins or after 200 ms following the start of the response window (the shorter of the two options was used). Cells were only included in the analysis if the response window was initiated between 50 and 250 ms after stimulus onset, and if the neural response was excitatory (i.e., cells showing suppressive responses to stimulus onset were not included).
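The window-detection procedure just described can be summarized in code; the following Python sketch is our reading of it, and it assumes (as the text leaves implicit) that the same 2.5-SD criterion marks both the start and the end of the window.

import numpy as np

def response_window(psth, edges_ms):
    """Locate a cell's excitatory response window from its mean PSTH.

    psth     : mean firing rate in 10-ms bins, averaged over all stimuli
    edges_ms : left edge of each bin, in ms relative to stimulus onset
    Returns (start_ms, end_ms), or None if the cell fails the inclusion
    criteria (no excitatory response starting between 50 and 250 ms).
    """
    baseline = psth[(edges_ms >= -50) & (edges_ms < 10)]
    threshold = baseline.mean() + 2.5 * baseline.std()
    above = psth > threshold

    # The window starts at the first pair of consecutive supra-threshold bins.
    start = next((i for i in range(len(psth) - 1)
                  if edges_ms[i] >= 0 and above[i] and above[i + 1]), None)
    if start is None or not (50 <= edges_ms[start] <= 250):
        return None

    # It ends at two consecutive sub-threshold bins, or 200 ms after the
    # window start, whichever comes first.
    end_ms = edges_ms[start] + 200
    for j in range(start + 1, len(psth) - 1):
        if not above[j] and not above[j + 1]:
            end_ms = min(end_ms, edges_ms[j])
            break
    return edges_ms[start], end_ms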
Face selectivity
The present analysis focused on the color tuning of face-selective neurons. Face selectivity was assessed using the following index:

FSI = (R_faces − R_nonfaces) / (R_faces + R_nonfaces),

where R is the average response to a stimulus category, computed as the difference between the firing rate during the response window and the firing rate during background. FSI values range from −1 to +1, with values above 0 indicating a higher response to faces compared with bodies and fruits. All analyses, with one exception, were restricted to neurons that showed an FSI ≥ 1/3, corresponding to a response to faces at least twice that of the response to other non-face stimuli. The exception was the last analysis (see Fig. 12), in which we examined the relationship between hue preference and face selectivity. For that last analysis we included an additional 61 cells (ML: 49 and AL: 12) that were recorded by targeting the same face patches but had an FSI below 1/3. The total number of recorded cells was thus 234 (74% of the targeted cells had an FSI of at least 1/3).
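The ratio form of the index written above follows from the stated −1 to +1 range and the correspondence between FSI = 1/3 and a 2-fold face preference; a minimal check in Python, with our own function name:

def face_selectivity_index(r_faces, r_nonfaces):
    """FSI = (Rf - Rn) / (Rf + Rn), where R is the background-subtracted
    mean response to faces (Rf) or to bodies and fruits (Rn)."""
    return (r_faces - r_nonfaces) / (r_faces + r_nonfaces)

# A response to faces twice that to non-faces gives FSI = 1/3:
assert abs(face_selectivity_index(20.0, 10.0) - 1 / 3) < 1e-9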
Color tuning
To ensure that the results are not tainted by any cells that were not face selective, we focus on the responses to color of the 173 cells that showed an FSI ≥ 1/3. An analysis of the color responses of the entire population of recorded neurons, which included some cells with low FSI, is shown in Figure 12. In the color-tuning analyses we pooled responses across the different face stimuli for each hue. Across all cells, pooling over stimuli, the number of trials per hue ranged from 10 to 634 (IQR = [119, 277]).
Significance
We determined for each cell whether there were significant variations in net firing rate across the 16 hues by computing the coefficient of variation (the ratio of the SD across hues to the mean) of the data recorded for the neuron and comparing it with the distribution of coefficients of variation obtained from 1000 permutations of the hue labels. We considered color modulation to be significant when the p value was below α = 0.05.
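A minimal Python sketch of this permutation test follows; the inputs are assumed to be NumPy arrays of per-trial net rates and hue labels, every hue is assumed to have at least one trial, and the one-sided p-value convention is our choice.

import numpy as np

def color_modulation_p(trial_rates, trial_hues, n_perm=1000, seed=0):
    """Permutation test for hue modulation of one cell.

    trial_rates : net firing rate on each trial (NumPy array)
    trial_hues  : hue label 0..15 on each trial (NumPy array)
    The test statistic is the coefficient of variation of the 16 per-hue
    mean rates (assumes a positive mean net response).
    """
    rng = np.random.default_rng(seed)

    def cv(hues):
        means = np.array([trial_rates[hues == h].mean() for h in range(16)])
        return means.std() / means.mean()

    observed = cv(trial_hues)
    null = np.array([cv(rng.permutation(trial_hues)) for _ in range(n_perm)])
    return (null >= observed).mean()  # one-sided p; significant if < 0.05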
Description: vector sum
Color responses were also quantified by determining the vector sum of the color response. This analysis is enabled because hues are circularly distributed: we can consider the neuron's response to a hue as a vector whose direction is the hue angle and whose norm is the strength of the response within the response window compared with baseline. We normalized the vector norms among hues so that the total sum over hues was one. Equation 1 describes the normalized vector computed for each hue, and Equation 2 sums these vectors:

v_i = (R_i / Σ_j R_j) (cos θ_i, sin θ_i),   (Equation 1)

ṽ = Σ_i v_i,   (Equation 2)

where R_i is the net response to hue i and θ_i is its hue angle. The preferred hue direction of the cell is the angle of ṽ. The strength of the hue preference is estimated by the norm of ṽ, which can take values ranging from 0 (no hue preference) to 1. The norm of the vector sum therefore reflects the narrowness of the color tuning.
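In code, the vector-sum summary amounts to a normalized circular mean; a minimal Python sketch of Equations 1 and 2:

import numpy as np

def hue_vector_sum(responses, hue_angles_deg):
    """Vector-sum summary of a color-tuning curve.

    responses      : net firing rate for each hue (assumed non-negative)
    hue_angles_deg : the corresponding CIELUV hue angles
    Returns (preferred_hue_deg, tuning_strength), with strength in [0, 1].
    """
    w = responses / responses.sum()              # norms sum to 1 (Eq. 1)
    th = np.deg2rad(hue_angles_deg)
    v = np.array([(w * np.cos(th)).sum(),        # vector sum (Eq. 2)
                  (w * np.sin(th)).sum()])
    preferred = np.degrees(np.arctan2(v[1], v[0])) % 360
    return preferred, np.linalg.norm(v)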
Description: Fourier analysis
Color responses can be analyzed using Fourier analysis (Krauskopf et al., 1982; Stoughton et al., 2012), which identifies the set of sine waves (frequency, phase angle, and amplitude) that captures the shape of the color-tuning function. We extracted for each cell the normalized amplitude and phase angle of the first eight harmonics; most of the power was captured by the first two harmonics. The first harmonic has a single peak when plotted in polar coordinates of color space (i.e., a vector pointing to one color); the second harmonic identifies an axis in these coordinates (i.e., the poles of the axis identify an opponent color pair). Confidence intervals of the mean were obtained by resampling cells with replacement and computing the angular mean 1000 times (for the second harmonic, for all cells we used the peak between 0° and 180°).
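A minimal Python sketch of the harmonic decomposition follows; with 16 equally spaced hues, the real FFT yields exactly eight non-DC harmonics, and the normalization shown (dividing each amplitude by the summed amplitudes) is one reasonable choice, not necessarily the authors' exact scheme.

import numpy as np

def tuning_harmonics(responses, n_harmonics=8):
    """Amplitude and phase of the first harmonics of a tuning curve
    sampled at 16 equally spaced hues."""
    F = np.fft.rfft(responses)                # components 0..8
    amp = np.abs(F[1:n_harmonics + 1])
    amp = amp / amp.sum()                     # normalized amplitudes
    phase = np.angle(F[1:n_harmonics + 1])    # phase angles (radians)
    return amp, phase

# A cell preferring a single color is dominated by the first harmonic;
# a cell preferring a color axis is dominated by the second.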
Correlation between fMRI and electrophysiology
We correlated the single-cell activity for each of the 16 hues, for all single cells of all three monkeys and separately for each face patch, with the average percent signal change for these 16 hues obtained by interpolating from the signal change for the 12 hues used in the fMRI experiment [face patches were identified over the two hemispheres of M1 and M2 of the current study; details of the stimuli and region of interest (ROI) definition can be found in Rosenthal et al., 2018]. Note that to make the BOLD response and the neurons' firing rates more comparable, we averaged the net firing rate over the entire window from stimulus onset to the onset of the next stimulus (400 ms). The firing rate is thus lower than when selecting a response window tailored to each cell.
Population information
To compute the Fisher information of the population of face-selective cells, we fitted each cell's net average firing-rate response to the 16 hues with a von Mises function of the form r(θ) = a + b exp[k cos(θ − θ_pref)], with k > 0, a > 0, and b ≥ 0 (median mean squared error across all cells of 0.38 spikes/s). The population Fisher information is the sum of the Fisher information contributed by each cell. Assuming independent Poisson noise, we can derive that I_F(θ) = Σ_n f_n′(θ)² / f_n(θ), where f_n represents the tuning function for cell n.
For visualization, we also present the population information smoothed with a Savitzky-Golay filter (window of 50°, first-order polynomial). We also computed the 95% confidence intervals of the Fisher information using nonparametric bootstrapping with 1000 iterations.
Finally, we performed the same analysis with the original 16 CIELUV hue angles projected along the two chromatic axes: 180-0°, corresponding to greenish to reddish hues, and 270-90°, corresponding to bluish to yellowish hues.
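A minimal Python sketch of the population computation follows; the von Mises parameters for each cell are assumed to come from a standard least-squares fit (e.g., scipy.optimize.curve_fit), and the Poisson-noise expression is the standard one quoted above.

import numpy as np

def von_mises(theta, a, b, k, theta_pref):
    """Tuning function r(theta) = a + b*exp(k*cos(theta - theta_pref))."""
    return a + b * np.exp(k * np.cos(theta - theta_pref))

def population_fisher_information(theta, fits):
    """I_F(theta) = sum_n f_n'(theta)^2 / f_n(theta), assuming independent
    Poisson noise. `fits` is a list of (a, b, k, theta_pref) per cell,
    and `theta` is a float array of hue angles in radians."""
    info = np.zeros(len(theta))
    for a, b, k, th0 in fits:
        f = von_mises(theta, a, b, k, th0)
        # analytic derivative of the von Mises tuning function
        fprime = -b * k * np.sin(theta - th0) * np.exp(k * np.cos(theta - th0))
        info += fprime**2 / f
    return info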
Distribution of natural face color
Xiao et al. (2017) measured the spectra of skin on the cheek, forehead, back of hand, and inner arm of 960 participants of four ethnicities (White, Chinese, Kurdish, and Thai) under a D65 illuminant, and reported the mean and SD of the values for each body part and each ethnicity in CIELAB color space. We averaged the forehead and cheek L*, a*, and b* means and SDs (Table 2 of Xiao et al., 2017). We then computed a weighted average across all ethnicities for both the mean and the SD (using Table 1 of Xiao et al., 2017). Using standard conversion matrices from CIELAB to XYZ and from XYZ to CIELUV, we obtained an estimate of the mean and SD of the distribution of face color in CIELUV hue angle, represented as a von Mises distribution in Figure 11.
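The chain of conversions (CIELAB to XYZ to CIELUV hue angle) can be written out explicitly; the Python sketch below uses the standard CIE formulas and assumes the D65 white point for the 2° observer, matching the D65 illuminant of the measurements, but it is not necessarily the exact matrix pipeline the authors used.

import numpy as np

WHITE_D65 = (95.047, 100.0, 108.883)  # XYZ of D65, 2-degree observer

def lab_to_luv_hue(L, a, b):
    """CIELAB (L*, a*, b*) -> CIELUV hue angle in degrees, under D65."""
    def f_inv(t):  # inverse of the CIELAB compression function
        return t**3 if t > 6 / 29 else 3 * (6 / 29)**2 * (t - 4 / 29)

    fy = (L + 16) / 116
    X = WHITE_D65[0] * f_inv(fy + a / 500)
    Y = WHITE_D65[1] * f_inv(fy)
    Z = WHITE_D65[2] * f_inv(fy - b / 200)

    def uv_prime(X, Y, Z):  # CIE 1976 chromaticity coordinates
        d = X + 15 * Y + 3 * Z
        return 4 * X / d, 9 * Y / d

    up, vp = uv_prime(X, Y, Z)
    upn, vpn = uv_prime(*WHITE_D65)
    u_star = 13 * L * (up - upn)
    v_star = 13 * L * (vp - vpn)
    return np.degrees(np.arctan2(v_star, u_star)) % 360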
Results
Functional magnetic resonance imaging was used to identify regions of IT that were more responsive to faces than to bodies and fruits, a standard contrast used to identify face patches (Tsao et al., 2006; Lafer-Sousa and Conway, 2013). We targeted microelectrodes to the ML face patch in two monkeys (M1 and M2) and to the AL face patch in three monkeys (M1, M2, and M3; Fig. 2A). To screen for face-selective neurons, we measured the responses of each cell to a battery of grayscale images of 14 categories (Fig. 2B). Face-selective neurons, such as the two examples in Figure 2B, were defined as those that showed at least a 2-fold greater response to faces than to bodies and fruits. This selection criterion yielded 102 single units in ML and 71 single units in AL. Face-selective cells in AL had a higher FSI than cells in ML (Mdn ML = 0.60, Mdn AL = 0.92, Mann-Whitney-Wilcoxon rank-sum test U = 1842, p < 0.001; Fig. 2C); cells in AL and ML had similar firing rates (Mdn ML = 6.38, Mdn AL = 6.28, U = 3671, p > 0.88). All face-selective cells showed a significant main effect of face image (repeated-measures ANOVA on a cell-by-cell basis; responses were the average firing rate during the response window for each face image; the analysis was restricted to the 102 neurons that were tested with at least three presentations of each face image; all neurons, p < 0.05). The preferred face was more likely a monkey face (70% of the cells), and more likely a 3/4 view than a front view (64% of the cells). Across all cells, the preferred stimulus triggered a firing rate with a median 2.3 times higher than the across-stimulus average used for all analyses (IQR = [1.7, 3.0]). We next measured responses to color for the face-selective neurons. Color responses were obtained using monochromatic versions of photographs of faces, bodies, and fruits (Fig. 3A); the colored stimuli evenly sampled CIELUV color space (Fig. 3B, top panel). We chose to define the stimuli in CIELUV color space because this space captures the representation of color within the V4 Complex (Bohon et al., 2016), which provides input to IT (Kravitz et al., 2013). The chromaticity of the stimuli can be transformed into cone-opponent "DKL" space, which reflects the cone-opponent cardinal mechanisms evident in the lateral geniculate nucleus (Derrington et al., 1984; Sun et al., 2006; Fig. 3B, bottom panel). Evaluating color responses in DKL space is useful because it has a physiological basis; throughout the paper, the CIE colors corresponding to the poles of the cone-opponent axes (L>M, M>L, S+, and S−) are provided to facilitate this evaluation. To create a given image in a target color, we replaced each pixel in the original gray-scale image with the target hue of the same luminance value as the pixel. Thus, the false-colored images maintained the luminance contrast of the original image. Figure 4 shows the responses of a representative sample of six face-selective neurons to the colored images of faces, bodies, and fruit; cells 1-3 were recorded in face patch ML, and cells 4-6 in face patch AL. The top panels show the average responses to images of faces, bodies, and fruits. As predicted given the screening criterion, responses were always substantially larger to faces than to the other stimulus categories, with FSIs ranging from 0.44 to ≈1. For each cell, we defined a time period for quantification of the responses (Fig. 4A,B, blue bars). We used a single continuous time period for each cell, with the duration tailored to that cell.
The responses of the cells showed complex temporal dynamics. For example, cell #4 showed two peaks (at 105 and 225 ms), and the intervening firing rate dipped back to baseline; in cells such as this one, the time period for quantification included only the initial peak. Using multiple time periods for some cells, such as cell #4, did not change the main conclusions (data not shown). The median latencies are given in Table 2. Cells in ML showed a shorter latency than cells in AL within each monkey. But the variability in latencies for cells in ML or AL across monkeys was greater than the difference in latency between ML and AL within any monkey. Figure 4B, top panels, shows poststimulus time histograms (PSTHs) of the responses to the colored faces, averaging across the four different face categories we used (monkey and human faces, frontal faces and 3/4-view faces; see Fig. 2B). The orientation of the PSTH shows time on the y-axis and image color on the x-axis. Stimulus onset is at 0 s, and darker gray corresponds to a higher firing rate. The responses of the neurons are delayed by a latency reflecting the time for visual signals to be processed by the retina and relayed through the visual-processing hierarchy to IT. The cells in Figure 4 were representative of the population: three of the cells were modulated by the color of the stimulus (cells #1, #3, #6), as reflected in the average response over the response window (black trace below the PSTH; for the significance calculation, see Materials and Methods). Among the population of face-selective neurons, ≈25% were significantly modulated by color (23/102 cells in ML; 21/71 cells in AL). Cells #2, #4, and #5 were not significantly modulated by color. Figure 4B, red traces, shows the best-fitting sine wave following Fourier analysis of the color responses, described below.
Note that the firing rates shown in Figure 4B, bottom panels, are averages over the response window and so are lower than the peak firing rates shown in Figure 4A.
Although many cells were modulated by color, the variance in firing rate caused by changes in color was modest. For example, the firing rate in cell #3 varied between 18 and 22 spikes/s above background, which corresponds to ±10% of the average stimulus-driven response. Across the population, the variation in firing rate attributable to hue was ±24% of the average stimulus-driven response. Approximately 76% of the stimulus-driven response can therefore be attributed to the luminance contrast of the images. Figure 4C plots the cells' responses as a function of color angle; the norm of the vector sum is shown as the bolded black line and varies between 0 and 1 (0 = identical net firing rate for all hues, i.e., no color tuning; values for each cell are shown in black text). To further quantify the results, we determined the best-fitting sine wave of the color-tuning response for each cell (Fig. 4B, red lines). Many cells (71/173) were best fit by the first harmonic (a single cycle), which shows that these cells have a single preferred color (see cells #3, #6; Fig. 4), but some cells were best fit by the second harmonic (21/173), indicative of a preference for a color axis in color space rather than a single color direction (cell #1; Fig. 4). The amplitude and phase angle of the best-fitting harmonic are shown in red in Figure 4C. The color preferences assessed by the norm of the vector sum and by the normalized amplitude of the first harmonic were highly correlated (Pearson r = 0.93, p < 0.001). Among the 44 cells showing significant color modulation, the power of both the first and the second harmonic was higher than the noise level, estimated as the power of the eighth harmonic (Fig. 5, red lines); 37/44 cells showed the highest power in the first harmonic, 5/44 in the second, 1/44 in the third, and 1/44 in the fourth. We found no evidence that the color selectivity of the cells depended on the cells' face preference: for each neuron, we computed the color selectivity (as the normalized amplitude of the first Fourier component) for each face image. We then rank-ordered the face images by descending average firing rate and ran a repeated-measures ANOVA with the ranks as independent variables. There was no significant main effect of rank on color selectivity (p > 0.13). Figure 4 provides evidence that some face-selective cells were sensitive to color. Among the population, was there a consistent color preference? Figure 6 shows the color responses of all the face-selective cells rank-ordered by the significance of the color tuning, with the most significantly color-tuned cells at the top. Each row shows data from one cell. The gray level shows the normalized response to the given color (the sum of the values across colors for a given cell is 1). Many of the most significantly color-tuned neurons in both ML and AL preferred warm colors, as evidenced by the dark regions on the upper left and right of the panels in Figure 6. But there were some exceptions. For example, the cells represented by rows 6 and 7 in the ML panel and rows 1 and 2 in the AL panel of Figure 6 showed a preference for greenish colors. Figure 7 quantifies the color responses of the population of single cells using Fourier analysis. Figure 7A, left panel, shows a polar histogram of the peak color direction for cells with maximum power in the first harmonic; significantly color-tuned cells are shown in dark gray.
These polar histograms confirm the population bias toward the red pole of the 0–180° chromatic axis, corresponding to the L>M pole of the L-M cone-opponent axis. This bias is also evident when analyzing the color direction of the best-fitting first harmonic for all cells in the population (including those that did not have maximum power in the first harmonic; Fig. 7B, left panel). In contrast, cells with maximum power in the second harmonic showed a phase angle biased toward the 90–270° axis, corresponding to modulation along the S-cone axis (Fig. 7A, right panel). This bias was also evident when analyzing the best-fitting second harmonic for all cells in the population (including those that did not have maximum power in the second harmonic; Fig. 7B, right panel). The pattern of results shown in Figure 7 was evident when analyzing data for each animal separately (Fig. 7C).

(Figure 4 legend, continued.) B, Error bars show 95% confidence intervals; the red line shows the best-fitting sine wave, and an asterisk is provided if the color tuning for the neuron was significant (see Methods). C, Polar plot showing normalized responses to all hues; the sum of the responses to the 16 colors is normalized to equal 1. The bold black text states the norm of the vector sum. The red line shows the normalized amplitude and phase of the best-fitting sine wave for neurons whose best fit was a first or second harmonic; the red text states the value of the normalized amplitude of the best-fitting sine. The small black lines on the edges of the circle show the cardinal axes of the cone-opponent color space.
How does color tuning relate to color selectivity? If color tuning reflects a computational operation of the circuit, one might predict that, within the population, more color-selective cells will have more consistent color tuning. Figure 8 quantifies the polar direction of the norm of the vector average (i.e., the peak color preference; y-axis), color selectivity (x-axis), significance of color tuning (symbol gray value), and number of stimulus repeats obtained (symbol size). The data points to the right of the plot converge on 0° (the L>M pole of the L-M axis), consistent with the prediction. Significantly color-tuned neurons, defined by the p < 0.05 threshold, had a mean preferred hue angle that did not differ from that of insignificantly color-tuned neurons (Watson-Williams test, F(1,171) = 0.07, p = 0.80) but had a significantly lower variance (marginal distribution, Wallraff test, χ² = 11.59, p < 0.001; Fig. 8). This effect cannot be attributed to variance in the amount of data collected for different neurons: splitting the population into two groups, above and below the median number of trials collected per cell, yielded a similar variance in the peak color for the two groups (Wallraff test, χ² = 1.10, p = 0.29).
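The Wallraff test used here is a nonparametric test of homogeneity of angular dispersion: each angle is replaced by its absolute deviation from its group's circular mean, and the deviations are compared across groups with a Kruskal-Wallis rank test, whose statistic is χ²-distributed. Below is a minimal sketch of that standard formulation; the example data are synthetic and the function names are our own.

```python
import numpy as np
from scipy.stats import kruskal

def circular_mean(angles_rad):
    """Circular mean direction of a sample of angles (radians)."""
    return np.angle(np.mean(np.exp(1j * np.asarray(angles_rad))))

def wallraff_test(*groups_deg):
    """Wallraff test of homogeneity of angular dispersion.

    Each angle is converted to its absolute angular deviation from the
    group's circular mean; the deviations are then compared across
    groups with a Kruskal-Wallis rank test.
    """
    deviations = []
    for group in groups_deg:
        a = np.radians(np.asarray(group, dtype=float))
        mu = circular_mean(a)
        # smallest angular distance to the group mean, in [0, pi]
        deviations.append(np.abs(np.angle(np.exp(1j * (a - mu)))))
    return kruskal(*deviations)

# Synthetic example: 44 "tuned" cells clustered near 0 deg versus
# 129 cells with scattered preferred hues.
rng = np.random.default_rng(0)
tuned = rng.normal(0, 15, 44)            # degrees
untuned = rng.uniform(-180, 180, 129)
print(wallraff_test(tuned, untuned))
```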
The face-selective neurons responded strongly to all the colored stimuli, even those of suboptimal color (see Fig. 4). We attribute the strong color-independent responses to the luminance contrast of the stimuli (regardless of the color, the stimuli preserved the luminance contrast of the original images). We can directly dissect the roles of color and luminance contrast in the cell responses by using equiluminant colored stimuli (Fig. 9A). These stimuli were created by replacing the range of gray values in the original images with colors of a constant gray value but different saturation: higher luminance gray values were replaced with more saturated color. Responses to these equiluminant stimuli were substantially lower than responses to the colored stimuli that preserved luminance contrast (Mdn Iso = 0.99, Mdn Main = 7.64, Wilcoxon rank-sum test U = 4643, p < 0.001; Fig. 9B). These results show that pure color is not sufficient to strongly drive face-selective cells.

Because there is no accepted metric for relating color contrast and luminance contrast (Shevell and Kingdom, 2008), it is often difficult to compare responses to equiluminant stimuli with responses to luminance-contrast stimuli. In the present study, this difficulty is mitigated for several reasons. First, the maximum color contrast of the equiluminant stimuli was the highest that the gamut of the display could produce. If color were a sufficient drive of the neural activity of face-selective neurons, the equiluminant stimuli we used should elicit strong responses. Second, using stimuli of comparable color and luminance contrast, other neurons in the visual system show clear preferences for the color stimuli (in V1: Conway, 2001; Johnson et al., 2004; Horwitz and Hass, 2012; in V4: Conway et al., 2007; Bohon et al., 2016; in IT: Komatsu and Ideura, 1993; Lafer-Sousa and Conway, 2013), confirming that these stimuli are capable of eliciting strong responses when neurons are responsive to color.

We previously measured the color responses across IT using fMRI (Lafer-Sousa and Conway, 2013; Rosenthal et al., 2018). To directly compare the results of the fMRI with the cell data, we quantified the neural responses within a 400-ms time window starting at the stimulus onset. This time window encompasses the 200-ms duration of the stimulus and the 200-ms interstimulus gray period. Figure 10A shows the average response for the population, in ML (solid line) and AL (dotted line). The plot shows significant differences among the responses to the colors (non-parametric Friedman test, χ² = 210.69, p < 0.001), and the responses in ML are highly correlated with those in AL (Pearson r = 0.89, p < 0.001). Figure 10B shows the average response across all face-selective cells to face images in each of the 16 colors. This plot underscores two main conclusions. First, responses to all colored images were strong, which we attribute to the fact that all the colored exemplars preserved the luminance contrast of the original achromatic images; luminance contrast is a main determinant of face-selective responses (Ohayon et al., 2012).

The data presented above quantify the color-tuning properties of face cells. If the color responses of face-selective cells reflect a role these cells play in discriminating face colors, we predicted that the Fisher information of the population would correspond to the distribution of face colors.
The color statistics of face skin are available in a large database of calibrated measurements derived from multiple ethnicities (Xiao et al., 2017). We assume that the color statistics of bare macaque face skin show a comparable bias to that found across humans (and that neural measurements in macaque monkeys extend to the human case). Figure 11C shows the Fisher information computed for the neural data as a function of hue angle, using a von Mises function to describe the cells' responses (Fig. 11A). Superimposed on the panels is the distribution of face colors (dashed curves). The peak of the distribution of face colors does not correspond to maxima in the Fisher information; on the contrary, the likely colors of faces correspond to a dip in the Fisher information, which implies that the population is poor at discriminating the colors of faces. We performed the same analysis projecting the original data along the 0–180° axis in CIELUV space, which approximates the L-M axis (Fig. 11B,C, middle panels), to evaluate whether face cells contain more information about the color component of faces that is most relevant for dynamic social signaling (the red pole of the green-red axis; Hasantash et al., 2019). In this analysis, the Fisher information peaks for reddish colors, consistent with the idea that face cells are color-tuned in a way that can contribute to the discrimination of L>M values. For comparison, Figure 11B,C, right panels, shows the analysis for data along the vertical axis in CIE color space, which approximates the S-cone axis. Face-selective cells did not show selective tuning along the S-cone axis; the Fisher information analysis implies that the cells do not carry as much information along the S-cone axis as they do along the L-M axis.
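As a concrete illustration of this information analysis, here is a minimal sketch (our own code, not the authors') that computes population Fisher information for von Mises tuning curves under the common simplifying assumption of independent Poisson variability, where each cell contributes f′(θ)²/f(θ). With a toy population whose preferred hues cluster near one color direction, the Fisher information dips at the shared preferred hue and peaks on the flanks of the tuning curves, reproducing the qualitative effect described above.

```python
import numpy as np

def von_mises_rate(theta, baseline, amp, kappa, pref):
    """Von Mises tuning curve: firing rate as a function of hue angle."""
    return baseline + amp * np.exp(kappa * (np.cos(theta - pref) - 1))

def population_fisher_information(theta, params):
    """Fisher information of a population of independent Poisson neurons.

    For Poisson spiking, each cell contributes f'(theta)^2 / f(theta);
    the population information is the sum over cells.
    params: iterable of (baseline, amp, kappa, pref) tuples.
    """
    fi = np.zeros_like(theta)
    for baseline, amp, kappa, pref in params:
        f = von_mises_rate(theta, baseline, amp, kappa, pref)
        # analytic derivative of the von Mises rate w.r.t. theta
        df = -amp * kappa * np.sin(theta - pref) * \
             np.exp(kappa * (np.cos(theta - pref) - 1))
        fi += df ** 2 / np.maximum(f, 1e-9)
    return fi

theta = np.linspace(-np.pi, np.pi, 361)
# Toy population biased toward one pole (preferred hues near 0 rad).
rng = np.random.default_rng(1)
params = [(5.0, 10.0, 2.0, p) for p in rng.normal(0, 0.4, 50)]
fi = population_fisher_information(theta, params)
print("FI peaks at", np.degrees(theta[np.argmax(fi)]), "deg")
```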
Discussion
The population of face-selective neurons showed broad color tuning with a bias for reddish colors. The Fisher information, which reflects how well the neural population can discriminate among colors, shows a dip which coincides with the peak in the distribution of face colors measured across human ethnicities (Xiao et al., 2017). This pattern of results implies that face-selective cells in macaque IT are not optimally tuned to discriminate the colors of human faces. It is conceivable that face-selective neurons in macaque IT are optimally tuned to discriminate macaque face colors, although this would require a substantial difference in the colors of macaque faces compared with human faces, which is unlikely since the primary determinants of face coloring (oxygenated hemoglobin and melanin) are the same in both species. It is also conceivable that the color responses of the neurons may be stronger if color was manipulated in spatial patterns on the face to reflect the spatial distribution of natural color changes over the face. Thus, the color responses we report provide a lower bound. The color tuning curves were measured using images that preserve the normal luminance contrast relationships of face photographs. In a second series of experiments, we found that face-selective cells were not very responsive to pure color (equiluminant) images of faces, which underscores the importance of luminance contrast for face selectivity (Ohayon et al., 2012), and provides single-unit evidence supporting the fMRI observation that face patches respond more strongly to luminance contrast compared with equiluminant color (Lafer-Sousa and Conway, 2013). Taken together with prior work, the research supports the idea that color-specific information related to the discrimination of face colors is likely handled by neural circuits that are independent of face patches. This interpretation is consistent with the multistage parallel processing framework of IT, in which face-biased regions are largely non-overlapping with color-biased regions (Conway, 2018).
We related the neurophysiology results to behavior using an information framework. Optimal neural coding suggests that there should be a good match between neural tuning and the statistics of those parts of the environment that are relevant (Simoncelli and Olshausen, 2001; Ganguli and Simoncelli, 2010). Faces occupy a distinct gamut in color space (Crichton et al., 2012; Chauhan et al., 2015; Xiao et al., 2017). If face-selective cells participate in discriminating among the colors of faces, the Fisher information of the population of neural responses should correspond to the distribution of face colors. The neurophysiological results refute this prediction. Most of the significantly color-tuned face-selective neurons were best described as having broad tuning, with a single peak in the color-tuning function. On average, the color-tuning peaks across cells were at warm (L>M) colors (Fig. 7), corresponding to the typical colors of faces. The Fisher information curve is bimodal, and the color-discrimination potential of face-selective neurons is therefore worse for face colors compared with greens and purples (Fig. 11, peaks at 84° and –66° hue angle).
These results suggest that some other population of neurons is responsible for discriminating the colors of faces. Functional MRI response patterns in both macaque monkeys and humans show a multistage organizational scheme governed by a repeated eccentricity template, in which color-biased tissue is sandwiched between face-selective tissue (foveally biased) and place-selective tissue (peripherally biased) in parallel streams along the length of the ventral visual pathway (Lafer-Sousa and Conway, 2013; Lafer-Sousa et al., 2016; Conway, 2018). This organization provides the possibility that color-specific information about objects, including faces, could be extracted by neural circuits besides the face patches. But we note that within the face-selective population we studied, some cells were color-tuned with peak tuning away from reddish colors; these cells were, curiously, the most color-selective in the population (Fig. 6, top rows). These neurons may represent a distinct category of face-selective cells that could, conceivably, be optimally tuned to discriminate the colors of faces.

Figure 10. Comparison of color tuning measured using microelectrode recording of single cells in face patches and fMRI. A, Average above-background firing rate computed over a 400-ms time window that begins with the stimulus onset (peak responses away from 0 can be accounted for by summing the first two harmonics of the response). B, Average above-background response for all face-selective cells (N = 173) to face images in 16 colors (the color of the traces corresponds to the colors of the images; see Fig. 3). C, Correlation between average response across the population of single units and fMRI color tuning assessed in the face patches of monkeys M1 and M2 (see Materials and Methods).
What role, if any, does the color tuning of face cells play in visual processing? We consider three possibilities. First, the information content with regard to color was not zero, so the cells could discriminate face colors, but non-optimally. Second, the color component of faces to which humans are most sensitive, regardless of race, is the aspect that varies in response to changes in emotion and health, which is encoded selectively along the L-M chromatic axis (Hasantash et al., 2019). Could the color tuning of face-selective neurons optimally discriminate just this component? Consistent with this possibility, we found that the average tuning and the Fisher information increased for stimuli with larger L>M values. The pattern of results is similar to the ramp-tuning functions of face-selective neurons for other stimulus features (Freiwald et al., 2009). Thus, the selectivity we observed implies that the extent of L-M color contrast in a face is a relevant feature encoded by face-selective neurons. Finally, we wonder whether the color tuning may serve to enhance the face-discrimination computations of the neurons. On average, faces have warmer coloring than backgrounds (Rosenthal et al., 2018). It is plausible that the color responses would increase the firing rates of face-selective neurons when a face is encountered. Such modulation would presumably promote the role of these neurons in face recognition. According to this interpretation, the modulation by color of face-selective cells is analogous to the modulation of neural activity manifest when a subject engages in an attentional task (Wurtz and Mohler, 1976; Treue, 2001; Maunsell, 2015).

Figure 11. Analysis of the information represented in the population. A, Parameters of the von Mises fits over the 173 face-selective cells used to compute the population information. B, Average net firing rate across the population. C, Population Fisher information (thin line), smoothed Fisher information (thick line), and 95% confidence intervals. In both panels, the dashed line represents the distribution of natural face skin color, and the y-axis limits for Fisher information are kept constant across all three analyses. The left column corresponds to the analysis performed in CIELUV space, the middle one to the analysis projected along the greener-to-redder chromatic axis (positive values indicating redder), and the right one to the analysis projected along the bluer-to-yellower chromatic axis (positive values indicating yellower).
Is the color tuning we describe specific for faces? Quantitative analysis of the color statistics of those parts of scenes that we label, shows that objects tend to be systematically biased compared with backgrounds: objects, not just faces, tend to be distinguished from backgrounds along the u' direction of color space, which corresponds, roughly, to warm coloring (Gibson et al., 2017;Conway et al., 2020). Tuning to warm coloring may therefore facilitate the computations of many cells in IT, not just face-selective neurons. This hypothesis is supported by fMRI maps of the color tuning across IT, which show a band running along the posterior-anterior axis that is more strongly modulated by the likely colors of objects (Conway, 2018;Rosenthal et al., 2018). This band is centered on the face patches but is not restricted to them. Consistent with the fMRI results, we found that the non-face-selective cells, often found on the margins or just outside of face patches, also showed a weak bias for warm colors (Fig. 12). Moreover, others have reported that the optimal stimuli for IT cells are often of warm coloring (Ponce et al., 2019). We speculate that the modulation by color is likely not a specific property of face cells but may reflect a feature of IT that facilitates the computation by IT of object recognition generally.
Although most cells had a single peak in the color-tuning function, some face-selective neurons were best fit by two peaks, with maximum power to the second harmonic in the Fourier analysis (see example cell #1; Fig. 4). The color selectivity of this subset of neurons, assessed as the phase angle of the second harmonic, was aligned with the S-cone axis in color space (Fig. 7). Is the tuning to the second harmonic meaningful? One possible concern could have been that these responses reflect a luminance artifact: estimates of equiluminance may not be accurate, especially for colors that modulate S-cones (Vos, 1978). We avoided this potential pitfall in the present work because the stimuli preserved the luminance contrast of the original images: the blackest and whitest regions in the colored versions of each image were the same as in the original image. Any luminance artifacts attributed to vagaries in the determination of equiluminance would be masked by preservation of the luminance contrast of the original image.
The response bias to colors along the S axis was surprising; we think the results provide the first measurements of color tuning biases within extrastriate cortex that reflect the cardinal mechanisms. The cardinal mechanisms correspond to the color tuning of the cone-opponent cells that represent the first postreceptoral stage of color encoding and are reflected in the anatomy and physiology of the lateral geniculate nucleus (Derrington et al., 1984;Martin et al., 2001;Sun et al., 2006;Roy et al., 2009). The cardinal mechanisms are evident in behavioral work that is thought to isolate these subcortical contributions to color vision (Krauskopf et al., 1982;Eskew, 2009). The observation that cortical cells reflect the cardinal mechanisms is surprising because the distinct chromatic signatures associated with the cardinal mechanisms diffuse near the input layers to primary visual cortex (Tailby et al., 2008), and the organization of color undergoes progressively more uniform representation of color space through the visual-processing hierarchy (Bohon et al., 2016;Liu et al., 2020). The present results show that chromatic signatures corresponding to the cardinal mechanisms reemerge in extrastriate cortical circuits far along the putative visual-processing hierarchy, and they raise the possibility that the behavioral results reflecting the cardinal mechanisms may derive from responses not only of subcortical circuits, but also of extrastriate circuits. | 2021-01-24T06:16:14.284Z | 2021-01-14T00:00:00.000 | {
"year": 2021,
"sha1": "7ec52fc5cec7f3cc14eeef48254d33bdfa459637",
"oa_license": "CCBY",
"oa_url": "https://www.eneuro.org/content/eneuro/8/2/ENEURO.0395-20.2020.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "923e4d47599ea8bbf42742cfd2f71e791275eb67",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26046889 | pes2o/s2orc | v3-fos-license | Impact of post-dialysis calcium level on ex vivo rat aortic wall calcification
Objectives Vascular calcification is a frequent complication in chronic haemodialysis patients and is associated with adverse outcomes. Serum calcium and phosphate levels and imbalances in calcification regulators are thought to contribute to the process. In this regard, the dialysate calcium concentration is a modifiable tool for modulating the risk of vascular calcification. We explored pre- and post-dialysis phosphate and calcium concentrations in stable chronic haemodialysis patients treated by dialysis with the KDIGO-suggested 1.5 mmol/L calcium dialysate to investigate the effects on ex vivo calcification of rat aortic rings. Approach and results At the end of haemodialysis, mean serum calcium levels were increased in 88% of paired pre-/post-dialysis samples, while mean serum phosphate and parathyroid hormone levels were decreased. Rat aortic ring cultures grown at the same calcium and phosphate concentrations revealed that pre- and post-dialysis resulted in a similar degree of calcification. By contrast, haemodialysis with unchanged serum calcium resulted in a 5-fold reduction in calcium deposition. Conclusion Dialysis with the widely prescribed 1.5 mmol/L calcium dose results in persistent high serum calcification potential in a sizable proportion of patients, driven by increased post-dialysis calcium concentration. This could potentially be mitigated by individualising dialysate calcium dosage based on pre-dialysis serum calcium levels.
Introduction
Ectopic calcification of blood vessels, also known as vascular calcification, is among the most common complications in patients receiving chronic haemodialysis, and is associated with cardiovascular events and all-cause mortality [1,2]. Elevated serum phosphate is a predictable feature of chronic haemodialysis patients in the absence of dietary phosphate restrictions or supplemental phosphate binders, and contributes to the substantial morbidity and mortality rates in this population [2-4]. Accumulation of calcium-phosphate crystals in the aortic wall is the main hallmark of vascular calcification. Incremental increases in the phosphate concentration increase calcium-phosphate crystal formation [5] by shifting the H₂PO₄⁻/HPO₄²⁻ equilibrium to the right [6], which promotes the association of HPO₄²⁻ with calcium to form brushite (CaHPO₄·2H₂O), octacalcium phosphate (Ca₈H₂(PO₄)₆·5H₂O) and hydroxyapatite (Ca₁₀(PO₄)₆(OH)₂), the main crystal forms in ectopic calcification and bone. The calcium-phosphate product (Ca×Pi) is also associated with an increased risk of vascular calcification; however, the mortality risk associated with Ca×Pi levels is similar to that of phosphate alone [3,4], and this product is not a determinant of vascular calcification [5,7]. The calcium concentration is a greater determinant of calcium-phosphate crystal formation than the phosphate concentration [5]. For example, calcification may not be induced by an elevated phosphate concentration when the calcium concentration is low [5]. Therefore, even if phosphate is eliminated during dialysis, calcification can still occur provided that the calcium concentration is increased.
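The position of the H₂PO₄⁻/HPO₄²⁻ equilibrium referred to here follows the Henderson-Hasselbalch relation. A minimal sketch, assuming the textbook second dissociation constant of phosphoric acid (pKa₂ ≈ 7.2), shows how the HPO₄²⁻ fraction, the species that associates with calcium to form brushite, grows as pH rises from acidosis toward alkalosis:

```python
def hpo4_fraction(pH, pKa2=7.2):
    """Fraction of inorganic phosphate present as HPO4^2- (vs H2PO4^-).

    Henderson-Hasselbalch: pH = pKa2 + log10([HPO4^2-]/[H2PO4^-]).
    Assumes only the second dissociation matters near physiological pH;
    pKa2 = 7.2 is a textbook value, not taken from this study.
    """
    ratio = 10.0 ** (pH - pKa2)          # [HPO4^2-] / [H2PO4^-]
    return ratio / (1.0 + ratio)

for pH in (7.0, 7.4, 7.6):               # acidosis -> alkalosis
    print(f"pH {pH}: {100 * hpo4_fraction(pH):.0f}% HPO4^2-")
```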
Since the dialysate calcium concentration is a modifiable factor that determines whether serum calcium levels increase during haemodialysis, in the present study we analysed the effects of calcium and phosphate concentrations during haemodialysis on the deposition of calcium-phosphate crystals in the rat aortic wall.
Pre- and post-haemodialysis blood samples were collected in heparin-containing tubes and immediately centrifuged at 4°C for 5 min at 5000 rpm. Plasma samples were frozen in liquid nitrogen and stored at -80°C until further use. In 24 patients, phosphate levels were measured
Aorta isolation and calcification assay
Rats were euthanised by carbon dioxide inhalation, and aortas were perfused with saline, removed as previously described [9,10] and frozen immediately. Four or five rings were obtained from each aorta (n = 24) and cultured ex vivo (37°C, in a humidified atmosphere of 5% CO₂) in 6-well culture plates using Minimum Essential Medium (MEM; Gibco, Paisley, UK) containing 1 mM L-glutamine, 100 IU/mL penicillin, 100 μg/mL streptomycin and the indicated calcium and phosphate concentrations, as previously described [5]. The medium was replaced every 2 days and contained ⁴⁵Ca (Perkin Elmer, Boston, USA) as a radiotracer. After 6 days of incubation, aortic rings were dried and weighed, and radioactivity was measured using liquid scintillation (Ultima Gold, 6013329, Perkin Elmer) and a liquid scintillation counter (Tri-Carb 2900TR, Perkin Elmer).
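As a sketch of how ⁴⁵Ca counts translate into the calcium-deposition values reported below (μmol of calcium per g of dry tissue), the specific activity of the culture medium links tracer counts to total calcium; all parameter names and numbers here are illustrative, not taken from the study:

```python
def calcium_deposition_umol_per_g(sample_cpm, background_cpm,
                                  medium_cpm_per_umol_ca, dry_weight_g):
    """Convert 45Ca scintillation counts to deposited calcium.

    The specific activity of the culture medium (counts per umol of
    total calcium) links tracer counts in a dried aortic ring to the
    total calcium it accumulated, assuming the same counting
    efficiency for sample and medium. All values are hypothetical.
    """
    net_cpm = sample_cpm - background_cpm
    return net_cpm / medium_cpm_per_umol_ca / dry_weight_g

print(calcium_deposition_umol_per_g(
    sample_cpm=52_000, background_cpm=120,
    medium_cpm_per_umol_ca=18.0, dry_weight_g=0.002))
```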
Results
Finally, aortic rings were cultured in medium containing the same post-haemodialysis Ca and Pi concentrations (2.4 mmol/L and 1.5 mmol/L, respectively) or pre-haemodialysis Ca and Pi concentrations (2.1 mmol/L and 2.5 mmol/L, respectively) as those measured above; the results are shown in Fig 3 and S2 Table. No significant differences in ex vivo calcium deposition on the aortic wall (1654.5 ± 485.2 μmol/g and 1472.9 ± 472.1 μmol/g, respectively; n = 15) were observed between the post- and pre-dialysis Ca and Pi concentrations. By contrast, the amount of calcium deposited in the aortic wall was significantly lower (900.6 ± 254.5 μmol/g; p < 0.01) when the calcium concentration was 2.1 mmol/L (as would be expected in post-haemodialysis samples if the calcium level did not change during dialysis) and the phosphate concentration was 1.5 mmol/L (as observed in post-haemodialysis samples).
In addition, similar results were obtained when the ionized calcium concentrations measured above (1.1 mmol/L and 1.3 mmol/L pre- and post-haemodialysis, respectively) were used. In this case, a significant increase (p < 0.05; n = 16) in calcium deposited on the aortic wall was observed under post-haemodialysis conditions (345.8 ± 72.3 μmol/g; 1.3 mmol/L calcium and 1.5 mmol/L phosphate) compared with pre-dialysis conditions (270.5 ± 66.4 μmol/g; 1.1 mmol/L calcium and 2.5 mmol/L phosphate). By contrast, the amount of calcium deposited in the aortic wall was significantly lower (83.9 ± 16.1 μmol/g; p < 0.001) when the calcium concentration was 1.1 mmol/L and the phosphate concentration was 1.5 mmol/L (Fig 3).
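The three-condition comparisons above can be reproduced in outline with a one-way ANOVA followed by Tukey's multiple comparison test, the analysis named in the legend of Fig 3. The sketch below uses synthetic data seeded from the reported means and SDs; it is illustrative only.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-ring calcium deposition (umol/g), 15 rings/condition,
# drawn from the means/SDs reported for the total-calcium experiment.
rng = np.random.default_rng(3)
post_hd = rng.normal(1654, 485, 15)   # post-HD Ca/Pi (2.4 / 1.5 mmol/L)
pre_hd  = rng.normal(1473, 472, 15)   # pre-HD Ca/Pi (2.1 / 2.5 mmol/L)
fixed   = rng.normal( 901, 255, 15)   # unchanged Ca  (2.1 / 1.5 mmol/L)

print(f_oneway(post_hd, pre_hd, fixed))          # overall ANOVA
values = np.concatenate([post_hd, pre_hd, fixed])
groups = ["postHD"] * 15 + ["preHD"] * 15 + ["fixedCa"] * 15
print(pairwise_tukeyhsd(values, groups))         # pairwise Tukey HSD
```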
Discussion
In biological systems, phosphorus is found in hard tissues (mainly bone, 85%), soft tissues (14%) and extracellular fluid (1%). Of the 1% present in extracellular fluid, 85% exists as the free inorganic phosphate ion (Pi) in solution. The normal range for plasma inorganic phosphate is 0.8-1.5 mmol/L. An elevated phosphate level is a risk factor for vascular calcification, cardiovascular disease and mortality [3,4]; therefore, phosphate is considered a uremic toxin that needs to be eliminated during haemodialysis. In this study, we observed a reduction in the plasma phosphate concentration during haemodialysis, from 2.5 mmol/L (pre-dialysis) to 1.7 mmol/L (post-dialysis). However, ~70% of post-haemodialysis samples remained above 1.5 mmol/L. By contrast, the calcium concentration was higher in post-dialysis plasma in 88% of the paired samples analysed (2.1 vs. 2.4 mmol/L for mean pre- and post-dialysis plasma calcium concentrations, respectively). Under these pre- and post-haemodialysis levels of calcium and phosphate, calcium accumulation in the aortic wall was similar. Thus, a conventional haemodialysis session, using a fixed and non-individualised dialysate calcium dosage within the range suggested by international guidelines [8], did not improve the vascular calcification risk associated with the calcium and phosphate concentrations found in serum. By contrast, a significant reduction in calcium accumulation in the rat aortic wall was observed when mimicking post-haemodialysis conditions associated with neutral post-dialysis calcium levels, i.e., assuming that calcium remained unchanged during the haemodialysis session (2.1 mmol/L calcium and 1.5 mmol/L phosphate). Similar results were also observed when ionized calcium levels were used.
Post-dialysis calcium levels depend on pre-dialysis plasma calcium, dialysate calcium and ultrafiltration. Lower post-dialysis ionised calcium levels are associated with an increase in serum PTH levels, while an increase in post-dialysis ionised calcium results in suppression of PTH, as observed both in the present study and previously [11,12]. While we did not perform detailed ionised calcium balance studies, the observed suppression of serum PTH in patients in whom serum calcium increased during dialysis suggests that the observed increase in serum calcium was physiologically relevant. In this regard, increased calcium levels are associated with the maintenance of a similar procalcific effect ex vivo, despite the reduction in the phosphate concentration, as shown here. Thus, the present findings support the concept of personalisation of dialysate calcium based on individual patient pre-dialysis serum calcium to maintain balanced calcium levels. The current KDIGO CKD-MBD guidelines also emphasise the concept of individualisation of dialysate calcium [8]. Our findings suggest that a dialysate calcium of 1.5 mmol/L results in a persistent post-dialysis risk of vascular calcification in a sizable proportion of patients, and further individualisation would be desirable. Moreover, Ok et al. observed a reduction in the progression of coronary artery calcification when the dialysate calcium level was 1.25 mmol/L, supporting the notion that lowering dialysate calcium would be effective in preventing progression of vascular calcification [13]. However, if the dialysate calcium concentration is too low, this could result in haemodynamic instability and arrhythmia, hence the need for dialysate calcium individualisation.
Among the weaknesses of the present study, the rat aortic ring culture conditions did not model the increase in pH and serum bicarbonate associated with the haemodialysis procedure, which could affect the positive base balance and cause metabolic alkalosis [14,15]. Given that metabolic alkalosis, or even the correction of acidosis, is associated with increased vascular calcification risk under uremic conditions [16-19], our approach may have underestimated the procalcification effect of post-haemodialysis calcium and phosphate concentrations. For example, Villa-Bellosta et al. observed an increase in calcium-phosphate deposition at alkaline pH in an in vitro calcification model, while no calcium-phosphate deposition was observed under acidosis conditions [5]. Moreover, post-dialysis alkalinisation results in increased alkaline phosphatase activity [20], which consequently lowers the availability of pyrophosphate, the main endogenous inhibitor of calcium-phosphate crystal formation and growth. Finally, our experiments did not model the dynamic nature of the calcium concentration during the 48 h between haemodialysis sessions. However, haemodialysis patients may experience repeated (thrice-weekly) serum calcium peaks over many years (decades in countries with low transplantation rates), which, combined with evidence of positive calcium balance in a significant number of patients during haemodialysis [21], supports the clinical relevance of the findings.
In conclusion, the present findings suggest that the frequent current practice of using a one-size-fits-all dialysate calcium concentration may result in the persistence of calcification-prone serum calcium and phosphate concentrations post-dialysis, and support the concept of dialysate calcium concentration individualisation. While the international guidelines already propose the individualisation of the dialysate calcium concentration, the suggested default calcium dialysate value may still be too high and might promote vascular calcification in a sizable proportion of patients [8].
Fig 3. Calcium deposition in the rat aortic wall ex vivo under calcium and phosphate concentrations matching pre- and post-haemodialysis levels in haemodialysis patients. Rat aortic rings were incubated ex vivo in MEM medium containing the indicated concentrations of calcium and phosphate. The medium was replaced every 2 days and contained ⁴⁵Ca as a radiotracer. After 6 days of incubation, aortic rings were dried and radioactivity was measured by liquid scintillation counting. Results are represented as mean ± SD from three independent experiments, with 15 or 16 rings per condition (five or six rings per condition per experiment). One-way ANOVA and Tukey's multiple comparison test were used for statistical analysis. Calcium and phosphate concentrations are in mmol/L. *p < 0.05; ***p < 0.001. Pre-dialysis conditions were used as the reference. PreHD, pre-haemodialysis; PostHD, post-haemodialysis. https://doi.org/10.1371/journal.pone.0183730.g003
Table 1. Demographic and clinical characteristics of the study population. Data are presented as mean and standard deviation (SD) or median (interquartile range).

two independent extractions at each time point. This study was conducted according to the Declaration of Helsinki and approved by the Research Ethics Committee of University Hospital Fundación Jiménez Díaz. Participants were identified by number only. Inclusion criteria stipulated that only consenting stable adults on chronic haemodialysis with a life expectancy of over 6 months were included. There were no exclusion criteria based on compliance, calcium, phosphate, parathyroid hormone (PTH) or vitamin D levels, or concomitant medications.

Spain). The protocol was approved by the Fundación Jiménez Díaz (FIIS-FJD) ethics committee and conformed to Directive 2010/63/EU and Recommendation 2007/526/EC regarding the protection of animals used for experimental and other scientific purposes, enforced in Spanish law under RD 1201/2005.

Results in Figs 1, 2, 3 and Table 1 are presented as mean and standard deviation (SD) or median (interquartile range). The Wilcoxon matched pairs test was used for statistical analysis of the results presented in Figs 1A and 2. The Mann-Whitney test was used in Fig 1C. One-way ANOVA and Tukey's multiple comparison test were used in Fig 3. Statistical significance was determined with GraphPad Prism 5 and was assigned at p < 0.05.
"year": 2017,
"sha1": "42e09122c0b9a22d3730e141c86d79321d099bc8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0183730&type=printable",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "42e09122c0b9a22d3730e141c86d79321d099bc8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1730250 | pes2o/s2orc | v3-fos-license | Excision of Lipodermatosclerotic Tissue: An Effective Treatment for Non-healing Venous Ulcers
In longstanding venous ulcers, the development of lipodermatosclerosis of the skin surrounding the ulcer is common. According to our clinical experience, lipodermatosclerosis impairs the opportunities for the ulcer to heal. In this combined retrospective and prospective study, the lipodermatosclerotic skin area was excised in 7 non-healing venous ulcers and then covered with a split skin graft. All 7 legs had previously been treated with superficial venous surgery. Laser Doppler scanning of the ulcer area was performed pre- and postoperatively. Five of the 7 ulcers healed within 4 months and 1 healed within 9 months. Laser Doppler scanning showed increased blood flow in the lipodermatosclerotic skin area, which was decreased after the operation. This study indicates that excision of the lipodermatosclerotic skin area followed by split skin grafting can accomplish healing in non-healing venous leg ulcers that have failed to respond to previous superficial venous surgery.
Lipodermatosclerosis, found exclusively in connection with venous insufficiency, is characterized by induration of the skin, often associated with hyperpigmentation (1). The cause of this phenomenon is incompletely understood. Disturbances in microcirculation have been found in the lipodermatosclerotic skin area. Incompetence of the venous valves gives rise to venous hypertension in the leg, which is transmitted to the capillaries, causing raised intracapillary pressure. Fagrell (2) used vital capillaroscopy to show that the number of capillaries is reduced, with the remaining capillaries becoming dilated and tortuous. Studies using the laser Doppler probe have shown an increased basal blood flow in the lipodermatosclerotic skin in the supine position, compared with normal skin (3-5). In addition, other authors have shown a reduced capability to dilate the vessels following local ischaemia (6) and in response to local heating (7).
Nelzén et al. (8) and Myers et al. (9) have shown that up to 40% of venous ulcers are caused by superficial venous incompetence alone, with or without incompetence of the perforators and without any deep incompetence. Many of these patients can be treated successfully with venous surgery. Darke & Penfold (10) performed saphenous ligation on patients with venous ulcer and saphenous vein incompetence in combination with perforator incompetence, and a 90% healing rate was achieved. In some instances deep vein reconstructions are possible. Patients who are not suitable for surgical treatment are obliged to undergo lifelong compression therapy. Most venous ulcers heal with local and compression therapy. Some ulcers, however, do not heal in spite of intensive therapy, including superficial venous surgery. In such patients skin grafting is often used as a complement. In our own experience with pinch grafting, venous ulcers have a lower healing rate than leg ulcers of other aetiologies (11). The aim of this study was to elucidate whether excision of the lipodermatosclerotic skin area, and thereby the area with disturbed microcirculation, supplemented with split skin grafting, can achieve healing in therapy-resistant or chronically recurring venous ulcers in patients with isolated deep venous insufficiency.
Patients
Seven legs from 5 patients with venous ulcers and lipodermatosclerosis (4 females, 1 male) attending the Leg Ulcer Clinic at the Department of Dermatology in Malmö were included in the study. All ulcers were therapy-resistant or chronically recurring. The patients had a mean age of 72 (range 52-84) years. Across cases, the longest ulcer diameter ranged from 6 to 120 mm. Five of the ulcers were painful. In 6 patients pinch grafting had previously been tried without permanent healing. All patients had previously undergone superficial venous surgery, and all had isolated deep venous insufficiency (Table I), confirmed with duplex ultrasonography (Acuson 128XP/10, Acuson, Mountain View, CA, USA) at inclusion in the study. All patients had been treated with compression therapy for at least 6 months after superficial vein surgery without healing. All patients had an ankle-brachial index of ≥1 or a palpable pulse in the dorsal pedal artery. The patients had been under treatment for more than 2 years preoperatively, and the presence of ulcers could be registered retrospectively from the patient records.
Laser Doppler scanning
Laser Doppler scanning of the lipodermatosclerotic skin area was performed preoperatively in 6 cases and postoperatively, at least 1 year after the operation, in all 7 cases. All investigations were made in the supine position and at room temperature. The laser Doppler scanner used in this study (Lisca Development AB, Linköping, Sweden) allows measurement of skin blood flow over a 12 × 12 cm area with 4096 measurement points (pixels), and the data are collected without touching the tissue (12). The data are presented as a colour-coded image in 6 colours, from dark blue to red with increasing flow. Using this technique it is possible to investigate the regional distribution of skin blood flow in the set area. Before the examination, metal markings were placed at the margins of the lipodermatosclerotic skin area. The margins of the lipodermatosclerotic area could thus be traced in the laser Doppler scanning picture. The margins of the grafted area were marked postoperatively (Fig. 1). The mean blood flow was assessed preoperatively in the lipodermatosclerotic skin area and postoperatively in the grafted area.
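Conceptually, the mean-flow readout amounts to averaging the scanner's per-pixel flow values inside the traced region. A minimal sketch follows (hypothetical names; the 64 × 64 grid corresponds to the 4096 measurement points over the 12 × 12 cm field):

```python
import numpy as np

def mean_flow_in_roi(scan, roi_mask):
    """Mean laser Doppler flow (arbitrary units) inside a marked region.

    scan     : 64 x 64 array of per-pixel flow values (4096 points
               over a 12 x 12 cm field, as in the scanner used here).
    roi_mask : boolean array of the same shape, True inside the traced
               lipodermatosclerotic (or grafted) area.
    """
    return float(scan[roi_mask].mean())

# Toy example: elevated flow inside a circular "lesion".
yy, xx = np.mgrid[0:64, 0:64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
noise = np.random.default_rng(4).normal(0, 0.1, (64, 64))
scan = np.where(mask, 3.2, 0.9) + noise
print(f"lesion: {mean_flow_in_roi(scan, mask):.2f} V, "
      f"surround: {mean_flow_in_roi(scan, ~mask):.2f} V")
```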
Surgery
In all patients the lipodermatosclerotic skin area was excised down to the muscle fascia. The longest diameter of the excised skin area varied between 80 and 150 mm. A split skin graft was obtained from the proximal thigh. The skin graft was meshed with an expansion rate of 1.5. The recipient area was pre-treated with fibrin glue (Tisseel® Duo Quick, Immuno Sweden AB, Solna, Sweden) to achieve graft adherence, and the margins were fixated with staples. The skin graft was covered with a silicone net dressing (Mepitel®, Mölnlycke AB, Mölnlycke, Sweden). This was protected with loosely applied cotton swabs fixated with a low-elastic bandage. Histological examination was made of the excised material in all patients. Postoperatively the patients were restricted to bed rest for 7-10 days. Thrombosis prophylaxis was given routinely. All patients left hospital with a compression bandage.
Follow-up
Postoperatively, the patients were followed every month until healed and then every sixth month for 2 years.
RESULTS
Five of the 7 ulcers healed within 4 months after the operation and remained healed during the follow-up. One ulcer healed after 9 months and stayed healed during follow-up (Fig. 2). The flow in the lipodermatosclerotic area was high before operation (2.66-4.08 V) and markedly reduced postoperatively (0.60-2.64 V) (Fig. 3). An example of the regionally raised blood flow in the lipodermatosclerotic area is easily seen in Figure 1A. Postoperatively the flow is normalized (Fig. 1B). All the histological specimens showed an increased number of transected capillary loops in the papillary dermis and fibrosis of the reticular dermis and subcutis.
DISCUSSION
In longstanding venous ulcers with pronounced lipodermatosclerosis, it is often impossible to achieve healing in spite of intensive local treatment, even including skin transplantation of the ulcer. This view is supported by our own findings of lower healing rates after pinch grafting in venous leg ulcers compared with leg ulcers of other aetiologies (11). A possible explanation for this is the disturbed microcirculation in the lipodermatosclerotic area (2). Thus it is logical to assume that surgical removal of the lipodermatosclerotic area would be beneficial for ulcer healing. Our study showed favourable results with excision of the lipodermatosclerosis in 6 out of 7 therapy-resistant venous ulcers that had previously shown no response to adequate venous surgery. The effectiveness of removing lipodermatosclerotic tissue has previously been reported using so-called shave therapy (13). The present study, using a laser Doppler scanner that makes it possible to measure the regional distribution of blood flow, showed a raised basal skin blood flow in the lipodermatosclerotic skin areas. This is in concordance with previous point measurements using the laser Doppler probe (3, 5-7). The postoperative laser Doppler scan shows flow reduction and even normalization in the transplanted area, which may be the explanation for the better skin graft survival. Excision of the sclerotic skin area thus removes the unfavourable ulcer bed, enhances the chances for graft ingrowth and probably reduces the risk of ulcer recurrence. Despite the low number of patients included in the study, our results indicate that removal of the sclerotic tissue, and thereby the area with pathological microcirculation, is an alternative in the treatment of non-healing venous ulcers. Further studies are in progress. It should be noted that lipodermatosclerosis surgery should be tried only after all other therapeutic measures, including adequate venous surgery, have failed.
"year": 2000,
"sha1": "13420980e8a1c91a463918c30f54d5ca2b0c12a5",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1080/000155500750012478",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "21f1cb09456c20da3d07df1c3703e74c7b54250c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55029728 | pes2o/s2orc | v3-fos-license | Brevibacillus thermoruber 9X-GLC, a Bacterium Isolated from Hot Compost, Producer of a Beta-Glucosidase Resistant to Glucose Inhibition
Centro de Investigación en Biotecnología Aplicada del Instituto Politécnico Nacional, Km 1.5 Carretera Estatal Tecuexcomac-Tepetitla, 90700, Tepetitla, Tlaxcala, México; Centro Universitario de Vinculación y Transferencia de Tecnología, Benemérita Universidad Autónoma de Puebla, Prolongación de la 24 Sur y Avenida San Claudio, 72570, Puebla, Puebla, México; Departamento de Biotecnología y Bioingeniería, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, San Pedro Zacatenco, Gustavo A. Madero, 07360, México; Departamento de Bioquímica, Facultad de Medicina, Universidad Nacional Autónoma de México, Building D, 1st Floor, Ciudad Universitaria, 04510, México
Introduction
Cellulose is a linear biopolymer formed by glucose units joined by β-1,4 glycosidic bonds. The structural nature of cellulose makes it insoluble in water and prevents enzymatic attack, an effect known as recalcitrance (Himmel et al., 2007). Hot composts of lignocellulosic by-products of agro-industrial processes have been reported as excellent sources for the isolation of thermotolerant microorganisms useful for the production of oxidoreductases and hydrolases with potential application in the conversion of cellulose to fermentable sugars, biofuels and chemicals (Himmel et al., 2007).
Enzymatic degradation of cellulose is an important step in bioethanol production from plant biomass; it requires the synergistic action of multiple enzymes, mainly endo-β-1,4-glucanase, cellobiohydrolase (exo-glucanase) and β-glucosidase (Fujita et al., 2004). β-Glucosidases are present in bacteria, fungi and plants and show hydrolytic activity on cellobiose and on aromatic compounds such as arbutin and salicin present in some plant tissues (Tajima et al., 2001; Spiridonov and Wilson, 2001; Park et al., 2002; Bogas et al., 2007). Enzymatic systems lacking β-glucosidase, or those producing it in small amounts, show incomplete saccharification of plant cellulose and usually show product inhibition (Gusakov and Sinitsyn, 1992). Since cellobiose is an inhibitor of endo-glucanase, β-glucosidase usually becomes the bottleneck of cellulose saccharification and thus of bioethanol production. Accumulation of cellobiose during enzymatic hydrolysis can decrease overall cellulose hydrolysis through inhibition when the active site of cellulases is blocked (Sørensen et al., 2013). Desirable characteristics of microbial cellulase producers include tolerance to ethanol and thermotolerance when considered as part of a mixed culture for simultaneous saccharification and fermentation (SSF) processes (Sørensen et al., 2013). During enzymatic hydrolysis and saccharification of cellulose, the end product (glucose) is generally not removed, and product inhibition occurs, decreasing the reaction rate and cellulose hydrolysis. Most publications concerning cellulase production have focused on fungal enzymes; nevertheless, the use of bacteria for enzyme production has attracted interest, mainly because bacteria show a shorter growth period, allowing faster enzyme production. Furthermore, bacteria inhabit a wide variety of environments, which makes some strains resistant to extreme conditions. The Bacillus group shows a high growth rate, easy adaptability to adverse environments and modest nutritional requirements; this work therefore aimed to find β-glucosidase producers in this microbial group. In this article we report the isolation and primary characterization of a novel thermotolerant and ethanol-tolerant bacterium, Brevibacillus thermoruber 9X-GLC, as well as its β-D-glucosidase activity. The strain was isolated from sugar cane bagasse hot compost, and it shows β-D-glucosidase activity highly tolerant to glucose inhibition in concentrated culture supernatants. The strain grows at 55°C and tolerates 5% (v/v) ethanol in the culture medium. 9X-GLC produces a highly glucose-tolerant (>200 mM) β-D-glucosidase in submerged fermentations containing either glucose, cellobiose, cellodextrins, Avicel or xylan as the sole carbon source. The putative structural gene encoding β-D-glucosidase was amplified and sequenced, revealing high homology to bacterial glucosidases.
Enrichment Cultures and Isolation of Microorganisms

Soil and compost samples were obtained from different locations in the State of Veracruz, Mexico. For the isolation of microorganisms with β-glucosidase activity, collected samples were suspended in liquid Bacillus Mineral Medium (BMM) with the following composition (g/L): K₂HPO₄ (0.5); NH₄NO₃ (0.5); MgSO₄ (0.2); and either cellobiose or cellodextrins (0.1) as the sole carbon source (De la Cerna, 2011). To select for spore-forming bacilli, soil or compost suspensions were incubated at 80°C for 48 h. Aliquots from the enrichment cultures were then diluted and inoculated onto agar plates containing the same mineral medium. Plates were incubated at 50°C for 24 to 72 h. Prevalent colonial morphotypes were isolated and purified. Pure cultures of fast-growing gram-positive rods were subcultured and maintained at -70°C or freeze-dried for further use.
Production of Cellodextrins
The method for the production of cellodextrins was based on De la Cerna (2011) and Eveleigh, D.E. (personal communication, 2009). Microcrystalline cellulose (Avicel; 20 g) was suspended in distilled water (50 mL) and hydrolyzed by the slow addition of 100 mL HCl followed by 20 mL H₂SO₄. The mixture was then heated at 100°C for 10 min and cooled to room temperature. The pH was adjusted to 7.0 with KOH (40% w/v) and the mixture was diluted with distilled water to a final volume of 200 mL. The mixture was then transferred to a 1 L Erlenmeyer flask. For the final step of cellulose hydrolysis, 2 mL of endoglucanase (Celluclast 1.5L, Sigma-Aldrich, USA) were added to the prehydrolyzed cellulose preparation. The reaction was carried out at 45°C for 3 h under agitation in a rotary shaker at a fixed rate of 150 rpm. The mixture was then centrifuged to separate insoluble residual cellulose, and the supernatant was cooled to 4°C and treated with cold ethyl alcohol (96% v/v, from a local supplier). Addition of ethanol (100 mL at -20°C) produced a cloudy suspension containing colloidal cellodextrins. The solution was ultrafiltered through a 1 kDa membrane (Millipore, USA) to obtain a concentrated retentate (approximately 4.8 g) of a white mass containing mainly soluble cellodextrins. The concentrated preparation was washed twice with 25 mL of cold ethyl alcohol (96% v/v) and the insoluble residue was air-dried. The final dried preparation contained 4.2 g of pure, water-soluble cellodextrins.

β-Glucosidase Activity Assay

β-Glucosidase activity was assayed by a method based on that previously described (Cai et al., 1998), measuring the p-nitrophenol liberated by enzymatic hydrolysis of p-nitrophenyl β-D-glucopyranoside (Sigma-Aldrich). The reaction mixture consisted of 200 µL phosphate buffer (0.1 M, pH 6), 700 µL p-nitrophenyl β-D-glucopyranoside (10 mM; Sigma-Aldrich, USA) and 100 µL of enzyme extract. The enzymatic reaction was performed at 45°C for 30 min and stopped by the addition of 200 µL of 1 M sodium carbonate. Absorbance was measured at 405 nm using a spectrophotometer (Hach DR 5000, USA), and enzyme activity was expressed in β-glucosidase units, where one unit was defined as the amount of enzyme needed to produce 1 µmol of p-nitrophenol per minute.
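A worked example of the unit calculation may help: one β-glucosidase unit releases 1 µmol of p-nitrophenol per minute, so the volumetric activity follows from the stopped-reaction absorbance via Beer-Lambert. The molar absorptivity used below (~18 mM⁻¹ cm⁻¹ for p-nitrophenolate at 405 nm after the alkaline stop) is an assumed literature value, not stated in the text; in practice a p-nitrophenol standard curve should be used.

```python
def beta_glucosidase_units_per_ml(a405, blank_a405, reaction_vol_ml,
                                  enzyme_vol_ml, minutes,
                                  epsilon_mM_cm=18.0, path_cm=1.0):
    """Volumetric beta-glucosidase activity from a pNPG endpoint assay.

    One unit = 1 umol of p-nitrophenol released per minute. The
    extinction coefficient (epsilon_mM_cm) is an assumed value for
    p-nitrophenolate at 405 nm, not taken from the study.
    """
    pnp_mM = (a405 - blank_a405) / (epsilon_mM_cm * path_cm)
    umol_pnp = pnp_mM * reaction_vol_ml          # mM * mL = umol
    return umol_pnp / minutes / enzyme_vol_ml    # U per mL of extract

# Assay as described: 1.2 mL final volume (0.2 buffer + 0.7 substrate
# + 0.1 extract + 0.2 carbonate stop), 0.1 mL extract, 30 min.
print(beta_glucosidase_units_per_ml(0.65, 0.05, 1.2, 0.1, 30))
```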
Preparation of 9X-GLC cell lysate
After cultivation of 9X-GLC at 45°C for 48 h, 100 mL of LPM broth were centrifuged at 4835 × g for 20 min at 4°C. The whole cell pellet was resuspended in 1 mL of deionized water. To the cell suspension, glass beads (106 µm diameter, Sigma-Aldrich, USA; 0.1 g) were added, along with 0.05 g of glass beads of 3.3 mm diameter. The mixture was incubated at -20°C for 1 h. After thawing, the mixture was vigorously shaken with a vortex for 5 min and then incubated again at -20°C for 5 min. This freeze-shake procedure was repeated two more times. After the final thawing, 100 µL of a lysozyme solution (0.5 mg mL⁻¹) were added and the mixture was gently shaken for 50 min at room temperature, followed by vortexing for 2 min. The mixture was then centrifuged for 10 min at 4835 × g at 4°C. The supernatant was recovered and PMSF was added to a final concentration of 1 mM. The lysis supernatant was then passed through a 0.22 µm filter and placed in phosphate buffer (pH 6.0; final concentration 0.1 M).
Identification of 9X-GLC Strain
Genomic DNA from isolate 9X-GLC (DNA extraction kit from Zymo Research, USA) was used as a template to amplify the 16S rDNA sequence using the primers CU-01P46F 3´ and CU-031-P1540R, described by Edwards et al. (1989). The PCR was performed in a mixture containing 0.2 µM of each primer, 0.5 µg of genomic DNA, 200 µM dNTPs and 0.05 U of DNA polymerase (HotStarTaq Plus DNA Polymerase, QIAGEN, USA) with 5 µL of 1× PCR buffer. The reaction mixture was incubated for 25 cycles as follows: 95°C/5 min, 94°C/30 s, 60°C/30 s, 72°C/1 min, with a final extension of 1 min at 72°C. The PCR product was purified using the QIAquick Gel Extraction Kit (QIAGEN, USA) and sequenced. The sequence obtained was analyzed by alignment with sequences from the NCBI database and was also used to build a phylogenetic tree with the program Phylogeny.fr (Dereeper et al., 2008).
Isolation of Microorganisms with β-Glucosidase Activity
Spore-forming thermotolerant bacilli were isolated from 17 compost and soil samples. The isolates obtained were cultured at either 50°C or 25°C. Only 4.2% of all isolates grew at both 50°C and 25°C, while 95.8% grew exclusively at 25°C. Cellodextrins were used as a carbon source by only 20% of all isolates; the remaining isolates could grow only on cellobiose. All bacillus isolates were tested for their ability to grow on different carbon sources using BMM. Thirteen isolates were selected for their ability to grow on both cellobiose and cellodextrins. Isolate 9X-GLC was able to grow on agar plates and in liquid cultures using cellodextrins or cellobiose as the sole carbon source.
Biochemical Characterization of Extracts
Thirteen enzyme extracts from the selected isolates were also evaluated for their β-glucosidase activity at different pH values and temperatures. Table 1 presents the optimal temperatures for β-glucosidase activity in ultrafiltration (UF) concentrates of culture supernatants from the different bacillus strains. Most extracts showed the highest enzyme activity at 45°C, but extracts from 4 isolates showed optimal enzyme activity at temperatures higher than 50°C: 10C, 11C, 2AIC and 9X-GLC. Crude UF concentrates of culture supernatants were also tested for thermal stability and glucose inhibition of β-glucosidase activity. Figure 1 shows the thermal stability of β-glucosidase in 6 selected concentrated extracts incubated for 4 h at 50°C. Extracts from isolates 2AIC, 3AIC and 9X-GLC maintained 70-80% of the initial enzyme activity, while all other extracts retained only 50-60% of the initial activity. Strain 9X-GLC showed high thermal stability, retaining 75% of its β-glucosidase activity after 4 h of incubation at 50°C.
The effect of glucose (0 to 200 mM) on β-glucosidase activity in selected bacillus culture extracts is presented in Fig. 2. Enzyme extracts from 9X-GLC retained 57% of the initial enzyme activity after incubation for 30 min with glucose at a concentration of 200 mM. A commercial enzyme preparation, Novozym 188 (Sigma-Aldrich), with much higher β-glucosidase activity than that produced by the selected isolate (ca. 500-fold), showed very high thermal stability (almost 68% of the initial activity after incubation for 4 h at 50°C) but a sharp drop (98%) in activity when incubated in the presence of 200 mM glucose (3.6% w/v). Figure 3 shows the enzyme activity of concentrated culture extracts from strain 9X-GLC after cultivation in media containing different carbon sources. Extracts obtained from 9X-GLC cultures containing either cellobiose or cellodextrins showed the highest β-glucosidase activities.
The effect of the carbon source on the ability of strain 9X-GLC to grow in LPM at 45°C is presented in Table 2. The results show that glucose, cellobiose and cellodextrins were the best substrates for 9X-GLC growth, while pectin gave the lowest viable cell yield after 48 h of incubation.
Thirteen selected Bacillus strains were tested for their ability to grow in the presence of ethanol (5% v/v), as this is an important issue for simultaneous saccharification and fermentation processes. Of these isolates, only 5 showed growth at a detectable rate (Fig. 4). Strain 9X-GLC showed a clear ability to grow in the presence of ethanol.
Localization of Enzyme Activity
Comparison of culture supernatants with disrupted-cell extracts showed that the β-glucosidase activity in cell lysates of 9X-GLC is higher than that present in supernatants of the same culture (Fig. 5A). As shown in Fig. 5B, 45°C was the optimal temperature for β-glucosidase production during cultivation of 9X-GLC in liquid media.
Molecular Identification of 9X-GLC
The sequence obtained from the 16S rDNA amplification was used to construct a phylogenetic tree (Fig. 6). This phylogenetic tree and the percentage identity of the sequence identify this isolate as Brevibacillus thermoruber. The B. thermoruber sequence with NCBI accession number KJ722521.1 showed the maximal identity (97%) to the study strain, B. thermoruber KU255843. The identification of the strain was also confirmed by a professional molecular identification service (Accugenix, Delaware, USA) (data not presented).
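For reference, the percentage identity behind such an assignment is simply the fraction of matching aligned positions; the short Python sketch below illustrates the calculation on toy aligned sequences (stand-ins only, not the actual 16S reads).

def percent_identity(aln_a: str, aln_b: str) -> float:
    """Identity over aligned columns, skipping gap-gap columns."""
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if (a, b) != ("-", "-")]
    matches = sum(a == b and a != "-" for a, b in pairs)
    return 100 * matches / len(pairs)

print(round(percent_identity("ACGT-ACGTAC", "ACGTTACGTAA"), 1))  # -> 81.8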
Discussion
When cultured in LPM containing cellodextrins, a high cellobiohydrolase activity was measured in the ultrafiltered culture supernatant (<100 kDa, >1 kDa) of 9X-GLC, obtained from 48 h cultures at 45°C. The enzyme activity was 55 mU mL−1 at 50°C using p-nitrophenyl β-D-cellobioside as substrate. According to most authors (den Haan et al., 2013; Zoglowek et al., 2015), this enzyme is responsible for the hydrolysis of gluco-oligosaccharides, producing both glucose and cellobiose as the main reaction products. To our knowledge, this is the first reported Bacillus species capable of simultaneous catabolism of cellobiose and cellodextrins with quantifiable β-glucosidase and cellobiohydrolase activity. The ability to catabolize both cellobiose and cellodextrins is relatively unusual among microorganisms, although some microbial isolates have been reported to use both as carbon sources. Yeast isolates such as Candida wickerhamii, Candida lusitaniae and Dekkera intermedia (Freer and Detroy, 1982; Kilian et al., 1983; Blondin et al., 1982) were found to successfully hydrolyze cellobiose through β-glucosidase, but only C. wickerhamii could metabolize both cellobiose and cello-oligosaccharides. Anaerobic bacteria have also been reported to catabolize cellobiose and cellodextrins; classic examples are Bifidobacterium breve and Bacteroides polypragmatus (Pokusaeva et al., 2011; Mackenzie et al., 1986), both capable of hydrolysis and catabolism of these oligosaccharides.
The optimal temperature for enzyme activity and the thermal stability of 9X-GLC extracts were among the highest of the evaluated isolates. A purified β-glucosidase produced by Bacillus subtilis (Argungu et al., 2014) showed an optimal temperature of 60°C at pH 7.0; on the other hand, characterization of a β-glucosidase from an isolated Bacillus halodurans strain found optimal conditions of 45°C and pH 8.0 (Naz et al., 2010). Enzyme activity and stability at high temperatures (>50°C) are important features of cellulases used in bioethanol production, especially in SSF processes (Singhania et al., 2013).
Inhibition of β-glucosidase activity by glucose is a common problem during lignocellulose saccharification and a key constraint on bioethanol production using saccharifying enzymes (Saha and Bothast, 1996).
In a study of an aryl-β-glucosidase from Trichoderma spp., enzyme activity was completely inhibited at 1% (w/v) glucose, while the enzyme characterized from Microbispora bispora (Waldron et al., 1986) was inhibited by 35% in the presence of glucose at a concentration of 10% (w/v). Strain 9X-GLC was selected from all the microbial isolates because of its high resistance to glucose inhibition as well as its high thermal stability.
Among the carbon sources tested for β-glucosidase induction in 9X-GLC, cellobiose was the best inducer of enzyme activity. These results are similar to those reported for bacteria isolated from soil (Busto et al., 1995), where cellobiose was a notably better inducer of β-glucosidase activity than carboxymethylcellulose. Since cellobiose and cellodextrins are the natural substrates of β-glucosidase in this microorganism, it seems clear that these substrates promote expression of the enzyme.
β-Glucosidase activity in cell lysates of 9X-GLC was higher than the activity present in supernatants of the same cultures. These results are consistent with another report (Kim et al., 2012) suggesting that β-glucosidases are mainly cell-bound in Bacillus and related genera. In that study of Bacillus isolates from agricultural environments, all β-glucosidase activity in Bacillus subtilis strains was located in the whole-cell pellet, while no activity was detected in culture supernatants. Furthermore, in work with Bacillus licheniformis (Dhillon et al., 1985), the strain could grow in a medium with cellobiose as the sole carbon source, but no β-glucosidase activity could be detected in either the cell fractions or the culture supernatants. The authors attributed the ability to catabolize cellobiose to an initial reaction catalyzed by cellobiose phosphorylase.
In work not presented in this study, a gene related to β-glucosidase activity was sequenced and deposited in the NCBI GenBank database under accession number KU25584. The sequence was annotated using the bioinformatics tool RAST (Aziz et al., 2008). To obtain the sequence, oligonucleotides were designed from a gene present in the genome with accession number GCA_000454065.1 from Brevibacillus thermoruber 423 (Yildiz et al., 2013), since this species matches the identification of 9X-GLC described above. Using the NCBI alignment tool BLAST, a gene belonging to glycosidase family 3 was identified. The gene is described as a β-hexosaminidase (EC 3.2.1.52) and contains 2118 base pairs. This group of enzymes is widely present in bacteria and yeast, and the enzymes in this family include β-glucosidases, β-hexosaminidases and chitobiases.
The partial sequence obtained consisted of 709 base pairs, and local alignment of this sequence assigned it to glycosidase family 3. From the genome walking amplification, 4 amplification products were obtained and subsequently sequenced using Illumina technology, yielding a total of 314,356 reads. These sequences and the previous 709 bp sequence were assembled using Geneious™ version 8.1.2 (Kearse et al., 2012) to produce an assembled 3405 base pair sequence, from which an ORF of 2085 base pairs was identified that corresponds to a member of glycosidase family 3. Its alignment and comparison against databases showed strong homology to catalytic domains present in other glycosidases, mainly β-hexosaminidases and β-glucosidases present in Bacillus and related genera.
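To illustrate the ORF-identification step, the following is a minimal forward-strand ORF finder in Python; the toy sequence and the single-strand handling are simplifying assumptions for illustration only and do not reproduce the Geneious-based pipeline actually used.

import re

def longest_orf(seq: str) -> str:
    """Longest ATG..stop open reading frame on the forward strand."""
    best = ""
    for frame in range(3):
        codons = re.findall("...", seq[frame:])   # non-overlapping codons
        start = None
        for i, codon in enumerate(codons):
            if codon == "ATG" and start is None:
                start = i                          # open a candidate ORF
            elif codon in ("TAA", "TAG", "TGA") and start is not None:
                orf = "".join(codons[start:i + 1])
                if len(orf) > len(best):
                    best = orf
                start = None                       # close and keep scanning
    return best

print(longest_orf("CCATGAAATTTGGGTAACC"))  # -> ATGAAATTTGGGTAA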
A Basic Local Alignment Search and the Conserved Domains finder tool (Marchler-Bauer et al., 2009; Marchler-Bauer and Bryant, 2004) identified a BglX domain, which is described as a sequence belonging to a β-glucosidase or a related glycosidase. As confirmed by several authors (Painbeni et al., 1992; Busto et al., 1995; Kim et al., 2012), glycosidases such as β-glucosidase are enzymes of variable activity, acting on substrates such as cellobiose, arbutin, salicin, esculin, chitin aglycones and the non-reducing ends of cellodextrins.
To analyze whether the active-site profile of this protein could provide information about its β-glucosidase family, a structural model of the protein sequence was constructed with the I-TASSER server (Zhang, 2008; Roy et al., 2010; Yang et al., 2015). The best model presented a good C-score (-1.66) with a TM-score of 0.51 ± 0.15. The structural model corresponds to the group 3 glycoside hydrolases, in particular to the subfamily of two-domain β-glucosidases: an N-terminal (β/α)8 barrel fold (TIM-barrel) and a C-terminal αβα sandwich fold, which is a structural homolog of the N-acetylglucosaminidase of Bacillus subtilis (PDB ID: 3bmx). The best structural model constructed by I-TASSER gave its best structural alignment (TM-score of 0.791) with the β-glucosidase of Bacillus subtilis.
A characteristic of the binding site of the two-domain N-acetylglucosaminidase analyzed is the presence of an Asp-His catalytic dyad, which suggests a distinctive glycoside hydrolysis mechanism similar to the catalytic triad of serine proteases.
When the two structures were aligned, the catalytic dyad residues His234 and Asp318 of the N-acetylglucosaminidase of Bacillus subtilis matched structurally with the positions of the His and Asp residues that form the catalytic dyad present in our model. A homologous residue is also present in Bacillus subtilis: the Asp at position 232, which lies within hydrogen-bonding distance of the His and is required for the acid/base mechanism.
In one study (Mayer et al., 2006), two enzymes present in Cellulomonas fimi were characterized: a β-N-acetylhexosaminidase and a β-N-acetylglucosaminidase/β-glucosidase. The β-N-acetylglucosaminidase showed β-glucosidase activity and belongs to glycosidase family 3, as does the protein encoded by the gene sequenced in our work. A glycosyl hydrolase belonging to family 3 (Choi et al., 2009) was identified as a β-N-acetylhexosaminidase; it showed specificity toward β-linked N-acetylglucosamine sugars and showed activity on pNPGlc, the substrate commonly used in β-glucosidase assays. Other reports have demonstrated the broad activity of glycosyl hydrolases on different substrates (Painbeni et al., 1992; Busto et al., 1995; Kim et al., 2012). The demonstrated capability of 9X-GLC to grow on media with cellobiose or cellodextrins, together with the measured β-glucosidase activity of the strain and the characterization of the sequenced gene, suggests that the gene described here encodes an enzyme with both β-N-acetylglucosaminidase and β-glucosidase activities. Further research is needed to fully describe the specificity of this enzyme and to contribute to the annotation of this group of hydrolases.
Conclusion
A Gram-positive bacillus strain was isolated from hot compost; its 16S ribosomal sequence showed 97% identity to that reported for Brevibacillus thermoruber. The isolated microorganism is able to grow on media with cellobiose or cellodextrins as the sole energy source. Characterization of the optimal pH and temperature for β-glucosidase activity showed that the Brevibacillus thermoruber 9X-GLC extract presented its optimal activity at 55°C and pH 6.0, and its resistance to glucose inhibition is noteworthy. A gene coding for this enzyme was amplified and sequenced; the enzyme has a high percentage identity with reported glycosidase genes such as β-glucosidases, chitobiases and β-hexosaminidases. The enzyme produced by this strain could be used for the production of fuels and chemicals from lignocellulosic plant biomass.
Acknowledgments
We acknowledge the National Council of Science and Technology (CONACYT) of the Mexican Government for the scholarship and research grants that supported the development of this study.
Funding Information
This study was funded by SENER (Energy Department) and CONACYT (National Council of Science and Technology) under project #151370, "Desarrollo y adaptación de tecnología para la conversión de subproductos lignocelulósicos en etanol carburante" (Development and adaptation of technology for the conversion of lignocellulosic byproducts into fuel ethanol), in Mexico. | 2019-04-02T13:11:57.921Z | 2017-10-20T00:00:00.000 | {
"year": 2017,
"sha1": "5312ed52489321edc7cb1cf4a2ddab9b4996afa1",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajbbsp.2017.157.166",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f273d16e5e129f0607247ec82cb10f6c804220db",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
1945531 | pes2o/s2orc | v3-fos-license | Discriminant and concurrent validity of a simplified DSM-based structured diagnostic instrument for the assessment of autism spectrum disorders in youth and young adults
Background To evaluate the concurrent and discriminant validity of a brief DSM-based structured diagnostic interview for referred individuals with autism spectrum disorders (ASDs). Methods To test concurrent validity, we assessed the structured interview's agreement with the expert clinician assessment and with the Social Responsiveness Scale (SRS) in 123 youth. Discriminant validity was examined using 1563 clinic-referred youth. Results The structured diagnostic interview and SRS were highly sensitive indicators of the expert clinician assessment. Equally strong was the agreement between the structured interview and SRS. We found evidence for high specificity for the structured interview. Conclusions A simplified DSM-based ASD structured diagnostic interview could serve as a useful diagnostic aid in the assessment of subjects with ASDs in clinical and research settings.
Background
Autism spectrum disorders (ASDs) comprise a group of neuropsychiatric disorders that include autistic disorder, Asperger's disorder, and pervasive developmental disorder not otherwise specified (PDD-NOS). They are distinguished from other psychiatric disorders by the presence of deficits in reciprocal social behavior, variously accompanied by deficits in communication and/or repetitive or stereotyped behaviors. The DSM-III-R [1] and DSM-IV [2] have operationalized the diagnostic criteria necessary for establishing diagnoses of ASDs based on the presence or absence of a set of categorical symptoms. While a thorough evaluation by an expert clinician who has significant experience in specific diagnoses is considered the best method of diagnosing complex conditions such as ASDs, structured diagnostic interviews have been developed to help non-expert clinicians elicit the required information for these diagnoses. The most widely used structured interview tool for establishing a diagnosis of autism in the research setting is the Autism Diagnostic Interview-Revised (ADI-R). This interview requires specialized training to administer, and the training to become proficient in its administration is expensive and time consuming. Additionally, the ADI-R takes at least 2 hours to complete, making it of limited feasibility in clinical settings and in large population-based studies.
In contrast to the 93 questions and associated complex algorithms of the ADI-R, the DSM includes only 16 items in its diagnostic criteria for ASDs. Moreover, the literature clearly indicates that most individuals with ASD have comorbid conditions such as Attention-Deficit/Hyperactivity Disorder (ADHD) and mood disorders [3][4][5][6][7], and because the ADI-R by itself does not assess these disorders, which are pervasive in this population, an additional means of diagnosing them is necessary, further increasing the length of time needed for a full assessment. This situation calls for the development of simplified instruments to aid in the assessment of ASDs in clinical and non-clinical settings.
Several attempts have been made to simplify the complexity of the assessment process for youth with ASDs. One such effort is the development of the Social Responsiveness Scale (SRS), a paper-and-pencil instrument that can be completed by parents or teachers in 15-20 minutes. Constantino and colleagues [8,12] demonstrated its concurrent and discriminant validity as a measure of ASDs. As part of these efforts, Constantino et al. [8] compared the SRS with the ADI-R in 61 child psychiatric patients. Correlations between SRS scores and ADI-R algorithm scores for DSM-IV criterion sets were on the order of 0.7. SRS scores were unrelated to I.Q. and exhibited inter-rater reliability on the order of 0.8. Though the SRS is a valid quantitative measure of autistic traits, the instrument cannot distinguish autistic disorder from other disorders on the spectrum (Asperger's disorder and PDD-NOS) in individuals with ASD [8]. Despite the utility of the SRS as a screening tool for ASD, there remains a need for a simplified DSM-based structured diagnostic interview module to aid in the diagnosis of individuals with ASD in clinical and research settings.
The main aim of the present study was to evaluate the concurrent and discriminant validity of a simplified, relatively brief, structured, diagnostic interview closely linked to the defining features of ASDs in the DSM (see Table 1). To examine the concurrent validity of this instrument, we examined its correspondence with a gold standard expert clinician's diagnoses in a large sample of clinically referred youth with ASDs. In addition, we examined its correspondence with the SRS because of the previously documented excellent correspondence between the SRS and the ADI-R [8]. To examine the discriminant validity of the DSM-ASD structured diagnostic interview, we calculated its specificity comparing subjects with ASD with those from a large sample of clinic referred youth with ADHD. We hypothesized that our DSM-based structured diagnostic interview for ASDs would have good concurrent and discriminant validity.
Methods
Participants
ASD subjects were youth aged 4 to 23 years, consecutively referred to a specialized program for the treatment of ASDs at a university-affiliated hospital. The diagnosis of ASD was established by a comprehensive psychiatric evaluation conducted by a board-certified psychiatrist experienced in evaluating ASD (GJ). The psychiatric diagnostic interview was conducted with the subject and caretaker (usually the parent/s) and incorporated information from multiple sources when available (psychiatric records, schools, social services). Based on this clinical evaluation, all ASD subjects met Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) diagnostic criteria for autistic disorder, Asperger's disorder, or PDD-NOS.
Psychiatric comparison participants were derived from consecutive referrals to a pediatric psychopharmacology program at a major academic center from 1991 to 2008. Children were referred for psychiatric evaluation and psychopharmacological intervention for behavioral and emotional difficulties and not for evaluation of any specific disorder. There was no selection bias based on social class or insurance restrictions. We included subjects if they met diagnostic criteria for ADHD on a structured diagnostic interview (Schedule for Affective Disorders and Schizophrenia for School-Age Children Epidemiologic Version [K-SADS-E]) [9]. The status of ASD in the ADHD control participants was assessed by the DSM-based structured interview for ASD (see below). The structured diagnostic interview was completed by highly trained and supervised psychometricians from interviews with the parent, usually the mother. We computed kappa coefficients of agreement between these raters and experienced board-certified child and adult psychiatrists and licensed clinical psychologists. Based on 500 assessments, the median kappa coefficient was 0.98. The kappa coefficient for ADHD was 0.88. Before final diagnostic assignments were made, information derived from these interviews was reviewed blindly by a committee of expert clinicians that included board-certified child and adult psychiatrists and experienced licensed psychologists. We estimated the reliability of the diagnostic review process by computing kappa coefficients of agreement for clinician reviewers. For these diagnoses, the median reliability between individual clinicians and the review committee-assigned diagnoses was 0.87. The kappa coefficient for ADHD was 1.0.
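For reference, Cohen's kappa, the agreement statistic quoted above, follows from a simple 2 × 2 agreement table; the counts in the Python sketch below are illustrative, not the study's data.

def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """a = both raters yes, b = rater1 yes only, c = rater2 yes only, d = both no."""
    n = a + b + c + d
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

print(round(cohens_kappa(45, 2, 3, 50), 2))  # -> 0.9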
Materials
For each ASD subject, a parent also completed the Social Responsiveness Scale (SRS) [10], a 65-item rating scale that measures the severity of autism spectrum symptoms, including elements of reciprocal social behavior (39 items), social use of language (6 items), and behaviors characteristic of children with autism spectrum disorders (20 items). Each item on the scale is rated on a Likert scale from "0" (never true) to "3" (almost always true). The psychometric properties of the SRS have been well established [11][12][13][14].
DSM-based structured interview for ASD
To assess ASDs, we developed the DSM-based diagnostic criteria into an interview format, using the parent as the informant (Table 1 maps the interview questions to the corresponding DSM criteria). Diagnostic assessment of ASDs by this interview required a lifelong severe and pervasive deficit in the development of reciprocal social interaction, communication, and restricted patterns of behavior. ASD subjects were defined as subjects meeting criteria for autistic disorder or PDD-NOS. To be given the diagnosis of autistic disorder, the participant had to meet at least eight of the sixteen DSM-III-R symptoms, with at least two symptoms from each of the three aforementioned domains of PDD. A diagnosis of PDD-NOS was given if more than two of the required symptoms were met, with at least one symptom present from each of the three domains of PDD. This DSM-III-R based structured interview for ASD was developed prior to the release of the DSM-IV criteria for ASD and was administered to all the participants in this study. As data collection for the psychiatric comparison group in this study preceded the advent of DSM-IV, the DSM-III-R criteria were retained beyond the release of DSM-IV in order to maintain consistency in assessment. This DSM-based structured interview for ASD is added as a module to the K-SADS-E and is administered by a trained interviewer in the same manner as the structured interview. All questions in the structured interview are asked in yes/no format. If the interviewee positively endorses a question, interviewers have specific follow-up questions they are required to ask, including the ages at which a symptom began and ended, whether such statements have been true in the past month, and specific examples to elaborate on responses to the initial probes. Responses to the follow-up questions help to determine whether each criterion is met. Supporting the interrater reliability of this diagnostic interview, the kappa coefficient of agreement for ASDs between the raters and experienced board-certified child and adult psychiatrists and licensed clinical psychologists was 0.90. For ASDs, the reliability between an individual clinician and the review committee-assigned diagnoses was 0.88. Table 1 also shows the correspondence between the DSM-III-R and DSM-IV criteria, which indicates that our DSM-based interview sufficiently covers both versions of the DSM.
Full scale IQ
Subjects with ASDs were assessed with the WASI [15]. Psychiatric comparison subjects were assessed with the WISC-R (N = 464) [16], WISC-3 (N = 97) [17], or the WASI (N = 77) [15]. The study was approved by the Institutional Review Board and in all cases parents gave written informed consent for participation.
Statistical Approach
Conditional probabilities were calculated by determining how many clinically diagnosed ASD subjects and psychiatric comparison subjects met ASD criteria using the structured interview ASD module. We also determined how many ASD subjects had SRS scores of 60 or above. Then we examined the relationship between the structured interview diagnosis of ASD and scores on the SRS using two-sample t-tests. We randomly excluded 30% of the females from the original pool of the psychiatric comparison group so that both groups had similar gender ratios.
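For illustration, a two-sample t-test comparing SRS scores between interview-defined groups can be run as below with SciPy; the score arrays are hypothetical stand-ins, not study data.

from scipy import stats

srs_interview_positive = [82, 79, 85, 77, 88, 81]  # hypothetical SRS t-scores
srs_interview_negative = [64, 58, 66, 61, 70, 63]

t, p = stats.ttest_ind(srs_interview_positive, srs_interview_negative)
print(f"t = {t:.2f}, p = {p:.4f}")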
Results
Out of 196 consecutive referrals to the program between October 2007 and August 2009, 123 individuals met the diagnostic criteria for ASD (75 autistic disorder, 22 Asperger's disorder, and 26 PDD-NOS) on clinical evaluation by the expert clinician (GJ). The ADHD psychiatric comparison group (ADHD group, N = 1563) did not significantly differ from the ASD group on age, sex, or ethnicity (Table 2). The ASD group had, on average, a full scale IQ seven points lower than that of the ADHD group (p < 0.001, Table 2).
Concurrent Validity
Ninety-four percent (116/123) of the clinically diagnosed subjects with ASD also met criteria for ASD on the DSM-based structured diagnostic interview for ASDs, yielding a sensitivity of 94%. Ninety-five percent (n = 117) of the clinically diagnosed subjects with ASD had an SRS t-score of 60 or higher (in the clinical range for ASD). Of the 116 subjects with a positive diagnosis of ASD on the DSM-based structured interview, 112 (97%) also had an SRS t-score of 60 or higher. Of the 117 subjects with an SRS t-score of 60 or higher, 112 (96%) had a positive diagnosis of ASD on the DSM-based structured interview. Figure 1 summarizes the three-way agreement found between the clinical interview, the structured interview, and the SRS. Figure 2A shows the correspondence between the DSM-based ASD structured interview diagnosis and the SRS t-scores. The small number (n = 7) of subjects who did not meet criteria for ASD on the DSM-based structured interview (but met criteria for ASD by the expert clinician assessment) had a mean SRS t-score in the clinical range (t = 65.0, SD = 10.3) that was significantly lower than the mean SRS t-scores of subjects whose structured interview diagnosis corresponded to PDD-NOS (t = 77.5, SD = 11.7) or to autistic disorder (t = 83.8, SD = 7.8). As shown in Figure 2B, subjects diagnosed with autistic disorder had a significantly higher rate of abnormal SRS scores compared to the other two groups.
Discriminant Validity
Eleven percent (172/1563) of the ADHD group met criteria for ASDs on the DSM-based structured interview for ASDs. Therefore, a conservative estimate of the specificity of the DSM-based structured interview for ASD was 89%. The positive predictive value was 40% (116/288), and the negative predictive value was 99.8% (1391/1398). In the ADHD group, subjects with a positive ASD diagnosis from the DSM-based structured interview were significantly younger than subjects without an ASD diagnosis (9.6 ± 3.5 versus 10.4 ± 3.3, z = -2.92, p = 0.003). Age was not significantly associated with the DSM-based structured interview diagnosis of ASD in the ASD group (z = 0.05, p = 0.96). In both groups, there was no relationship between meeting criteria for a structured interview diagnosis of ASDs and full scale IQ (ASD, z = 0.60, p = 0.54; ADHD, z = 0.51, p = 0.61). In the ASD group, subjects with a previous diagnosis of ASD were significantly more likely to receive a structured interview diagnosis of ASDs than subjects who had not been previously diagnosed with ASD (previously diagnosed = 109/112, 97%; not previously diagnosed = 7/11, 64%; χ2(1) = 21.18, p < 0.001). Likewise, the clinician diagnosis was significantly associated with the structured interview diagnosis of ASD (autism = 100% received a structured interview diagnosis of ASD; Asperger's disorder = 91%; PDD-NOS = 81%; χ2(2) = 13.88, p = 0.001).
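The accuracy figures above follow directly from the reported 2 × 2 counts, as the sketch below shows (note that 1391/1398 evaluates to roughly 99.5%, marginally below the 99.8% quoted in the text).

# Diagnostic accuracy from the reported counts.
tp, fn = 116, 7        # clinically diagnosed ASD: interview positive / negative
fp, tn = 172, 1391     # ADHD comparison group: interview positive / negative

sensitivity = tp / (tp + fn)    # 116/123   -> 0.943
specificity = tn / (tn + fp)    # 1391/1563 -> 0.890
ppv = tp / (tp + fp)            # 116/288   -> 0.403
npv = tn / (tn + fn)            # 1391/1398 -> 0.995
print(f"Se={sensitivity:.3f} Sp={specificity:.3f} PPV={ppv:.3f} NPV={npv:.3f}")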
Discussion
The main purpose of the present study was to evaluate the concurrent and discriminant validity of a simplified DSM-based structured diagnostic interview for the assessment of ASDs in a clinical setting. The present study reports excellent agreement of the DSM-based ASD structured diagnostic instrument with a gold-standard expert clinician diagnosis of ASD based on a DSM-IV clinical assessment through detailed interviews with the patient and the parent. Results also showed excellent sensitivity and specificity when comparisons were made between subjects with ASD and subjects with ADHD. These results indicate that our DSM-based structured diagnostic interview for ASD can be a useful and cost-effective standardized assessment instrument for reliably identifying ASD in clinical and research settings. Many clinics and research settings employ structured diagnostic interviews to screen for a broad range of psychiatric disorders, but these structured interviews lack measures to evaluate ASD. Therefore, this DSM-based structured diagnostic interview (which is administered in the same manner as such structured interviews) would complement methods that are often used and can easily be added to these diagnostic interviews. This may improve the efficiency of the assessment, as it is included with the screening for other psychiatric conditions.
Our design provides a reasonable estimate of sensitivity (i.e., the probability of our structured interview correctly identifying ASD cases). Remarkably, the sensitivity of the structured interview was extremely high, at 94%. Equally remarkable is the finding of 95% sensitivity for an SRS t-score of 60 or higher (the accepted cut-off for an ASD screen) as an indicator of the gold-standard diagnosis. Consistent with these findings, 97% of subjects with a structured interview diagnosis of ASD also had an SRS t-score in the clinical range (≥60). If replicated, these findings would support the utility of a simple-to-use structured diagnostic instrument based on the defining items for ASD in the DSM-IV to help identify youth with ASD.
Results from our analysis show that the correspondence between our DSM-based structured interview for ASD with the expert clinician assessment was unrelated to the IQ of the subjects in both clinical samples. In addition, our structured interview allowed for the diagnosis of subjects with subthreshold disorders and different definitions of ASDs such as Asperger's Disorder or PDD-NOS, where the ADI-R algorithm score for social impairment may fall below the published clinical cutoff. Thus, our DSM-based structured interview for ASD may be useful and accurate for the assessment of ASD individuals across different cognitive and developmental levels with full and subsyndromal manifestations of these disorders.
Our results must be interpreted in the context of some methodological limitations. Since subjects in this study were referred for ASDs, our results may not generalize to other clinical and non-clinical settings. Because our sample consisted largely of Caucasian subjects, we do not know whether our results will generalize to other ethnic groups. The SRS is validated for youth ages 4 to 18 years, but 4% (5/123) of our ASD sample was older than 18 years and received the scale. Although our DSM-based structured interview for ASD was DSM-III-R based, there have been very few changes between the DSM-III-R definitions of ASD and those in DSM-IV (see Table 1). Moreover, we documented a very high correspondence between the DSM-IV based clinician diagnosis of ASD and this instrument. Just as the current version of the structured interview is able to capture both DSM-III-R and DSM-IV diagnostic criteria, a revised version will capture DSM-V measures. As currently proposed, the DSM-V criteria for autism spectrum disorders are narrower and unidimensional. As such, the DSM-based structured interview presented in this paper incorporates the criteria proposed in DSM-V.
Coding criteria could also be altered in the future to encompass the changes proposed in the diagnostic criteria for ASD in DSM-V.
Conclusions
Despite these considerations, our results document the utility of a DSM-based, simple to use and administer, relatively brief structured diagnostic instrument to aid in the identification of youth with ASDs in the clinical setting. If confirmed, these findings would suggest that our DSM-based structured diagnostic instrument for ASD could serve as a rapid and cost-effective assessment instrument to help identify cases likely to meet clinical criteria for ASDs in clinical and non-clinical settings.
Education grants from pharmaceutical companies, | 2016-05-09T04:22:50.683Z | 2011-12-30T00:00:00.000 | {
"year": 2011,
"sha1": "a83ab359d8704cc6a174addaa1a090df2485daa7",
"oa_license": "CCBY",
"oa_url": "https://bmcpsychiatry.biomedcentral.com/track/pdf/10.1186/1471-244X-11-204",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c712d11db6ce1fcd643a9434478b6e3646eb2ae6",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
9740033 | pes2o/s2orc | v3-fos-license | Comparative Genomic Analysis of Two Serotype 1/2b Listeria monocytogenes Isolates from Analogous Environmental Niches Demonstrates the Influence of Hypervariable Hotspots in Defining Pathogenesis
The vast majority of clinical human listeriosis cases are caused by serotype 1/2a, 1/2b, 1/2c, and 4b isolates of Listeria monocytogenes. The ability of L. monocytogenes to establish a systemic listeriosis infection within a host organism relies on a combination of genes that are involved in cell recognition, internalization, evasion of host defenses, and in vitro survival and growth. Recently, whole genome sequencing and comparative genomic analysis have proven to be powerful tools for the identification of these virulence-associated genes in L. monocytogenes. In this study, two serotype 1/2b strains of L. monocytogenes with analogous isolation sources, but differing infection abilities, were subjected to comparative genomic analysis. The results from this comparison highlight the importance of accessory genes (genes that are not part of the conserved core genome) in L. monocytogenes pathogenesis. In addition, a number of factors, which may account for the perceived inability of one of the strains to establish a systemic infection within its host, have been identified. These factors include the notable absence of the Listeria pathogenicity island 3 and the stress survival islet, of which the latter has been demonstrated to enhance the survival ability of L. monocytogenes during its passage through the host intestinal tract, leading to a higher infection rate. The findings from this research demonstrate the influence of hypervariable hotspots in defining the physiological characteristics of a L. monocytogenes strain and indicate that the emergence of a non-pathogenic isolate of L. monocytogenes may result from a cumulative loss of functionality rather than by a single isolated genetic event.
Keywords: comparative genomic analysis, Listeria monocytogenes, pathogenesis, hypervariable hotspots, attenuated virulence, stress survival islet, LIPI-3, DPC6895, serotype 1/2b

Introduction

Listeria monocytogenes is a Gram-positive, facultatively anaerobic food-borne pathogen, and is the causative agent of the bacterial disease listeriosis in humans and animals. Recent figures demonstrate that approximately 99% of all human listeriosis cases arise due to the consumption of contaminated food produce (1), with serotypes 1/2a, 1/2b, 1/2c, and 4b implicated as the source of infection in
Keywords: comparative genomic analysis, Listeria monocytogenes, pathogenesis, hypervariable hotspots, attenuated virulence, stress survival islet, liPi-3, DPc6895, serotype 1/2b inTrODUcTiOn Listeria monocytogenes is a Gram-positive, facultatively anaerobic food-borne pathogen, and is the causative agent of the bacterial disease listeriosis in humans and animals. Recent figures demonstrate that approximately 99% of all human listeriosis cases arise due to the consumption of contaminated food produce (1), with serotypes 1/2a, 1/2b, 1/2c, and 4b implicated as the source of infection in Comparative Genomic Analysis of L. monocytogenes Frontiers in Nutrition | www.frontiersin.org December 2016 | Volume 3 | Article 54 95% of these cases (2). Its psychrotrophic nature coupled with its tolerance of low pH and high salt concentrations (3) allows the bacterium to survive in refrigerated foods and reach levels required for human infection, if the food can support growth. L. monocytogenes is also commonly found in farm environments and silage in particular (4), and as such contaminated feeds represent a similar vector for food-borne transmission of the bacterium to animals used in food production. Its ability to cause a systemic infection in humans and animals alike is reliant on a combination of physical attributes, including resistance to environmental stresses and a capacity for virulence and survival within the host. Traditionally, genetic relationships between L. monocytogenes strains are elucidated either by pulsed-field gel electrophoresis (PFGE) involving macrorestriction of genomic DNA to generate an associated DNA fingerprint (5) or by multilocus sequence typing (MLST) where specific sequences from a number of housekeeping genes are analyzed (6). These approaches are limited, however, in that they provide little insight into the pan genome of L. monocytogenes isolates. Comparative genome analysis has emerged as a robust tool for evaluating underlying genetic properties of bacterial strains, such as their evolutionary relationships, pathogenic potential, antibiotic resistances, and niche adaptation capabilities. With regard to Listeria, comparative genomics has proven to be particularly effective in determining the basis behind a number of observed phenotypic characteristics of L. monocytogenes, including the putative identification of many virulence genes responsible for L. monocytogenes pathogenesis on the basis of their relative absence in strains of the non-pathogenic Listeria innocua (7)(8)(9). Also, recent comparative analysis of two persistent L. monocytogenes strains that were isolated from separate fish processing plants almost 6 years apart (10) identified an extremely close relationship between their genomes. As such, it was proposed that strains with specific genetic traits may be selected for within a given environmental niche, providing a potential insight into the mechanisms of persistence of L. monocytogenes. Persistence is defined as the regular re-isolation of a given strain from the same environment over the course of several months or years. Comparative genomics has also been used to analyze L. monocytogenes isolates associated with listeriosis outbreaks (11,12), to understand the unique genomic properties harbored by these strains contributing to systemic infection, and to determine the most efficient manner in which to analyze the epidemiological traits of future outbreak strains (13).
From an evolutionary perspective, one particular study involving a range of L. monocytogenes genomes of differing lineage and serotype demonstrated that this bacterial species, like others, has a highly conserved set of genes shared by all sequenced strains known as the "core genome" (14). While this core genome is common to all strains, subtyping methods (such as PFGE, MLST, and ribotyping) have demonstrated that examined L. monocytogenes isolates form a structured population consisting of a number of different evolutionary lineages (15). The majority of tested strains of serotypes 1/2a, 1/2b, 1/2c, 3a, 3b, 3c, 4b, and 4e cluster to evolutionary lineages I and II. Flagellar type "a" isolates such as serotypes 1/2a and 3a cluster to lineage II along with serotype 1/2c and 3c isolates, while flagellar type "b" isolates such as the 1/2b, 3b, and 4b serotypes all cluster within lineage I along with serotype 4d and 4e isolates (15). Two other evolutionary lineages have also been discovered and characterized. Lineage III contains serotype 4a, 4c, and a small number of 4b isolates (16) and represents a sister group to lineage I (15). Lineage IV, which was originally thought to represent a subgroup of lineage III (IIIB), is the most recently discovered (17), though only a limited number of isolates belonging to this lineage have been characterized to date. In general, the genomes of lineage I strains of L. monocytogenes (serotypes 1/2b, 3b, 4b, and 7) share a much higher degree of sequence similarity and exhibit a much lower degree of recombination than their lineage II and III counterparts (18,19). Indeed, lineage I strains of L. monocytogenes predominantly differ from one another only in terms of their serotype, sequence type, prophage compositions, and a small fraction of chromosomal genes (12-23% of the total genome) that are collectively known as the accessory genome (14). Accessory genes are not as highly conserved as the core genes and in many cases can be strain-specific. Furthermore, while these accessory genes are located throughout the L. monocytogenes genome, their distribution is not entirely random. In certain chromosomal regions, accessory gene accumulations occur as a result of prophage acquisition (14), while other regions exhibit a non-random accumulation of these genes and are therefore denoted "hypervariable hotspots, " with nine such genomic regions recently defined in L. monocytogenes (19).
In this study, the genomes of two serotype 1/2b isolates of L. monocytogenes were subjected to comparative analysis in order to determine if there is a link between their core and accessory genome contents and their phenotypic characteristics. The two strains differed in their infection abilities. One of the isolates, strain DPC6895, was incapable of establishing a systemic infection within its animal host, despite it representing one of the four L. monocytogenes serotypes responsible for the vast majority of listeriosis cases (2,20). Instead, this isolate caused a subclinical infection (21), and such subclinical infections generally go undetected, resulting in a potential public health hazard. On the other hand, strain FSL J2-064 did cause a systemic infection within its animal host. The aim of this research was to focus on a broad comparison of genes responsible for infection, intracellular survival, and proliferation within the host, in an attempt to discover a genomic basis for the perceived attenuation of pathogenesis in strain DPC6895 compared to strain FSL J2-064, and to evaluate the importance of the accessory genome in L. monocytogenes virulence and disease manifestation.
Materials and Methods

Input Strains for Comparative Analysis
The two L. monocytogenes strains examined in this study were of the 1/2b serotype. Strain DPC6895 was originally isolated from raw milk expressed by a cow with subclinical bovine mastitis (20,21), while strain FSL J2-064 is a bovine clinical isolate (22,23), but of a ribotype (or restriction digest fingerprint) that is also commonly found among food isolates (DUP-1052), and is associated with human disease (24). The genomes of both strains are available from public databases. The Whole Genome Shotgun project for L. monocytogenes strain DPC6895 was deposited at DDBJ/EMBL/GenBank and is available for download under the accession number LABG00000000. The version described in this paper is version LABG01000000. The genome sequence of L. monocytogenes strain FSL J2-064 is available from GenBank under the accession number NC_021824.
Identification of Strain-Specific Genes in Each of the Input Genomes
Whole genome comparisons were undertaken using the BLAST Ring Image Generator (25) and Mauve (26) in order to visually identify unique genomic regions belonging to each of the strains. Genes within these regions were then confirmed to be strain-specific to each particular isolate through BLAST comparisons of their translated protein sequences against the genome of the other isolate, using RAST (27,28).
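One hedged sketch of how such strain-specific genes could be flagged programmatically: BLAST each predicted protein of one strain against the other strain's assembly and keep the queries with no significant hit. The file names below are illustrative, this is not the exact BRIG/Mauve/RAST workflow used in the study, and NCBI BLAST+ must be available on the PATH.

import subprocess

out = subprocess.run(
    ["tblastn", "-query", "DPC6895_proteins.faa",      # hypothetical file names
     "-subject", "FSLJ2-064_genome.fasta",
     "-evalue", "1e-5", "-outfmt", "6 qseqid pident evalue"],
    capture_output=True, text=True, check=True).stdout

hit_ids = {line.split("\t")[0] for line in out.splitlines()}
all_ids = {line[1:].split()[0] for line in open("DPC6895_proteins.faa")
           if line.startswith(">")}
print(f"Candidate strain-specific proteins: {len(all_ids - hit_ids)}")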
Detection of Clustered Regularly Interspaced Short Palindromic Repeat (CRISPR)/CRISPR-Associated (Cas) Systems and Prophage Identification
Clustered regularly interspaced short palindromic repeat (CRISPR) clusters in each genome were identified using CRISPRFinder (29), with the flanking sequences of these clusters subsequently scanned for the presence of Cas gene sequences. Viable and cryptic prophages within each of these genomes were detected using the PHAST tool (30).
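Once a direct repeat (DR) is known, splitting an array into its spacers is straightforward; the sketch below uses a toy DR and array (stand-ins, not the actual DPC6895 locus) and ignores the slight repeat degeneracy that real arrays show.

def spacers(array: str, dr: str) -> list:
    """Return the non-empty segments between exact copies of the repeat."""
    return [seg for seg in array.split(dr) if seg]

toy_dr = "GTTTTAGAG"
toy_array = toy_dr + "AAACCCGGG" + toy_dr + "TTTGGGCCC" + toy_dr
print(spacers(toy_array, toy_dr))  # -> ['AAACCCGGG', 'TTTGGGCCC']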
Linear Comparisons and Identification of Hypervariable Hotspots
Linear comparisons of genes and gene clusters were prepared with Artemis (31) and subsequently visualized using the EasyFig software (32). Hypervariable hotspot locations in each genome were determined via BLAST analysis using conserved core gene identifiers and previously defined hypervariable hotspot locations for L. monocytogenes strain SLCC2755, which was used as a reference (19).
Results and Discussion

As noted above, L. monocytogenes strain DPC6895 failed to establish a clinical infection within its bovine host (20), though this strain did survive numerous antibiotic treatments and continued to be detected in milk expressed from the host for a prolonged period of time. While the host's immune system may have been a contributing factor, comparative analysis of these two strains was undertaken in order to determine the extent of genetic diversity between them and to identify genomic characteristics that may account for the observed physiological differences. L. monocytogenes strain DPC6895 was determined to contain a total of 123 genes that were not present in the genome of FSL J2-064 (Figure 1; Table S2 in Supplementary Material), while strain FSL J2-064 contained a total of 121 genes that were absent from the DPC6895 genome (Figure 2; Table S3 in Supplementary Material).
Strain-Specific Genes in L. monocytogenes Strain DPC6895
The strain-specific genes in DPC6895 predominantly had functions that contributed to an enhanced survival ability of this strain under a number of unfavorable environmental conditions. First, a number of these strain-specific genes had annotated functions in bacteriophage resistance. L. monocytogenes utilizes a number of biological systems in order to achieve resistance to bacteriophage infection. Foremost among these are the CRISPR sequences together with adjacent Cas genes, and the restriction modification (RM) systems, which are widely distributed among prokaryotes (33). CRISPR/Cas genes comprise the adaptive immune system in many bacterial species, including L. monocytogenes, and have a role in defense of the bacterial cell against invading bacteriophages or plasmid-derived elements (34). Immunity against foreign invasion in bacteria is achieved first by integration of a small piece of viral or plasmid DNA (known as a spacer sequence) into the CRISPR locus. During infection, CRISPR-RNAs are transcribed, which guide the Cas proteins to target DNA matching these spacer sequences; the target DNA is then cleaved (35). RM systems are also used by bacteria to protect themselves from foreign invading DNA; there are three distinct classical types in addition to several atypical systems, which differ from one another in terms of their composition and cofactor requirements (36). Each of the different classical RM system types has been previously observed in L. monocytogenes (12, 37-39). Strain DPC6895 contained one CRISPR cluster, which consisted of 22 highly conserved direct repeat (DR) regions interspersed with 21 spacer sequences (Table 2). Flanking this CRISPR cluster were a total of seven open reading frames (ORFs) annotated as Cas genes. BLASTn analyses of the spacer sequences identified homologies to a number of different temperate serovar 1/2-specific L. monocytogenes phages, including A006, A118, and LP-101, suggesting that this system has a functional role in resistance to infection by these particular siphoviruses. These homologies indicate a role for this system in expanding the immunity of the strain to cover a range of both lytic and temperate L. monocytogenes phages. The presence of this CRISPR/Cas system may enhance the capacity of strain DPC6895 to withstand a wider array of extracellular threats posed by bacteriophages. No definitive CRISPR/Cas systems were detected in strain FSL J2-064; however, this strain contained all three subunits of a type I RM system, suggesting a difference between these strains in terms of their mechanisms of phage resistance. Strain DPC6895 contains two genes with 100% nucleotide identity to the R and M subunits of this system in FSL J2-064 but does not harbor the third (S) subunit, and as such this system is presumed to be non-functional in strain DPC6895. Second, a number of the strain-specific genes in DPC6895 had annotated functions associated with antibiotic and heavy metal resistance. L. monocytogenes has previously been demonstrated to have quite a broad spectrum of resistance to numerous antibiotics and antimicrobial agents (2, 40), in addition to exhibiting an elevated tolerance to heavy metals (41). The two strains in this study were therefore analyzed for the presence of antibiotic and antimicrobial resistance genes and for heavy metal transporters.
The results of this analysis (Table S4 in Supplementary Material) demonstrated that each of the genomes harbors specific resistance genes to a number of antibiotics, including the β-lactams, quinolone, fosfomycin, lincomycin, vancomycin, and tetracycline. Additionally, these strains also contained a number of antimicrobial and quaternary ammonium compound resistance genes, including mdrL and lde, which are believed to be associated with increased tolerance of L. monocytogenes to benzalkonium chloride (42). Furthermore, lde is also thought to function in resistance of L. monocytogenes to fluoroquinolones (43). While both strains encoded many non-specific multidrug resistance transporters, strain DPC6895 harbored one additional multidrug transporter (locus tag TZ05_2661c), which was absent from strain FSL J2-064. The exact function of this particular transporter has not yet been fully elucidated, but subsequent BLASTp analysis identified a conserved domain within the translated protein product of this gene with a putative function in resistance to the lantibiotic gallidermin (44), suggesting a potentially similar role for this gene in this strain. As previously stated, L. monocytogenes strain DPC6895 was originally isolated from raw milk expressed by a cow with subclinical bovine mastitis. Following the confirmation of a subclinical infection, the infected cow was medically treated with subsequent intramammary injections of Synulox LC, tylosin, and oxytetracycline. However, the infected animal's milk continued to test positive for L. monocytogenes despite the intervention with antibiotic treatments (20). Synulox LC contains the β-lactams clavulanic acid and amoxicillin, while oxytetracycline is an antibiotic related to tetracycline, and tylosin is a macrolide antibiotic. Analysis of the DPC6895 genome identified a total of eight genes encoding proteins with associated functions in resistance to β-lactams, while a gene associated with tetracycline resistance was also identified. The presence of the aforementioned mdrL gene in strain DPC6895 may account for its perceived resistance to tylosin, given that previous research has demonstrated that disruption of this particular gene resulted in a higher susceptibility to macrolides (45).
In terms of heavy metal resistance, both of the strains contained a number of non-specific heavy metal transporters and specific lead/cadmium/zinc resistance genes. Interestingly, strain DPC6895 was also found to contain a novel 6.5 kb "islet" consisting of six genes, which included a heavy metal-transporting ATPase and a cadmium efflux system accessory protein (Table S5 in Supplementary Material). The G + C content of this islet was 35.9%, indicating that it is possibly of plasmid origin. This six-gene islet has not previously been observed in L. monocytogenes, with the only known Listeria homologs of these genes found in the recently sequenced L. innocua strain MOD1_LS888 (46). Linear alignments between these genomic regions identified a shared 99% nucleotide sequence identity, while also demonstrating that this islet is absent in strain FSL J2-064 and in two other L. monocytogenes serotype 1/2b genomes that were available on GenBank (Figure 3). The product of one of the genes in this cluster, namely TZ05_0424, shares 100% amino acid sequence identity with the Staphylococcus aureus transposase Tn552 (47), suggesting that strain DPC6895 may have acquired this cadmium resistance islet through a horizontal gene transfer event.
Finally, a number of the strain-specific genes in DPC6895 had annotated functions associated with peptide transport. The oligopeptide permease (opp) operon in L. monocytogenes consists of five genes (oppA, oppB, oppC, oppD, and oppF) that are essential for growth at low temperatures and contribute to the intracellular growth of this bacterium (48). Comparative analysis of each of the isolates in this study identified the presence of this operon in both of the genomes (data not shown). However, in addition to the oligopeptide transporter operon, strain DPC6895 also contained a unique 5 kb cluster of genes (TZ05_2018-2022) within hypervariable hotspot 9, which BLASTp analysis indicated to be a dipeptide transport system (dppABCDF) (Table S2 in Supplementary Material). The role of this system in strain DPC6895 is unclear. However, previous research has indicated that the presence of dipeptide transporters may confer a selective advantage on L. monocytogenes, given that, unlike numerous competing bacteria within an environment, it would not need to expend vast amounts of energy on protease synthesis (49). In addition, the presence of a dipeptide transporter would allow the organism to thrive in food samples that may be deficient in free amino acids but rich in peptides. The presence of this system in strain DPC6895, therefore, could allow it to proliferate in what would otherwise be considered unfavorable environmental conditions.

Strain-Specific Genes in L. monocytogenes Strain FSL J2-064

The strain-specific genes in FSL J2-064 predominantly had functions that contribute to enhancing the pathogenicity of this isolate. First, a number of the strain-specific genes in FSL J2-064 had annotated functions associated with virulence. L. monocytogenes requires a wide array of genes in order to successfully establish a systemic infection within a host organism. These genes, termed "virulence factors," have functions in a number of different biological processes throughout the infection cycle, including host interaction, internalization, host defense evasion, and in vitro proliferation. A large family of leucine-rich proteins, known as the internalins, comprises important virulence factors involved in host interaction and the internalization of pathogenic strains (50). The internalins are classified into four general types on the basis of their specific surface binding domains (51). Type I internalins are known as the LPXTG internalins due to the presence of this sorting signal motif and are covalently anchored to the bacterial cell surface by another virulence factor known as Sortase A. Type II internalins are the GW and WxL internalins, of which just two members (including inlB) have been classified to date; both members of this subfamily display a C-terminal domain that directs a non-covalent association with the L. monocytogenes cell surface (51). Type III internalins lack a cell wall-anchoring domain and are secreted by the bacterium; they are thought to promote the cell-to-cell spread of L. monocytogenes by relieving the cortical tension of the host cell and enhancing the ability of the bacterium to protrude into the plasma membrane (52). A fourth type of internalin, which contains an atypical leucine-rich repeat region, has also been recently described (53), with lmo0460 as the sole representative member. The genomes of both strains in this study were examined for the presence of internalin and internalin-like genes. A similar complement of internalins was observed in each input strain (Table S6 in Supplementary Material).
Strain FSL J2-064 contained 20 type I internalins, while strain DPC6895 contained 21 (TZ05_2026c is novel to this strain). In addition, both isolates contained a virtually identical set of type II and type III internalins. Five homologs of the L. monocytogenes strain EGDe type IV internalin lmo0460 were identified in strain FSL J2-064, localized within hypervariable hotspots 7 and 9. Though the precise function of these type IV internalin proteins has yet to be fully established, they are known to be present in many strains of L. monocytogenes but absent from non-pathogenic Listeria species such as L. innocua, and thus may have a role in L. monocytogenes virulence. Interestingly, strain DPC6895 lacked any identifiable homolog of this recently described type IV internalin, and its absence may therefore be a contributing factor to the perceived attenuated virulence of this strain. Further research, however, is necessary to fully establish the functional role of these proteins in infection. Additionally, two homologs of the internalin-like gene lmo0463 of L. monocytogenes strain EGDe (in hypervariable hotspots 7 and 9, respectively) were identified in strain FSL J2-064 but, once again, were absent from strain DPC6895. Likewise, their precise role in L. monocytogenes virulence remains unclear. Second, the stress survival islet (SSI-1) of L. monocytogenes was identified in the genome of strain FSL J2-064 but is absent from that of strain DPC6895 (Figure 4). This islet is an 8.7 kb region of DNA located between lmo0443 and lmo0449 of strain EGDe and contains five genes that have previously been implicated in the survival of the bacterium under suboptimal environmental conditions (54). Included within this cluster are the pva gene, which has a role in the resistance of L. monocytogenes to acute toxicity from bile and bile salts (55), and the gadD1 and gadT1 genes, which contribute to the efficient growth of the bacterium at low pH (56). The corresponding region in strain DPC6895 contains a single gene encoding a protein that is 100% identical to LMOf2365_0481 of L. monocytogenes serotype 4b strain F2365. The presence of this 182-amino-acid protein in strain DPC6895 is a common feature of islet-negative strains of L. monocytogenes (54). Although this particular gene is highly conserved within islet-negative strains of L. monocytogenes, the function of its hypothetical protein product has not yet been established. Prior research has shown that strains that do not harbor SSI-1 grow less efficiently under acidic stress (12), suggesting that the presence of this particular islet within the genome is beneficial in promoting survival in the unfavorable acidic environments posed by the stomach and intestinal tract of a host organism. Additionally, the observed differences in growth efficiency between islet-positive and islet-negative strains of L. monocytogenes indicate that the presence of this islet could confer an enhanced ability to proliferate within an organism and lead to an overall higher rate of infection. Therefore, the absence of SSI-1 in DPC6895 may have been a contributing factor in the observed inability of this isolate to establish a clinical infection in its host (20). A number of the strain-specific genes in FSL J2-064 also had annotated functions associated with iron uptake that were absent in strain DPC6895, including the twin-arginine translocase system (57).
Given the known association between iron uptake and L. monocytogenes virulence (58), the absence of these genes in strain DPC6895 provides another potential insight into its inability to cause a systemic infection. Finally, the genome of strain FSL J2-064 contained an intact copy of the comK gene, while a prophage insertion (contig 7, position 257959-311605) interrupted the comK gene in strain DPC6895. The entire comK gene sequence in DPC6895 is instead represented by two separate ORFs, TZ05_2272 and TZ05_2336, which together share 100% nucleotide sequence identity with the N- and C-terminal regions of the comK gene in FSL J2-064, respectively. A prophage insertion into the comK gene of L. monocytogenes is a common observation, as this gene represents a "hotspot" for integration of the serotype 1/2-specific bacteriophage A118 and other related phages (11, 13, 59). The phage insertion into this gene may have downstream consequences for the pathogenic potential of strain DPC6895, as the comK gene has recently been shown to have an important role in the phagosomal escape of L. monocytogenes during infection (60). As such, this interruption of the comK gene may be a contributing factor to the attenuated virulence of strain DPC6895. Interestingly, however, the same research demonstrated that the comK prophage in L. monocytogenes strain 10403S excises during bacterial phagocytosis, resulting in reactivation of this gene and the production of a functional ComK protein product; such an occurrence must therefore also be considered a possibility in strain DPC6895. Further investigation is required in order to fully understand the consequences of this prophage insertion.
Influence of Hypervariable Hotspots on the Virulence of Strains DPC6895 and FSL J2-064

As previously mentioned, the absence of type IV internalins (all of which are located within hypervariable hotspots in the L. monocytogenes genome) may be a contributory factor to the inability of strain DPC6895 to establish a systemic infection in the host. In addition, Listeria pathogenicity island 3 (LIPI-3) is a relatively recently discovered pathogenicity island, which has been identified in a subset of atypical L. innocua isolates (61) and a number of lineage I strains of L. monocytogenes. LIPI-3 contributes to the virulence and intracellular survival of the pathogen (62) and is located within hypervariable hotspot 8. The main function of this island is the production of a second L. monocytogenes hemolysin, namely listeriolysin S, which is induced under oxidative stress conditions (62). LIPI-3 consists of eight lls genes flanked on either side of the island by two related glyoxalase-encoding genes. Comparative analyses with other serotype 1/2b strains of L. monocytogenes showed that LIPI-3 is absent from both of the bovine isolates DPC6895 and FSL J2-064 (Figure 5), though homologs of the flanking glyoxalase-encoding genes were identified. The high variability generally observed within hypervariable hotspots of the L. monocytogenes genome may account for the absence of this island in these strains.
While the presence of LIPI-3 does not appear to be essential for establishing a systemic infection, the absence of this island may hinder a particular strain's ability to establish a systemic infection within the host.
Conclusion
The results of this study demonstrate the high degree of variability that exists between the accessory genomes of closely related L. monocytogenes isolates. The hypervariable hotspots found in various areas of the genome may be crucial in defining the physiological characteristics of a particular strain, as evidenced by the presence of important gene clusters such as the type IV internalins and LIPI-3 within these regions. L. monocytogenes strain DPC6895 was shown to lack some of the key factors associated with in vivo survival and virulence, including SSI-1 and LIPI-3, providing insights into the inability of this strain to establish a systemic infection in its host. The results highlight a number of potentially crucial factors for L. monocytogenes virulence within the accessory genome and suggest that bacterial pathogenesis in L. monocytogenes relies on the cumulative effect of a number of genetic factors rather than on any single attribute alone. From a regulatory perspective, differentiating virulent from non-virulent strains is crucially important, and whole genome sequencing, as employed in this study, can serve as a tool to explore this differentiation.
Author Contributions
ACasey carried out the laboratory work; KJ, OM, ACoffey, ACasey, and EF were involved in obtaining funding, designing the experiments, interpreting the results, and writing the manuscript.
Funding
This work was supported by
"year": 2016,
"sha1": "644d6e3b6e56341d79bf9b3c9209b54267c00f75",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2016.00054/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "15f64c47df3cbf5cd36bb982c883d4de41285f58",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
An Analytic Analysis of Phase Transitions in Holographic Superconductors
Using a simple analytic approach, we study the universal properties of second-order phase transitions in holographic superconductor models. We explore a general model in arbitrary dimensions in which the condensation occurs via the Stückelberg spontaneous symmetry breaking mechanism. All the possible second-order phase transitions and their universal characteristics can be identified analytically. The relationship between the critical temperature and the charge density is generic, and the critical exponents can be greater than the typical mean field value 1/2. In addition, the related numerical factors can also be computed qualitatively.
§1. Introduction
In the past years, the anti-de Sitter/conformal field theory (AdS/CFT) correspondence 1)-3) has been intensively applied to study strongly coupled phenomena in various physical problems, such as quantum chromodynamics (QCD), condensed matter physics, etc. By virtue of the remarkable strong/weak duality, the AdS/CFT correspondence provides a striking advance in dealing with certain field theories in the strong coupling region via their holographically dual, weakly coupled gravitational descriptions. In particular, the investigation of phase transitions in holographic superconductors has achieved significant progress; see the excellent reviews. 4)-6) The pioneering scheme of the gravitational dual description, 7), 8) the so-called abelian Higgs model, considered a scalar field and a U(1) gauge field in the planar Schwarzschild-anti-de Sitter black hole. A nontrivial profile of the scalar field manifests the condensate of the pairing mechanism, and the U(1) gauge field exhibits the electromagnetic properties of the superconductor.
One of the most interesting phenomena in holographic superconductor research is the second-order phase transition. The correlation lengths near the critical point of second-order phase transitions become divergent, so the systems can be described by a scale-invariant theory. In such circumstances, the AdS/CFT is an exceptionally appropriate technique for investigating the corresponding universal properties, in particular the critical exponents. It turns out that the critical exponent in the abelian Higgs model is 1/2, which is the universal value of mean field theory. Later, a generalized model 9), 10) was proposed in which the condensation occurs via the Stückelberg spontaneous symmetry breaking mechanism. The new features of the Stückelberg holographic superconductor models include the presence of first-order phase transitions and of second-order phase transitions with non-mean-field behavior. 9), 10) Moreover, some analytic approaches have been proposed to address the universal properties of second-order phase transitions in holographic superconductors. One suggestion is to consider a simple match of the boundary and horizon solutions of both the scalar and U(1) gauge fields, 11) see also Refs. 12), 13). Another analytic approach introduces a trial function with a free parameter that is fixed by a minimization procedure, 14) see also Refs. 15), 16). In this work, we apply the simple matching approach to analytically study the second-order phase transition in a general class of Stückelberg holographic superconductors in arbitrary dimensions. We examine all the possibilities in which the second-order transition can occur, and derive the corresponding critical exponents. Our results are consistent with the numerical analysis. 9), 10) Some related properties in three- and four-dimensional spacetimes have been discussed in Refs. 17), 18). The outline of the paper is as follows. In §2, we review the Stückelberg holographic superconductor models. The analytic study of the phase transition is given in §3, and explicit models are examined in §4. Some discussions are given in the last section.

§2. Stückelberg holographic superconductor

The Stückelberg holographic superconductor 9), 10) is a simple generalization of the abelian Higgs model 6)-8) in which the condensation occurs via the Stückelberg spontaneous symmetry breaking mechanism. In the probe limit, the action of the model can be decoupled into two parts. The gravitational section provides the background of a black hole in anti-de Sitter (AdS) space, which supplies the temperature of the holographic superconductor. The action of this section, in general (d+1) dimensions, is

\[ S_g = \frac{1}{2\kappa^2} \int d^{d+1}x \, \sqrt{-g} \left[ R + d(d-1) \right], \qquad (2.1) \]

and the gravitational background is a planar Schwarzschild-anti-de Sitter black hole,

\[ ds^2 = -f(r)\, dt^2 + \frac{dr^2}{f(r)} + r^2 \, dx_i\, dx^i, \qquad (2.2) \]

where

\[ f(r) = r^2 - \frac{M}{r^{d-2}} \qquad (2.3) \]

(the radius of AdS is fixed to be unity, i.e., L = 1). The parameter r_H represents the horizon radius of the black hole, which is determined by its mass through r_H^d = M. The Hawking temperature of the black hole is

\[ T = \frac{d}{4\pi}\, r_H. \qquad (2.4) \]

In the later analysis, it is more convenient to redefine the radial coordinate as

\[ z = \frac{r_H}{r}, \qquad (2.5) \]

such that the locations of the boundary (asymptotic region) and horizon are at z = 0 and z = 1, respectively. The second part generically includes a pair of real scalar fields, (ψ, ϕ), and one U(1) gauge field, A_µ, in response to the condensate and conductivity in the superconductor. All the fields are treated as probes, and their back reaction on gravity can be neglected. The action of this section is

\[ S_m = \int d^{d+1}x \, \sqrt{-g} \left[ -\frac{1}{4} F_{\mu\nu} F^{\mu\nu} - \frac{1}{2} (\partial \psi)^2 - \frac{m^2}{2}\, \psi^2 - \frac{1}{2}\, F(\psi) \left( \partial \varphi - A \right)^2 \right]. \qquad (2.6) \]

In this model, 9) the dynamics depends significantly on the choice of the function F(ψ).
The abelian Higgs model indeed corresponds to the particular choice F(ψ) = ψ^2. In this paper, we will focus only on the case in which F(ψ) is a polynomial in ψ.
The corresponding equations of motion follow from the variation of this action [Eqs. (2.7)-(2.9)]. By the gauge freedom, one can set ϕ = 0. Moreover, we will only consider the scalar potential excitation of the U(1) gauge field, i.e., A = φ dt. Therefore, the equations of motion in the gravitational background of (2.2) reduce, in terms of the coordinate z, to

\[ \phi'' + \frac{3-d}{z}\, \phi' - \frac{F(\psi)}{z^2 (1 - z^d)}\, \phi = 0, \qquad (2.10) \]

\[ \psi'' - \left[ \frac{d-1}{z} + \frac{d\, z^{d-1}}{1 - z^d} \right] \psi' - \frac{m^2}{z^2 (1 - z^d)}\, \psi + \frac{F'(\psi)\, \phi^2}{2\, r_H^2\, (1 - z^d)^2} = 0. \qquad (2.11) \]

§3. Analytic study of phase transition

A simple analytic approach, based on a trivial match of the asymptotic and near-horizon solutions, has been applied to address the properties of the phase transition in holographic superconductors. 11) It turns out that this simple analysis can capture the universal properties of second-order phase transitions. In this section, we reproduce such a study for the Stückelberg holographic superconductor model.
At the boundary, z = 0, the solutions of φ and ψ take the asymptotic forms of Eqs. (3.1) and (3.2). The parameters µ and ρ are interpreted as the chemical potential and the charge density of the dual theory on the boundary. There are two possible condensates of the scalar field ψ, obtained by turning on either the D_+ or the D_- mode. The associated vacuum expectation values of the dual operators O_± are then proportional to D_± [Eqs. (3.3)-(3.4)]. The parameters λ_± define the dimensions of the operators O_±. From the expression for λ_±, the mass of the scalar ψ should satisfy m^2 > -d^2/4, which is consistent with the Breitenlohner-Freedman (BF) bound for the stability of a scalar field in AdS_{d+1}. A typical choice in the literature is m^2 = -2 (for d = 3, as in Table I below). At the horizon, z = 1, the solutions can be expanded in powers of 1 - z [Eqs. (3.6)-(3.7)]. To be regular at the horizon, we impose the additional condition φ_H(1) = 0. Then, by solving the equations of motion order by order in 1 - z, the expansion coefficients can all be expressed in terms of a = φ'_H(1) and b = ψ_H(1) [Eqs. (3.8)-(3.9)]. The essential step in any analytic study is to construct a suitable scheme relating the coefficients of the boundary and horizon solutions. In our simple approach, we match the boundary and horizon solutions of φ and ψ, together with their first derivatives, at a given matching point z_m in the range 0 < z_m < 1. It turns out that different choices of the matching point give different numerical coefficients, but the universal properties, such as the critical exponent, are independent of the choice of z_m. For a generic choice of z_m, the matching conditions 10) give four relations [Eqs. (3.10)-(3.14)], in which (D, λ) stands for either (D_+, λ_+) or (D_-, λ_-).
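For reference, in the conventions standard in the literature, and consistent with the definitions of µ, ρ, D_± and λ_± used here, the boundary expansions referred to above take the form

\[
\phi(z) = \mu - \frac{\rho}{r_H^{d-2}}\, z^{d-2}, \qquad
\psi(z) = D_- z^{\lambda_-} + D_+ z^{\lambda_+}, \qquad
\lambda_\pm = \frac{d \pm \sqrt{d^2 + 4 m^2}}{2},
\]

from which the quoted stability bound m^2 > -d^2/4 (reality of λ_±) follows immediately.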
Here z̄_m = 1 - z_m is introduced to simplify the expressions. By suitably combining Eqs. (3.13) and (3.14), we can solve for D [Eq. (3.15)] and arrive at a master equation for b [Eq. (3.18)].
The solution for b depends strongly on the choice of the function F(ψ). In the next section, we explore a general class of interesting models in detail.

§4. Explicit models

In this section, we explore all the possible second-order phase transitions for models in which F(ψ) is a polynomial in ψ.
F(ψ) = ψ^n
For the simplest case F(ψ) = ψ^n, the master equation (3.18) for b reduces to Eq. (4.1). From this equation, it is obvious that n = 2 is a very special value, for which the overall factor b^{1-n/2} disappears. Indeed, this is the only case in which the second-order phase transition can happen. The value of b should tend to zero near the critical point of a second-order phase transition. Therefore, as the temperature approaches the critical value, the overall factor b^{1-n/2} vanishes for n < 2. The above equation then implies ρ → 0, which indicates that no condensate occurs. For the other case, n > 2, we instead conclude that ρ → ∞ near the critical point. This result seems to indicate that a condensate does appear, but that the transition is not second order. Naturally, one might expect the phase transition in this case to be first order.
This expectation is supported by the numerical results of Ref. 9). Nevertheless, the approach considered in this paper is not capable of studying these two cases. The special case n = 2 is just the abelian Higgs model, 9), 10) which is well studied in the literature, in particular for dimensions d = 3 and 4; there are nice reviews of this model. 4)-6) Generally, near the transition the solution for b can be expressed in terms of the black hole temperature T and the critical temperature T_c as b ∝ (1 - T/T_c)^{1/2} [Eq. (4.2)], where the critical temperature scales with the charge density as T_c ∝ ρ^{1/(d-1)} [Eq. (4.3)]. The precise numerical coefficients depend on the choice of matching point (see Table I). However, our approach can capture the universal properties, in particular the relation between the critical temperature and the charge density, and the critical exponent [Eq. (4.4)].

Table I. "Qualitative" numbers derived by the simple analytic approach for the coefficients in the critical temperature and in the condensate of the abelian Higgs model with parameters n = 2, d = 3, m^2 = -2.
F(ψ) = ψ^2 + C_α ψ^α
In this subsection, we consider an interesting generalization of the abelian Higgs model in which the second-order phase transition can still occur: F(ψ) = ψ^2 + C_α ψ^α. The ψ^2 term is needed to ensure the occurrence of the second-order phase transition, and the additional term C_α ψ^α with α > 2 is a natural generalization. For such a model, Eq. (3.18) becomes quite complicated, and it is impossible to obtain an explicit solution. However, our focus is on the properties near the critical point of the second-order phase transition, where the parameter b is expected to be small. Thus, Eq. (3.18) can be expanded in powers of the small parameter b. Keeping terms up to the leading order of b/F'(b), we obtain Eq. (4.5). It is clear that the dominant term on the left-hand side of the equation differs between two distinct ranges of α: (i) α > 4 and (ii) 2 < α < 4. In addition, the particular value α = 4 is a special case. Let us analyze these cases separately.
For α > 4, the b^{α-2} term is of higher order and can be neglected relative to b^2. The critical exponent is then exactly that of the abelian Higgs case, namely the mean-field value 1/2. Thus, the additional term C_α ψ^α with α > 4 does not change any of the consequences for the second-order phase transition.
For the case 2 < α < 4, the term b^{α-2} dominates over b^2, and Eq. (4.5) reduces to a simpler equation. This equation admits a second-order phase transition type of solution for b, namely b ∝ (1 - T/T_c)^{1/(α-2)} [Eq. (4.8)], with the same critical temperature as given in (4.3). To ensure that b is real, some constraints on the parameter C_α may be required; for example, C_4 should be negative for α = 4. The critical exponent in (4.8) is therefore 1/(α-2), which is always greater than the mean-field result. Note that the exponent is discontinuous at α = 2, the mean-field point. Finally, α = 4 is a special value: in this case we should solve Eq. (4.9), in which the coefficient of b^2 should be positive, introducing a constraint on the parameter C_4 [Eq. (4.10)].
However, the explicit value of this constraint is not exact: it depends on the choice of z_m, so our analysis can only give a qualitative result. The critical exponent is identical to the mean-field result, but the numerical factor is modified, and again it cannot be obtained exactly by our approach.
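Schematically, the origin of these exponents can be summarized as follows. This is a sketch that assumes only that the dominant side of the reduced master equation vanishes linearly in T_c - T near the transition, as is generic for the matching relations used here:

\[
C_\alpha\, b^{\alpha-2} \;\simeq\; K \left( 1 - \frac{T}{T_c} \right)
\quad \Longrightarrow \quad
b \;\propto\; \left( 1 - \frac{T}{T_c} \right)^{\frac{1}{\alpha-2}},
\]

so that ⟨O⟩ ∝ b obeys ⟨O⟩ ∝ (1 - T/T_c)^{1/(α-2)}. In the b^2-dominated regime (the abelian Higgs case, and any model with α > 4) the effective power is α - 2 = 2, recovering the mean-field exponent 1/2.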
The most general model with a polynomial F(ψ) is F(ψ) = ψ^2 + Σ_i C_{α_i} ψ^{α_i}, with α_j > α_i > 2 for j > i. Again, the term ψ^2 is necessary for the existence of the second-order phase transition. It is not difficult to recognize a general feature: the higher-order extension terms are increasingly insignificant at the critical point. Therefore, for the study of universal properties, in addition to ψ^2 one only needs to keep the lowest-order correction term, C_{α_1} ψ^{α_1}. Hence, the complete analysis was already given in the previous subsection.
§5. Conclusions
In this paper, we analytically study the universal properties of the second-order phase transition for a general class of holographic superconductors, the Stückelberg model with a polynomial F(ψ). The desired characteristics can be revealed by a simple approach, namely, by matching the asymptotic and horizon solutions of the two essential scalar fields at an arbitrary point in between. The validity of this approach can be understood as owing to the presence of scale invariance at the critical point of the second-order phase transition. First, we can classify all the possible cases admitting a second-order phase transition. Moreover, we can derive the explicit expression of the critical exponent. For most cases, the second-order phase transition has the mean-field critical exponent 1/2. The essential exception is the case F(ψ) = ψ^2 + C_α ψ^α with 2 < α < 4. In this case, the critical exponent is (α - 2)^{-1}, which is always greater than the mean-field value. All our results are consistent with the numerical analysis. 9), 10) A significant weakness of our simple approach is that it cannot give very exact numerical coefficients, since those values depend on the choice of the matching point. Another type of analytic approach has been proposed, requiring an eigenvalue minimization while introducing a trial function. 14) It has been checked that this approach can give very accurate results for the numerical coefficients in many models, including an external magnetic field 12) and Gauss-Bonnet gravity. 15), 16) One should expect to improve the precision of the numerical coefficients in our results by considering such other analytic approaches.
"year": 2011,
"sha1": "9ff192458e217ca558a6eeba89ba67c705586098",
"oa_license": null,
"oa_url": "https://academic.oup.com/ptp/article-pdf/126/3/387/17572868/126-3-387.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "9ff192458e217ca558a6eeba89ba67c705586098",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
A New Method of Calculation of the Exact Weight of Wet Aggregates and Adjusted Weight of Water in Dosing and Production of Concrete
The author of this paper presents a new method for determining the exact, corrected mass of wet aggregates and of water in the making and production of concrete. In the existing procedure [4] for determining the mass of wet aggregates dosed into the mixer, an excess of water and a shortfall of aggregate enter the mixture, and this error grows as the aggregate moisture increases. Excess water in the mixture adversely affects the strength, resistance, and durability of the concrete. The author gives a new functional dependence (15) for calculating the wet aggregate mass (m_oa) in an iterative procedure with a pre-set accuracy defined by formula (14). The author also provides a second formula (16) for the direct calculation of the wet aggregate mass (m_oa) that is exact. The water mass (m_w) is then easily calculated using formula (17), once the wet aggregate mass has been calculated. This method of calculating the exact mass of the liquid and the exact mass of the solid components (mass of the material), according to the presented procedure and the new formulas (15), (16), and (17), is applicable to all types of mixtures used in the construction, food, and pharmaceutical industries.
INTRODUCTION
In factories, concrete is produced according to precise recipes by weighing the mass of each component: the cement mass (m_c), the mass of fully dry aggregate (m_oa), the mass of adjusted water (m*_w), and the mass of additives (m_ad). However, the aggregates are usually moist, hence there is a real problem in measuring the masses of dry aggregate and water. By measuring the mass of a certain volume of aggregate, we indirectly measure the mass of water present in the aggregate. If the humidity of the aggregate is known, we can calculate the mass of water in the aggregate. The current methodology [4] for calculating the amount of water present in the aggregate is not sufficiently precise. This means that neither the mass of water nor the mass of aggregate calculated and dosed into the mixer during concrete production is sufficiently accurate. As the humidity of the aggregate increases, the calculation error also increases. The new method of determining the exact masses of aggregate and water in concrete production, presented in this article, results from detailed observation of the physical process of wet aggregate dosing, which shows that the increased mass of aggregate cannot be calculated by simply multiplying the free moisture of the aggregate H_a (%) by the mass of aggregate (m_oa).
THE CURRENT WAY OF APPROXIMATE AND INSUFFICIENTLY PRECISE CALCULATION OF THE MASSES OF THE WATER AND AGGREGATES IN THE PROCESS OF WET AGGREGATE DOSING
As mentioned in the introductory part, simply multiplying the mass of dry aggregate (m_oa) by the humidity of the aggregate H_a (%) does not give sufficiently accurate results. The corrective mass of aggregate, calculated as the difference between the mass of the completely dry aggregate and the mass increased by the percentage of humidity, itself contains a certain mass of water, because the aggregate is moist. The mass of water contained in the corrected aggregate mass represents the size of the error, because the required mass of dry aggregate (according to the designed mixture of fresh concrete) has not been reached. This means that the mass of dry aggregate, relative to the recipe for fresh concrete, will be reduced by the mass of water contained in the corrected mass of the aggregate, while the mass of water will be increased relative to the recalculated water mass in the recipe.
where: m_oa1 is the part of the completely dry aggregate mass contained in the total mass of the moist aggregate (kg); Δm_aw1 is the part of the water mass in the total mass of the moist aggregate (kg); Δm_oa1 is the additional mass of (moist) aggregate resulting from the aggregate humidity H_a (kg). If there is no humidity (H_a = 0), then there is no water in the aggregate (Δm_aw1 = 0), so there is no need to add extra aggregate (Δm_oa1 = 0).
The developed form of Eq. (3) is Eq. (4), where: m_oa2 is the part of the dry aggregate mass in the additional moist aggregate mass Δm_oa2 (kg); Δm_aw2 is the part of the water mass in the additional moist aggregate mass Δm_oa2 (kg). The mass Δm_aw2 represents an error, because it is not accounted for in the water mass during the dosing of the aggregate. This means that, in concrete production, the mass of aggregate required in the dry state will be reduced by the mass Δm_aw2, and, likewise, the water mass in the concrete will be increased by the same mass Δm_aw2 relative to the recalculated water mass in the recipe.
For a better understanding of the physical state of water in a moist aggregate used in concrete production, Eq. (2) can be written in a developed form [Eqs. (5)-(6)], which brings us back to the starting Eq. (2). The mass m*_oa is the aggregate mass increased by the mass of water present in the aggregate, Δm_aw1; but the result obtained in this fashion is not sufficiently precise, because the mass added to the wet aggregate, Δm_oa1 = Δm_aw1 = (m_oa2 + m_aw2), itself carries the humidity H_a (%), and therefore contains the water mass Δm_aw2, as can be seen from Eq. (4). The current method of calculating the wet aggregate mass neglects the presence of the water Δm_aw2 in the corrected aggregate mass. The mass of aggregate increased by Δm_aw1 is calculated as m*_oa = m_oa(1 + H_a/100) [Eq. (7)]. Based on the presented analysis, we conclude that the dry-aggregate content actually delivered by the existing method falls short of the recipe value by the water mass Δm_aw2 contained in the added aggregate mass Δm_oa1, corresponding to the aggregate humidity H_a (%); this water mass is calculated according to formula (8). Using the existing method, the adjusted water mass m*_w is calculated by subtracting the water present in the aggregate from the projected water mass m_w, i.e., m*_w = m_w - m_oa·H_a/100 [Eq. (9)]. The water mass m*_w is thus reduced only by the water mass Δm_aw1, rendering the result insufficiently precise: the calculation omits the water mass Δm_aw2 (see Eq. (4)), so the water in the fresh concrete will exceed the designed amount by Δm_aw2.
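The magnitude of this systematic error can be made explicit. Writing H = H_a/100, the current method doses m*_oa = m_oa(1 + H) and accounts only for the water Δm_aw1 = m_oa·H, while (anticipating Eq. (16) below) the true water content of the exact wet mass m_oa/(1 - H) is m_oa·H/(1 - H). The water overdose of the current method is therefore

\[
\Delta m_w \;=\; \frac{m_{oa}\, H}{1 - H} \;-\; m_{oa}\, H \;=\; \frac{m_{oa}\, H^{2}}{1 - H},
\]

which, for the worked example given later (m_oa = 1935 kg, H_a = 7.9%), gives 1935 · 0.079² / 0.921 ≈ 13.11 kg, in agreement with the 13.112 kg reported in the comparative analysis.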
Figure 2. Graphic presentation of the current procedure for measuring the moist aggregate mass m*_oa, which consists of the dry aggregate mass and the water mass in the aggregate Δm_aw1; the aggregate mass must therefore be increased by the water mass present in the aggregate, Δm_aw1 = m_oa2 + m_aw2, where the value Δm_aw2 represents the fault in this procedure, because it ignores the fact that the corrective addition to the aggregate is itself wet (i.e., it ignores the water mass Δm_aw2).

The current method of calculating the aggregate and water masses during dosing into a concrete mixer would be correct only if two conditions were met. The first condition is that Δm_aw1 = 0, which means that the aggregate humidity equals zero; in practice, the aggregate would have to be completely dry at the time of dosing into the mixer, which is not feasible. The second condition is that Δm_aw2 = 0, which means that the aggregate mass added as Δm_oa1 and fed into the mixer would also have to be completely dry.
This leads us to the conclusion that the current method of calculating the aggregate and water masses in concrete production is not sufficiently accurate or precise. The error of this method increases as the aggregate humidity H_a (%) increases; the smaller the aggregate humidity, the smaller the error.
A NEW METHOD OF DESIGNING THE ACCURATE AGGREGATE AND WATER MASSES IN THE CONCRETE PRODUCTION
The purpose of designing a fresh concrete mixture and determining the cement, aggregate, water, and additive masses is the production of a concrete mixture of predetermined quality (class) C fck/fck,cube.
The individual aggregate fractions have different humidity levels H_a (%) and bulk densities. The mass of an individual fraction is determined by multiplying the total volume of aggregate (V_a) by the granulometric share of that fraction (Δ_pi) and by its bulk density γ_z,ai, i.e., m_ai = V_a · Δ_pi · γ_z,ai, as illustrated in the sketch below. In order to present the problem of calculating the wet aggregate and water masses in concrete production more clearly and simply, the author analyses one type of aggregate and one fraction with a single (uniform) humidity H_a (%).
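A short illustration of the per-fraction relation above; all numerical inputs (volume, shares, bulk densities) are made-up values for the sketch, not data from the paper.

```python
# Sketch of the per-fraction mass computation m_ai = V_a * Delta_pi * gamma_z_ai.

V_a = 0.72  # total aggregate volume per batch, m^3 (hypothetical)

# fraction -> (granulometric share Delta_pi, bulk density gamma_z_ai in kg/m^3)
fractions = {
    "0/4 mm":  (0.45, 1750.0),
    "4/8 mm":  (0.25, 1650.0),
    "8/16 mm": (0.30, 1600.0),
}

for name, (share, bulk_density) in fractions.items():
    m_ai = V_a * share * bulk_density  # mass of this fraction, kg
    print(f"{name}: {m_ai:.1f} kg")
```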
In the process of concrete production, it is necessary to take into account the humidity of each aggregate fraction, especially the first fraction with the smallest grains (0/4 mm). The smaller the aggregate grain, the greater its specific surface area (S, cm²/g). Accordingly, the humidity H_a (%) of the fine granules can be high, because they have a large specific surface area. Similarly, the water absorption u (%) is higher for smaller grains, because absorption proceeds uniformly at the surface and in the central part of such particles: the path is shorter and the resistance to absorption smaller.
The humidity H_a (%) of the first fraction can reach 10% (or even more than 20%). The mass of water held around the aggregate grains must be accurately calculated and subtracted from the water mass calculated as required for concrete production.
The wet aggregate mass fed into the mixer must be increased relative to the calculated mass, because the recipe mass assumes completely dry aggregate. To explain the physical aspect of the problem, the calculation of the actual aggregate mass to be dosed in concrete production will first be conducted iteratively. At the end, we give a new equation for this iterative process, as well as a new equation for the direct calculation of the aggregate and water masses without iteration.
The iterative procedure is performed until the value of the additional aggregate mass Δm_oa1 is approximately equal to zero, because then the water mass Δm_aw1 in the newly added aggregate is also approximately zero, which is necessary to avoid dosing errors in delivering the dry aggregate. In the same way, errors due to the increased water mass are avoided.
where: m**_oa is the increased, exact aggregate mass (increased by the mass of water present in the aggregate, because the aggregate is moist with moisture level H_a (%)); m_oa is the mass of the completely dry aggregate calculated in the concrete recipe for the required concrete class C fck/fck,cube. This illustration clearly shows that each corrective term of the equation again brings water in along with its added aggregate mass, which means that, by that water mass, the delivered mass of the aggregate fraction is reduced.
As a contribution to concrete production technology, the author provides an iterative method for dosing the aggregate and water, which leads to a dosing error that tends toward zero. The number of iterative steps is determined by the pre-set accuracy: the iteration is continued until (H_a/100)^k ≤ q, i.e., k ≥ ln q / ln(H_a/100) [formula (14)], where: k is the number of addends for a given accuracy q (k is adopted as an integer); q is the required accuracy (for accuracy to the fifth decimal place, q = 0.00001); m_oa is the aggregate mass in the completely dry state to be dosed into the mixer according to the concrete recipe (kg); H_a is the humidity of the individual aggregate fraction, entering through ln(H_a/100). The method described above reduces to the general form m**_oa = m_oa · Σ_{i=0..k} (H_a/100)^i [formula (15)], where m**_oa is the increased, exact aggregate mass at the current humidity that should be dosed into the concrete (the aggregate mass increased by the water mass present in the aggregate).
The aggregate mass m**_oa is thus calculated with the required accuracy, determined by k.
The author also gives a direct method for calculating the aggregate mass m**_oa: m**_oa = m_oa / (1 - H_a/100), in kg [formula (16)]. The same or a very similar result is obtained by the direct method, Eq. (16), and by the iterative procedure, Eq. (15); the agreement between the results of Eqs. (16) and (15) depends on the required accuracy q (q = 1/10^n, 2 < n < 10).
For practical use, we recommend the direct method, Eq. (16).
After determining the wet aggregate mass, one proceeds to the calculation of the water mass: m**_w = m_w - (m**_oa - m_oa) [formula (17)], where: m**_w is the reduced, exact water mass to be dosed into the mixer in the production of the fresh concrete mixture; m_w is the water mass calculated in the recipe for the required concrete class under the assumption that the aggregate is completely dry.
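A minimal numerical sketch of the proposed procedure (and of the current method, for comparison) follows. It implements the direct form of Eq. (16), the adjusted water of Eq. (17), and an iterative summation in the spirit of Eqs. (14)-(15); the stopping rule (a relative tolerance q) is our reading of formula (14). The input values reproduce the worked example of the next subsection.

```python
# Corrected dosing computation: iterative form (Eq. (15)), direct form
# (Eq. (16)), adjusted water (Eq. (17)), and the current approximate method.

def wet_aggregate_iterative(m_oa: float, H_a: float, q: float = 1e-5) -> float:
    """Sum the geometric series m_oa * (1 + H + H^2 + ...) until the next
    addend falls below the relative tolerance q (our reading of Eq. (14))."""
    H = H_a / 100.0
    total, addend = m_oa, m_oa * H
    while addend / m_oa > q:
        total += addend
        addend *= H
    return total

def wet_aggregate_direct(m_oa: float, H_a: float) -> float:
    """Closed form of the same series: m_oa / (1 - H_a/100), Eq. (16)."""
    return m_oa / (1.0 - H_a / 100.0)

m_oa, m_w, H_a = 1935.0, 174.0, 7.9           # values from the worked example

m_oa_exact = wet_aggregate_direct(m_oa, H_a)   # ~2100.98 kg
m_w_exact = m_w - (m_oa_exact - m_oa)          # Eq. (17): ~8.02 kg

m_oa_current = m_oa * (1 + H_a / 100.0)        # current method: ~2087.87 kg
m_w_current = m_w - m_oa * H_a / 100.0         # ~21.14 kg

print(f"exact wet aggregate: {m_oa_exact:.2f} kg, water: {m_w_exact:.2f} kg")
print(f"water overdose of current method: {m_w_current - m_w_exact:.3f} kg")  # ~13.112
```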
Comparative Analysis of the Results Obtained in Three Ways
A practical example of the calculation for concrete production is analyzed, in which the dry aggregate mass and the water mass are calculated to be m_oa = 1935 kg and m_w = 174 kg, respectively. The aggregate moisture at the stage of production and dosing into the mixer is H_a = 7.9%. Here: m*_oa is the increased wet aggregate mass to be dosed into the mixer using the current method; m**_oa is the increased, exact wet aggregate mass to be dosed into the mixer using the new method presented above; m*_w is the reduced water mass to be dosed into the mixer using today's approximate procedure; m**_w is the reduced water mass obtained by the new, exact procedure, to be used in concrete production (reduced by the quantity of water contained in the aggregate).
There are differences between the aggregate and water masses obtained by the two methods at an aggregate humidity of H_a = 7.9%. In particular, the water content is higher in the concrete produced by the current method. The increase of the water mass is Δm_w = m*_w - m**_w = 13.112 kg. The increase of the water mass in concrete production under the existing approximate method, relative to the newly proposed exact method, is thus 7.535%. This increase of the water mass negatively affects the quality of the concrete: it reduces the mechanical properties of the concrete, increases its porosity, and reduces its resistance to low temperatures and chemical attack [2, 3]. The increase in water affects concrete quality even more adversely than the accompanying reduction of the aggregate mass. The author believes that this work puts forward sufficient reasons to urgently consider introducing the new formulas, exact and iterative, for calculating the wet aggregate and water masses during dosing into the mixer.
CONCLUSIONS
The existing method [4] of calculating the wet aggregate mass according to Eq. (1) in the process of concrete production is approximate, because it neglects the fact that even the corrected, enlarged mass of wet aggregate contains water (Δm_aw2), which represents a dosing error. Because of this error, a reduced aggregate mass and an increased water mass are dosed into the mixture during concrete production. Depending on the aggregate humidity, the error can adversely affect concrete quality, because the mixture receives an increased water mass, which reduces the mechanical properties and increases the porosity of the concrete. The author provides a new method [1] of calculating the exact aggregate mass m**_oa using an iterative procedure (15) or a direct procedure (16); the water mass m**_w is then precisely calculated using Eq. (17). At the end of the paper, a comparative example of calculating the wet aggregate and mixing water masses is given, under the assumption of a relatively high aggregate humidity, H_a = 7.9%. The method still in use, compared with the proposed new method, yields an error: an increase in the amount of water (Δm_w = 13.112 kg), which represents an increase of 7.535% relative to the calculated water mass required for this concrete class. This error is large enough to adversely affect concrete quality, as already pointed out. The presented method of calculating the exact mass of the liquid and the exact mass of the solid components (mass of the material), according to the presented procedure and the new Eqs. (15), (16), and (17), is applicable to all types of mixtures used in the construction, food, and pharmaceutical industries.
"year": 2019,
"sha1": "ca0dff2049999b107ae2df0eb1baab8459fbd3df",
"oa_license": "CCBY",
"oa_url": "https://hrcak.srce.hr/file/329387",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9f7401efefd857993f160ae7100b2e69df113b60",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
Pushing Moral Buttons: The Interaction Between Personal Force and Intention in Moral Judgment
In some cases people judge it morally acceptable to sacrifice one person's life in order to save several other lives, while in other similar cases they make the opposite judgment. Researchers have identified two general factors that may explain this phenomenon at the stimulus level: (1) the agent's intention (i.e., whether the harmful event is intended as a means or merely foreseen as a side-effect) and (2) whether the agent harms the victim in a manner that is relatively "direct" or "personal." Here we integrate these two classes of findings. Two experiments examine a novel personalness/directness factor that we call personal force, present when the force that directly impacts the victim is generated by the agent's muscles (e.g., in pushing). Experiments 1a-b demonstrate the influence of personal force on moral judgment, distinguishing it from physical contact and spatial proximity. Experiments 2a-b demonstrate an interaction between personal force and intention, whereby the effect of personal force depends entirely on intention. These studies also introduce a method for controlling for people's real-world expectations in decisions involving potentially unrealistic hypothetical dilemmas.
What explains this pattern of judgment? Neuroimaging (Greene et al., 2001, 2004), lesion (Ciaramelli et al., 2007; Koenigs et al., 2007; Mendez et al., 2005), and behavioral (Bartels, 2008; Greene et al., 2008; Valdesolo & DeSteno, 2006) studies indicate that people respond differently to these two cases because the action in the footbridge dilemma elicits a stronger negative emotional response. But what features of this action elicit this response? Recent studies implicate two general factors. First, following Aquinas (unknown/2006), many appeal to intention and, more specifically, the distinction between harm intended as a means to a greater good (as in the footbridge dilemma) and harm that is a foreseen but "unintended" side-effect of achieving a greater good (as in the switch dilemma) (Cushman et al., 2006; Hauser et al., 2007; Mikhail, 2000; Schaich Borg et al., 2006). Second, many studies appeal to varying forms of "directness" or "personalness," including physical contact between agent and victim (Cushman et al., 2006), the locus of intervention (victim vs. threat) in the action's underlying causal model (Waldmann & Dieterich, 2007), whether the action involves deflecting an existing threat (Greene et al., 2001), and whether the harmful action is mechanically mediated (Moore et al., 2008; Royzman & Baron, 2002). The aim of this paper is to integrate these two lines of research.
We present two experiments examining a directness/personalness factor that we call personal force. An agent applies personal force to another when the force that directly impacts the other is generated by the agent's muscles, as when one pushes another with one's hands or with a rigid object. Thus, applications of personal force, so defined, cannot be mediated by mechanisms that respond to the agent's muscular force by releasing or generating a different kind of force and applying it to the other person. Although all voluntary actions that affect others involve muscular contractions, they do not necessarily involve the application of personal force to another person. For example, firing a gun at someone or dropping a weight onto someone by releasing a lever does not involve the application of personal force, because the victims in such cases are directly impacted by a force that is distinct from the agent's muscular force, i.e., by the force of an explosion or of gravity. The cases of direct harm examined by Royzman and Baron (2002) are not so direct as to involve the application of personal force. The direct/indirect distinction described by Moore and colleagues (2008) is similar to the distinction drawn here between personal and impersonal force, but Moore and colleagues do not systematically distinguish between physical contact and personal force.
Experiments 1a-b aim to document the influence of personal force, contrasting its effect with those of physical contact (1a-b) and spatial proximity (1a) between agent and victim. Experiment 1a also introduces a method for controlling for effects of unconscious realism, i.e., a tendency to unconsciously replace a moral dilemma's unrealistic assumptions with more realistic ones. ("Trying to stop a trolley with a person is unlikely to work.") Experiments 2a-b examine the interaction between personal force and intention. More specifically, we ask whether the effect of personal force depends on intention and vice versa.
Experiment 1a
We compared four versions of the footbridge dilemma to isolate the effects of spatial proximity, physical contact, and personal force on moral judgments concerning harmful actions.We also tested the unconscious realism hypothesis by controlling for subjects' real-world expectations.
The mean age was 31. Subjects were recruited anonymously in public venues in New York City and Boston. Subjects were paid $3.
Design, materials, and procedure
Subjects responded to one of four versions of the footbridge dilemma in a between-subject design, indicating the extent to which the proposed action is "morally acceptable." In the standard footbridge dilemma (n = 154, Figure 1a), the agent (named Joe) may save the five by pushing the victim off the footbridge using his hands. This action involves spatial proximity, physical contact, and personal force. In the remote footbridge dilemma (n = 82, Figure 1d), Joe may drop the victim onto the tracks using a trap door and a remote switch. This action involves none of the three aforementioned factors. The footbridge pole dilemma (n = 72, Figure 1b) is identical to the standard footbridge dilemma except that Joe uses a pole rather than his hands to push the victim. This dilemma involves spatial proximity and personal force without physical contact. The footbridge switch dilemma (n = 160, Figure 1c) is identical to the remote footbridge dilemma except that Joe and the switch are adjacent to the victim. This dilemma involves spatial proximity without physical contact or personal force. Comparing remote footbridge to footbridge switch isolates the effect of spatial proximity. Comparing standard footbridge to footbridge pole isolates the effect of physical contact. Comparing footbridge switch to footbridge pole isolates the effect of personal force. The text of the standard footbridge dilemma is as follows: An empty runaway trolley is speeding down a set of tracks toward five railway workmen. There is a footbridge above the tracks in between the runaway trolley and the five workmen. On this footbridge is a railway workman wearing a large, heavy backpack. If nothing is done, the trolley will proceed down the main tracks and cause the deaths of the five workmen. (See diagram below.) It is possible to avoid these five deaths. Joe is a bystander who understands what is going on and who happens to be standing right behind the workman on the footbridge. Joe sees that he can avoid the deaths of the five workmen by pushing the workman with the heavy backpack off of the footbridge and onto the tracks below. The trolley will collide with the workman, and the combined weight of the workman and the backpack will be enough to stop the trolley, avoiding the deaths of the five workmen. But the collision will cause the death of the workman with the backpack.
Note: Joe cannot avoid the deaths of the five workmen by jumping himself because he is not heavy enough to stop the trolley.There is also not enough time to remove the backpack from the workman.
Is it morally acceptable for Joe to push the workman off of the footbridge in order to avoid the deaths of the five workmen, causing the death of the single workman instead? Subjects answered (YES/NO) and rated the moral acceptability of the action on a nine-point scale. The above text was accompanied by a diagram (Figure 1a). Similar text and diagrams (Figures 1c-d and 3) were used for the other dilemmas, with changes reflecting the experimental manipulations. Complete materials are available at [url].
The instructions acknowledged that the dilemmas were not necessarily realistic and requested that subjects "suspend disbelief." Data from 31 (of 664) subjects who reported being unable or unwilling to suspend disbelief ("conscious realists") were excluded from analysis, as were data from 10 subjects reporting confusion.
To control for unconscious realism, we asked subjects (after they responded to the dilemma) to report on their real-world expectations concerning the likely consequences of Joe's actions. Subjects estimated the likelihood (0-100%) that the consequences of Joe's action would be (a) as described in the dilemma (five lives saved at the cost of one), (b) worse than this, or (c) better than this. These estimates (respectively labeled PLAN, WORSE, and BETTER) were modeled as covariates. The predictive value of these variables indicates the extent to which subjects' judgments may reflect unconscious realism.
Data were analyzed using a general linear model. Here and in Experiment 2a, the three "realism covariates" and gender were included as first-order covariates and allowed to interact with the dilemma variable.
In Experiment 2a these factors were allowed to interact with both main effects and the interaction of interest. Because the realism covariates are likely correlated, this analysis is adequate to control for their collective effects but inadequate to resolve their respective contributions.
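For concreteness, the model specification described above can be sketched as follows. This is not the authors' analysis script; the data file and column names (rating, dilemma, PLAN, WORSE, BETTER, gender) are assumptions for illustration.

```python
# Sketch of the general linear model: acceptability ratings predicted from
# dilemma version, with the three realism covariates and gender entered as
# covariates and allowed to interact with the dilemma factor.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("experiment1a.csv")  # hypothetical file: one row per subject

model = smf.ols(
    "rating ~ C(dilemma) * (PLAN + WORSE + BETTER + C(gender))",
    data=df,
).fit()
print(model.summary())

# Type-III-style F tests per term, analogous to the reported F statistics
# (interpretation of typ=3 depends on the contrast coding used)
print(sm.stats.anova_lm(model, typ=3))
```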
(See Figure 2.) There was a significant main effect of WORSE (F(1, 417) = 5.80, p = .02), with actions expected to be less successful eliciting lower moral acceptability ratings, consistent with unconscious realism. There were no significant effects of PLAN, BETTER, gender, or higher-order covariates (p > .05). These results indicate that harmful actions involving personal force are judged to be less morally acceptable. Moreover, they suggest that spatial proximity and physical contact between agent and victim have no effect and that a previously reported effect of physical contact (Cushman et al., 2006) is in fact an effect of personal force. In all four of the dilemmas examined in this study, the harmful event is intended as a means to achieving the agent's goal, raising the possibility that the effect of personal force is limited to cases in which the harm is intended as a means. Experiments 2a-b examine the interaction between personal force and intention.
Experiment 1b
To ensure that the results concerning personal force and physical contact observed in Experiment 1a generalize to other contexts, we conducted an additional experiment using a different set of moral dilemmas, as well as a different rating scale.
Experiment 2a
This experiment examined the independent effects of personal force and intention and, most critically, their interaction, by comparing four dilemmas using a 2 (personal force absent vs. present) x 2 (means vs. side-effect) design.
Method
Methods follow Experiment 1a unless otherwise noted.
Design, materials, and procedure
Each subject responded to one of four dilemmas. In the loop dilemma (Hauser et al., 2007; Mikhail, 2000; Thomson, 1985; Waldmann & Dieterich, 2007), Joe may save the five by turning the trolley onto a looped side-track that reconnects with the main track at a point before the five people (n = 152, Figure 3a). There is a single person on the side-track who will be killed if the trolley is turned, but who will prevent the trolley from looping back and killing the five. Here the victim is harmed as a means (i.e., intentionally), but without the application of personal force.
The loop weight dilemma (Hauser et al., 2007; Mikhail, 2000) is identical to the loop dilemma except that a heavy weight positioned behind the victim on the side-track, rather than the victim, stops the trolley (n = 74, Figure 3b). Here the victim is killed as a side-effect (i.e., without intention) and, again, without the application of personal force. In the obstacle collide dilemma, the victim is positioned on a high and narrow footbridge in between Joe and a switch that must be hit in order to turn the trolley and save the five (n = 70, Figure 3c). To reach the switch in time, Joe must run across the footbridge, which will, as a side-effect, involve his colliding with the victim, knocking him off the footbridge and to his death. Thus, this dilemma involves personal force, but not intention. The obstacle push dilemma (n = 70) is identical to the obstacle collide dilemma except that Joe must push the victim out of the way in order to get to the switch.
Although the victim is not used to stop the trolley, Joe performs a distinct body movement (pushing) that is both harmful and necessary for the achievement of the goal.Thus, this dilemma involves the application of personal force that is intentional.
There was no significant effect of BETTER or of other higher-order covariates (p > .05).
Discussion
In two sets of experiments, harmful actions were judged to be less morally acceptable when the agent applied personal force to the victim. In Experiments 1a-b the effect of personal force was documented and distinguished from effects of physical contact (Cushman et al., 2006) and spatial proximity (1a only), which were not significant. Experiments 2a-b revealed that personal force interacts with intention, such that the personal force factor only affects moral judgments of intended harms, while the intention factor is enhanced in cases involving personal force. Put simply, something special happens when intention and personal force co-occur.
(We note that all key results held using categorical (YES/NO) judgments when they were collected.) In Experiments 2a-b, personal force exhibited no effect in the absence of intention, a striking result in light of Experiments 1a-b and previous work. In Experiment 2a, the action in the obstacle collide dilemma was judged to be as acceptable as those in the loop and loop weight dilemmas, despite the fact that obstacle collide, unlike the other two dilemmas, involves direct harm (Moore et al., 2008; Royzman & Baron, 2002), physical contact (Cushman et al., 2006), harm not caused by the deflection of an existing threat (Greene et al., 2001), and an alteration of the victim's causal path (Waldmann & Dieterich, 2007). (One may interpret Waldmann & Dieterich as assuming that victim interventions are necessarily intended, in which case this result is consistent with their theory.) Experiment 2b showed that this finding generalizes to several additional dilemma contexts, strongly suggesting that the effect of personal force is limited to cases involving harm as a means.
Experiments 2a and 2b also demonstrate that the effect of the intention factor on moral judgment is enhanced in cases involving personal force, and Experiment 2a found no effect of intention in the absence of personal force, suggesting that intention operates only in conjunction with other factors such as, but not necessarily limited to, personal force. Our finding of equivalence between the loop (intentional harm) and loop weight (harmful side-effect) dilemmas directly contradicts some earlier findings (Hauser et al., 2007; Mikhail, 2000), but is consistent with other earlier findings (Waldmann & Dieterich, 2007).
Following Waldmann & Dieterich, we attribute the effects observed by Hauser et al. (2007) and Mikhail (2000) to a confound whereby the loop dilemma, but not the loop weight dilemma, refers to the victim as a "heavy object" ("There is a heavy object on the side track… The heavy object is 1 man…" vs. "There is a heavy object on the side track… There is 1 man standing on the side track in front of the heavy object…").
The statistical significance of the "unconscious realism" covariates included in Experiments 1a and 2a provides limited support for the unconscious realism hypothesis. This support is limited for at least two reasons. First, subjects' assessments of the likely real-world effects of the actions in question may be post-hoc rationalizations (Haidt, 2001).
Second, a correlation between real-world expectations and moral judgments is not sufficient to establish a causal relationship.
Nevertheless, these results indicate that effects of unconscious realism may be real and that researchers who use hypothetical cases to study decision-making should consider controlling for such effects as done here.
One might wonder why the actions judged to be more acceptable in Experiment 1a (footbridge switch and remote footbridge) received comparable ratings (~5) to the action judged to be less acceptable in Experiment 2a (obstacle push). First, in considering why the footbridge switch and remote footbridge dilemmas received relatively low ratings, we speculate that this may be due to the fact that the actions in these dilemmas involve dropping the victim onto the tracks, constituting an additional intentional harm (Mikhail, 2007). Second, in considering why the ratings for the obstacle push dilemma are relatively high, we suggest that this may be due to the fact that the action in the obstacle push dilemma, while involving a distinct body movement that is harmful and necessary for the achievement of the goal, does not involve using the victim, as in the four footbridge dilemmas. Each of these hypotheses will be explored in future work.
The latter hypothesis highlights more general open questions concerning the scope of agents' intentions (Bennett, 1995). In the obstacle push dilemma, the pushing is necessary, but the consequent harm, strictly speaking, is not. This observation raises parallel questions about more paradigmatic cases of intentional harm. For example, one might claim that even in the standard footbridge dilemma the harm is unintentional because the agent merely intends to use the victim's body to stop the trolley, harming him only as a foreseen side-effect of doing this.
These observations highlight the need for a theory of intentional event segmentation (Zacks & Tversky, 2001).
Other open questions concern the proper characterization of personal force: Must it be continuous (as in pushing), or may it be ballistic (as in throwing)? Is pulling equivalent to pushing? We acknowledge, more broadly, that the effects documented here under the rubric of "personal force" may ultimately be refined and reinterpreted. For example, alternative interpretations may focus on the potential for dynamic interaction between agent and victim.
Finally, we consider the significance of our finding that personal force and intention interact: Why is it that the combined presence of personal force and intention pushes our moral buttons? The codependence of these factors suggests a system of moral judgment that operates over an integrated representation of goals and personal force, such as "goal-within-the-reach-of-muscle-force." In a general sense, this suggests a mechanism of moral judgment that is a species of embodied cognition (Gallese, Keysers, & Rizzolatti, 2004; Lakoff & Johnson, 1999; Prinz, 2002; Wilson, 2002). One natural source of such embodied goal representations is a system of action planning that coordinates the application of personal force to objects to achieve goal-states for those specific objects. A putative sub-system of moral judgment, monitoring such action plans, might operate by rejecting any plan that entails harm as a goal-state (Mikhail, 2000, 2007) to be achieved through the direct application of personal force. We propose this "action-planning" account of the present results as an important area for further research.
At a more general level, the present study strongly suggests that our sense of an action's moral wrongness is tethered to its more basic motor properties, and specifically that the intention factor is intimately bound up with our sensitivity to personal force. This perspective contrasts with at least some versions of the "universal moral grammar" perspective (Hauser, 2006; Mikhail, 2000, 2007), according to which the present moral judgments depend on goal representations of the kind one might find in a legal system, leaving little room for an 'embodied' representation involving personal force. It also presents a challenge to philosophical theories that endorse the doctrine of double effect (i.e. the intention factor) on the basis of its intuitive plausibility (Aquinas, unknown/2006; Fischer & Ravizza, 1992). Will they bless its shotgun marriage to a normatively ugly bride: the doctrine of personal force?
Figure 1. Diagrams for the (a) standard footbridge dilemma (physical …)
Figure 2. Results of Experiment 1: Moral acceptability ratings for four …
Figure 3. Diagrams for the (a) loop dilemma (means, no personal force), …
Figure 4. Results of Experiment 2: Moral acceptability ratings for four … | 2014-10-01T00:00:00.000Z | 2009-06-01T00:00:00.000 | {
"year": 2009,
"sha1": "d20eb29f6cc4d4ecb323f2afbf9603bb06918424",
"oa_license": "CCBY",
"oa_url": "https://dash.harvard.edu/bitstream/1/4264763/2/Greene_MoralButtons.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "da5391d3cc9ac99be70aa534c5b21c55edc647d5",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
229633709 | pes2o/s2orc | v3-fos-license | Milling yield components of local dryland rice varieties
ABSTRACT Family farmers in the western Santa Catarina state, Brazil, have conserved local dryland rice varieties. However, the literature lacks data about the milling performance of these varieties, as well as about the effects of genotype, environment and genotype x environment interaction. The current study aimed to evaluate the milling yield, as well as whole, broken, white-belly and chalky grains, in experiments designed in complete randomized blocks, with four replications, in two sites. The rates for milling yield and whole and broken grains were, respectively, 57.93-69.90 %, 38.73-66.0 % and 3.40-22.15 %, with 15 local varieties reaching values similar to those recorded for modern dryland rice varieties. The Anchieta county (origin of the varieties) recorded the highest values for milling yield and whole grain. The incidence rates for white-belly and chalky grains were, respectively, 0.10-8.68 % and 0.02-3.12 %. Significant differences (p ≤ 0.05) were observed for genotype, environment and genotype x environment interaction effects concerning the milling yield. For whole and broken grains, the differences were significant (p ≤ 0.05) for genotype and environment, but not significant for the genotype x environment interaction. For white-belly and chalky grains, the differences were significant (p ≤ 0.05) for genotype and genotype x environment interaction, but not significant for environment. For all the studied milling yield variables, differences were observed in the varieties' stability, and a stability ranking was established.
INTRODUCTION
Family farmers in the western Santa Catarina state, Brazil, traditionally grow local dryland rice varieties.
Milling quality is a determining factor in characterizing a rice variety's potential, while milling yield - based on the sum of whole and broken grains after husking and polishing, not counting husk and bran - is one of the main industrial quality aspects of rice varieties. Another fundamental industrial quality aspect lies in the whole and broken grain yield. Whole and broken grain rates affect the classification and market value of the product itself, as well as of its by-products (Castro et al. 1999, Brasil 2012). Broken grains are defined as the husked and polished piece of rice grain that corresponds to less than three quarters of the minimum length of the prevalent class (Brasil 2009). In addition, husked and polished whole grains are classified by their dimensions as long thin, long, medium, short and mixed (Brasil 2012).
Several biotic and abiotic stress types, such as hydric variation amplitudes, pest attacks and diseases, among others (Castro et al. 1999), may affect the whole grain yield. Rice varieties present different genetic potentials for whole-grain production, due to their susceptibility to present cracks or fissures, when they are subjected to climatic variations during the grain maturation process. Relative humidity and temperature are the main climate elements influencing the crack formation in rice grains (Santos 2012). Grain humidity at the harvesting point and quality of grain-drying processes also affect the whole grain yield. Water levels of 18.5-20.6 % in rice grains were reported to enable the highest whole grain values (Smiderle & Dias 2008).
Another aspect of milling quality lies on the incidence of chalky grains, defined as "peeled and polished grains, either whole or broken, which present opaque color similar to that of chalk" (Brasil 2009). Aspects such as the genetic quality of cultivars, cultural treatments, environmental conditions and harvest humidity influence the incidence of chalky grains.
The white-belly incidence has commercial relevance (Santos 2012), although it is not classified as a defect (Brasil 2012), and it does not affect the chemical and nutritional quality of rice grains. White-belly grains present plastering in the peripheral part of the endosperm (Santos 2012).
The milling yield and industrial quality of rice grains are quantitative parameters ruled in a complex way. Thus, it is necessary to understand the effects of genotype, environment and genotype x environment interaction on the expression of these parameters at the time to evaluate genotypes and breeding processes (Facchinelo 2017). Kibanda & Luzi-Kihupi (2007) have reported the scarcity of published data on genotype x environment interactions and milling yield features, mainly with respect to dryland genotypes. Similarly, the literature lacks data on the stability of local rice varieties investigated in the current study. Stability is herein understood, based on Mariotti et al. (1976), as the ability of a given genotype to present highly predictable performance under different environmental conditions.
Thus, the present study aimed to analyze the main milling yield variables of local rice dryland varieties grown in the western Santa Catarina state, as well as their genotype x environment interactions and stability, in two different experimental sites.
MATERIAL AND METHODS
Experiments were conducted in a small-scale farm in the Anchieta county, as well as at the experimental farm of the Universidade Federal de Santa Catarina (UFSC), in Florianópolis, Santa Catarina state, Brazil. Anchieta is located in southern Brazil, in the far west microregion (IBGE 2010) (26º30'53.93"S, 53º18'44.97"W and altitude of 717 m). This region has a humid mesothermal climate (Cfa - Köppen), mean annual temperature of 17.8 ºC and annual rainfall rates of 1,700-2,000 mm (IBGE 2010). The experimental farm in Florianópolis (27º41'06.28"S, 48º32'38.81"W and altitude of 5 m), according to the Köppen climatic classification, is located in a subtropical sub-region, which presents a constantly humid subtropical climate, no dry season, hot summer and mean annual temperature of 20.1 ºC. The rainfall rate often ranges from 1,270 to 1,600 mm, on a yearly basis (IBGE 2010). The soil in the Anchieta farm is classified as a Cambisol, with wavy and stony terrain, whereas the soil in the experimental farm is classified as a hydromorphic quartz soil.
The local rice varieties investigated in the present study were collected by the agrobiodiversity research group of the Universidade Federal de Santa Catarina (NEABio/UFSC), together with farmers from Anchieta and Guaraciaba, counties in the far west of the Santa Catarina state, from 2012 to 2014. These varieties are part of the UFSC rice germplasm bank, where they are currently identified and preserved. They were selected for these experiments based on morphological and phenological dissimilarities of populations evaluated by Pinto et al. (2019). A total of thirty-four varieties were selected to represent - with more than 95 % accuracy - the genetic diversity observed in the two counties, based on the following features: number of tillers, stem thickness and panicle length. Pinto et al. (2019) had previously classified these varieties into four different classes, taking into consideration their grain size (Brasil 2012) (Table 1). IPR117 (Iapar/PR), which is the only dryland rice variety with commercial seed production and recommendation for cultivation in southern Brazil (long thin class), was used as a statistical control variety.
The experiments conducted in both environments followed the technology mostly used by farmers in the region, which comprises tractorization-based soil preparation, total organic fertilization with chicken manure at planting time (calculated and performed based on soil analysis carried out at each site) and manual weeding to control invasive plants. No pest or disease control was performed.
The experiments followed a complete randomized block design, with four replications at each site. Each experimental unit comprised four 3.0-m rows, spaced 0.34 m apart, and planted with 55 plants m -1 . The two central meters of the two middle rows were considered the useful plot, forming a useful area of 1.36 m², with approximately 220 plants (approximate density of 1.6 million plants ha -1 ).
The grains were harvested at the useful plots by manually threshing the plants' panicles. After the grain weighing procedure, the humidity was measured with an instant electronic measuring device (Dickey-john Multigrain model). All the harvested plots, at both sites, presented a field moisture of 18-22 %. The harvested grains were dried in an oven, at a temperature of 42-45 ºC, until they reached a moisture content of 11.8-13.6 %. A dry grain sample (100 g) was selected from each experimental unit and processed in a Suzuki testing mill. The equipment had been calibrated for long thin grains in March 2019. The milling yield and whole and broken grain rates were recorded. After each sample was processed, the broken grains were subjected to visual evaluation in trough equipment (Brasil 2009). Varieties presenting mid-sized grains (19, 54, 59 and 60) required the manual separation of broken grains in the trough and their re-weighing, in order to define the total weight of whole grains. Affected grains in three samples, comprising 100 whole grains from each plot, were manually counted, to enable the analysis of white-belly and chalky grain rates. White-belly and chalky grains were weighed using an electronic scale (0.1 g accuracy) and the rate of each type was calculated relative to the total percentage of whole grains.
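To make the yield bookkeeping concrete, a minimal sketch of how the milling variables can be computed from the weighed fractions of a 100 g dry sample is shown below; the sample masses used are hypothetical, not values from these experiments.

```python
# Minimal sketch: milling variables from weighed fractions of a dry sample.
# All masses in grams; the example values are hypothetical.
sample_mass = 100.0        # dry grain sample processed in the testing mill
whole_mass = 55.3          # whole polished grains recovered
broken_mass = 12.7         # broken polished grains recovered
white_belly_mass = 1.3     # whole grains showing white-belly
chalky_mass = 0.2          # whole grains showing chalkiness

milling_yield = (whole_mass + broken_mass) / sample_mass * 100  # husk/bran excluded
whole_rate = whole_mass / sample_mass * 100
broken_rate = broken_mass / sample_mass * 100
# White-belly and chalky rates are expressed relative to the whole grains.
white_belly_rate = white_belly_mass / whole_mass * 100
chalky_rate = chalky_mass / whole_mass * 100

print(f"milling yield: {milling_yield:.2f} %")
print(f"whole: {whole_rate:.2f} %, broken: {broken_rate:.2f} %")
print(f"white-belly: {white_belly_rate:.2f} %, chalky: {chalky_rate:.2f} %")
```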
A joint analysis of variance was performed based on the mixed mathematical-statistical model by Searle et al. (1992): Y_ijk = m + G_i + B/A_jk + A_j + GA_ij + E_ijk (with a fixed effect for environments and a random effect for genotypes and the other effects of the model), wherein: Y_ijk is the phenotypic value of the i-th genotype in the j-th environment and in the k-th block; m the overall parametric mean; G_i the effect of the genotype i; B/A_jk the effect of the block within the environment in the j-th environment and k-th block; A_j the effect of the environment j; GA_ij the effect of the interaction between the i-th genotype and the j-th environment; and E_ijk the effect of the experimental error associated with the Y_ijk observation.
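For illustration, the model above can be fitted with standard ANOVA tooling; the sketch below uses Python's statsmodels on a long-format table with columns geno, env, block and y (the file and column names are placeholders, and treating every term as fixed in OLS is a simplification of the mixed model).

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# df is a long-format table with one row per plot:
# columns: geno (variety), env (site), block (replication), y (milling variable)
df = pd.read_csv("milling_data.csv")  # hypothetical input file

# Y_ijk = m + G_i + B/A_jk + A_j + GA_ij + E_ijk
model = ols("y ~ C(geno) + C(env) + C(env):C(block) + C(geno):C(env)", data=df).fit()
print(anova_lm(model))  # F tests for genotype, environment, block-within-env, G x E
```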
In order to perform the analysis of variance, the homogeneity of residual mean squares relative to the ¹ Identified by access-collection number.
Grain classes
Local varieties¹ Long thin 12,13,17,29,35,50,61,67,68,83,84,98,104 Long 7,10,14,22,24,31,32,34,41,42,43,71,72,82,90 experiments conducted at both sites was previously evaluated by observing whether the association between the largest and the smallest residual mean square was lower than seven, as suggested by Pimentel-Gomes (1990). The stability and genotype x environment interaction were subjected to analysis of variance and comparison of means between the sites. The stability analyses were based on the traditional method described in Cruz & Regazzi (2001) and, subsequently, the rice varieties were ordered based on their mean position, regarding the five analyzed milling variables.
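A sketch of the Pimentel-Gomes homogeneity screen under these assumptions: each site's experiment is analysed separately first, and the joint analysis is allowed when the ratio of the largest to the smallest residual mean square stays below seven (the numeric values are hypothetical).

```python
# Residual mean squares from the per-site analyses of variance (hypothetical values).
rms_by_site = {"Anchieta": 4.8, "Florianopolis": 7.9}

ratio = max(rms_by_site.values()) / min(rms_by_site.values())
if ratio < 7:
    print(f"ratio = {ratio:.2f} < 7: residual variances homogeneous, joint ANOVA allowed")
else:
    print(f"ratio = {ratio:.2f} >= 7: variances heterogeneous, joint ANOVA not advised")
```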
Variables showing significant differences between the treatments in the analysis of variance, at a 5 % significance level (p ≤ 0.05) by the F test, were subjected to the Scott-Knott test, at the same significance level. All statistical analyses were performed using the Genes software (Cruz 2013).
RESULTS AND DISCUSSION
The mean milling yield in Anchieta was 67.97 % (ranging from 64.27 % to 69.90 %), while in Florianópolis it was 63.81 %, ranging from 57.93 % to 67.76 % (Table 2). There was a trend of higher milling yield values in Anchieta: twenty-five varieties and the control (73 % of the treatments) presented a higher milling yield at this site (Table 2). Twenty-two local varieties grown in Anchieta recorded milling yield values above the market standard (Cepea 2015) and above those required by processing industries (68 %). Nine varieties grown in Anchieta (13, 22, 34, 43, 61, 67, 68, 72 and 84) were among the most productive of the twenty-two varieties presenting the best milling yield performance at the place of origin, with a yield ranging from 2,500 to 3,300 kg ha-1 (Maghelly 2020). None of the investigated varieties reached this milling yield level in Florianópolis. Artigiani et al. (2012), who investigated a modern dryland rice variety (BRS Primavera), recorded higher milling yield values, on the order of 74 %. This outcome shows the potential of these local varieties, since they recorded milling yield values equivalent to those registered for modern commercial varieties, although they did not undergo formal breeding processes. Processing industries currently require milling yield values of approximately 68-72 % in southern Brazil.
Milling yield values lower than 68 % reduce the price paid to farmers (Cepea 2015).
The mean whole grain yield was 55.31 % in Anchieta, ranging from 33.05 % to 66.00 %, whereas the mean whole grain yield recorded in Florianópolis was 53.76 %, ranging from 38.73 % to 60.23 %. Anchieta recorded the highest whole grain yield: twenty-nine varieties and the control (86 % of the treatments), among the thirty-four investigated varieties, presented a higher whole grain yield in Anchieta (Table 2). The market standard (58 %) set for whole grains (Cepea 2015) was met by 46 % of the varieties grown in Anchieta, and by only 9 % in Florianópolis. The varieties 13, 43, 72 and 84 exceeded the market standard set for whole grains, among the ones presenting the highest yield and best milling yield in Anchieta (Maghelly 2020). Among the improved dryland rice varieties recommended for central-western and northern Brazil (Embrapa 2018), 80 % recorded a whole grain-yield potential of 62 % or lower. Nine varieties grown in the place of origin recorded yields higher than 62 %. These values were similar to the ones recorded by Artigiani et al. (2012) for modern dryland varieties. On the other hand, twenty-four varieties recorded whole grain yields higher than 52 %, which is lower than the potential yield value set for officially recommended dryland varieties (BRS Pepita). According to Pereira & Rangel (2001), whole grain yields lower than 50 % make the selection of cultivars for genetic breeding processes unfeasible. Seven varieties investigated in the current study recorded whole grain yields below this level in the region of origin (19, 31, 34, 59, 61, 82 and 103). According to Farias Filho & Ferraz Júnior (2009), the whole grain yield is affected by grain size and shape. Based on their results, the genotype presenting a longer length and smaller width recorded the lowest yield.
Varieties classified in the current study as mid-sized (Table 1), shorter and wider, recorded lower whole grain yields. Although whole grains belonging to this class were manually separated in the trough, the mill used here was regulated and calibrated for long thin grains, which may have caused greater breakage of grains belonging to the mid-sized class.
Anchieta recorded a mean broken grain rate equal to 12.66 %, which was similar to that recorded for Florianópolis (12.58 %). Values recorded for white-belly grains in Anchieta ranged from 0.08 % to 8.35 %, with a mean value of 2.40 %. Values recorded for white-belly grains in Florianópolis ranged from 0.08 % to 9.61 %, with a mean value of 3.64 % (Table 3). A lower incidence of white-belly grains was observed in Anchieta, where 28 of the varieties presented lower white-belly grain rates in comparison to Florianópolis, based on the joint analysis of variance. The incidence of white-belly in whole grains was higher in varieties classified as mid-sized grains, which resemble special grain types with regard to shape, as well as grains traditionally used in risotto, which present a higher rate of white-belly or white-core grains. The mean value recorded for chalky grains in Anchieta was 0.23 %, with values ranging from 0.06 % to 2.34 %, while that recorded for chalky grains in Florianópolis was 0.92 %, with values ranging from 0.09 % to 3.12 %. Only two varieties grown in Anchieta recorded values higher than the standard accepted by processing industries (1 %), whereas fifteen varieties grown in Florianópolis recorded values higher than the acceptable limit set for chalky grains.

Table 3 (caption). Percentage of white-belly and chalky rice whole grains based on joint analysis of variance, in Anchieta (Anc) and Florianópolis.
According to Facchinelo (2017), the quality attributes of rice grains are also significantly affected by the genotype x environment interaction. The milling yield recorded significantly different values (p ≤ 0.05), based on genotype, environment and genotype x environment interaction. Jing et al. (2010) recorded similar results for five genotypes grown in different environments.
Whole and broken grains recorded significantly different values (p ≤ 0.05), based on genotype and environment, but not on the genotype x environment interaction (Table 3). These results contrast with those observed by Blanche et al. (2009), who reported a significant genotype x environment interaction of large magnitude for the variable 'whole grain yield'.
With respect to these first three variables (milling yield, whole and broken grains), which are the main factors influencing the prices paid to farmers, the genotypes have shown intrinsic differences among themselves and were influenced by different environments. This outcome emphasizes the significant role played by the genotype x environment interaction for milling yield.
White-belly and chalky variables recorded significant differences (p ≤ 0.05) based on genotype and genotype x environment interaction, and non-significant differences based on environment (Table 3).
Identifying more stable genotypes is an alternative to minimize the effects of the genotype x environment interaction (Borém & Nakano 2015). Based on the mean stability ranking (Table 4), which was calculated according to the mean position of each variety, regarding the stability of each variable, two varieties stood out among the ten most stable ones, and recorded the highest yield and whole grain rates (varieties 35 and 54). The variety 32 stood out as the most stable for yield and as the second most stable for chalky grains; the variety 35 stood out as the most stable for white-belly and as the fifth most stable variety for milling yield; and the variety 10 stood out as the most stable for broken and chalky grains.
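As a sketch of the ranking logic, assuming a table of per-variety stability scores for the five milling variables (all variety labels and scores below are hypothetical), the mean stability ranking can be obtained by ranking each variable and averaging the positions:

```python
import pandas as pd

# stability: rows = varieties, columns = the five milling variables;
# lower values mean more stable (hypothetical scores for illustration).
stability = pd.DataFrame(
    {"milling_yield": [1.2, 0.4, 0.9],
     "whole": [0.8, 0.5, 1.1],
     "broken": [0.6, 0.9, 0.3],
     "white_belly": [0.2, 0.7, 0.4],
     "chalky": [0.5, 0.1, 0.8]},
    index=["var_32", "var_35", "var_10"],
)

ranks = stability.rank(axis=0)              # position of each variety per variable (1 = most stable)
mean_rank = ranks.mean(axis=1).sort_values()
print(mean_rank)                            # overall mean stability ranking
```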
Milling yield values higher than 69 % and whole grain values higher than 62 % were recorded for the eight varieties presenting the best industrial quality (7, 12, 13, 24, 35, 54, 72 and 83), in Anchieta. These varieties are promising alternatives to help expand rice production for the local or regional market, mainly taking into consideration the technology level adopted by farmers, or local varieties not subjected to breeding. Besides showing a high milling yield and whole grain yield, the varieties 13 and 72 also stood out for superior grain yield in Anchieta (Maghelly 2020).
The analyzed data enable farmers to define the most suitable varieties to expand their production and sell it in local markets, by giving priority to the ones presenting the best industrial yield and adjusted (according to grain class) to local mill regulations, mainly the ones set by cooperatives and farmers' associations. Further studies should focus on evaluating the potential of these local varieties in other environments, as well as on assessing their biochemical, nutritional and cooking quality parameters.
CONCLUSION
The investigated local varieties showed differences for industrial performance and stability, based on all evaluated variables. The industrial yield was superior for all the characteristics in Anchieta, region of origin for the varieties. The varieties presenting the best industrial performance for milling yield, whole and broken grains (7, 12, 13, 24, 35, 54, 72 and 83) recorded values equal to or higher than the ones recorded for modern improved dryland and irrigated rice varieties. Values recorded for white-belly and chalky grains were within the limits accepted by the market for the majority of the varieties, both in Anchieta and Florianópolis. | 2020-12-03T09:01:23.931Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "7c12a103bac45841b6182607ee2d2ef208e1d90b",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/pat/v50/1983-4063-pat-50-e65085.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d5a71522b37b63eeada5a60267eb854bf9154adb",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
222321203 | pes2o/s2orc | v3-fos-license | Effect of The Ratio of H-Zeolite Catalyst and Temperature in The Opening Ring Reaction in Bio Lube Base Oil Production from Palm Oil
Lubricants are materials that can reduce friction between two components. Lubricants are very important to keep the engine from being damaged quickly. Currently, lubricants on the market generally come from petroleum derivatives with limited availability. Therefore, technology needs to be developed to look for other raw materials as a substitute for petroleum-based lubricating oil, namely by utilizing the potential of existing vegetable oils, one of which is palm oil. Biolubricant made from palm oil is produced using the methods of transesterification, epoxidation and ring-opening reaction. To increase yield and product quality, a catalyst in the form of H-zeolite is used, so this study aims to determine the effect of the ratio of H-zeolite catalyst on the ring-opening reaction. The quality of lube base oil can be indicated by density, viscosity index and acid number. The acid number is smaller at the higher 10 % weight ratio of H-zeolite catalyst to EPOME and at a higher ring-opening reaction temperature, because the H-zeolite catalyst works more effectively at 75 °C. Likewise, the density and viscosity index show better values than with a smaller amount of H-zeolite and a lower reaction temperature. Advice for future research: purification needs to be done not only physically but also chemically to maximize the results obtained, and the epoxide ring-opening reaction should preferably use long-chain alcohol compounds with a high viscosity and boiling point to obtain a better viscosity value.
INTRODUCTION
According to Fisheries and Forestry (2003), modern society is supplied by energy and the widespread use of fossil-based products. The increase in the use of motor vehicles and the development of industrial machinery cause humans to depend more on fossil fuels, including for producing lubricants. Lubricants or oils are liquids that determine the workability of machines and motor vehicles. Lubricants are substances capable of reducing friction between two components. Erhan et al. (2002) state that, generally, lubricants are made by combining base oils with additives to enhance the inherent characteristics of the oil or to provide new performance properties to the mixture. Mobarak et al. (2014) explain that biolubricants can currently be made in two ways: first, lubricants from hydrocarbon chemical modification, and second, esters of vegetable oil.
Since the Industrial Revolution, lubricants have been used in various industrial sectors to lubricate machinery and materials. The Kline & Company Inc. report (2006) indicates that nearly 38 million metric tonnes of lubricants were used around the world in 2005, with consumption projected to grow by 1.2 % in subsequent decades. It is estimated that consumption for industry reaches 32 %, air and sea transportation 9.4 %, and process lubricants 11.3 %. Statistics Indonesia (2015) similarly reported that the total Indonesian lubricant demand reached 226,249 million liters in 2003. However, around 85 % of lubricants used in the world are still based on fossil fuels. Birova et al. (2002) convey that the negative effects of the use of non-biodegradable lubricants can include surface and ground water contamination, air pollution, soil contamination and, consequently, contamination of agricultural products and food. In this regard, environmental concerns have increased rapidly worldwide over the last 25 years, so efforts are needed to replace non-biodegradable lubricants with biolubricants.
Vegetable oil is a lipid component derived from plants. Research conducted by the Agency for the Assessment and Application of Technology (BPPT) shows that Indonesia has 60 types of crops that could potentially become base materials for lubricants, such as sunflower, rapeseed, soy, palm and jatropha. In addition to degradation properties comparable to mineral oil, vegetable oil is also readily biodegradable. Vegetable oil contains a certain ratio of polyunsaturated to saturated fatty acids, high thermal stability, and a low ratio of unsaturated to polyunsaturated fatty acids (Chandu et al., 2013). These are important characteristics of a lubricant. In particular, palm oil has become an attractive renewable resource to be processed into environmentally friendly lubricants due to its ready biodegradability, outstanding lubricating properties, high viscosity index, low volatility and high flash point (Legarand and Dürr, 1998).
In previous studies, Kuweir (2010) performed a ring-opening reaction using an H-zeolite catalyst in which the epoxide would be substituted by monoalcohol compounds (ethanol, butanol, octanol and hexadecanol) and glycerol. However, the resulting yield was still quite low, as can be seen from the fact that not all of the oxirane rings were substituted with glycerol groups as desired. This was followed by Åkerman (2011), who used the solid Amberlyst-15 catalyst; however, because it is expensive, its use is still limited. Therefore, in this research, an acid-activated H-zeolite catalyst will be used in the epoxide ring-opening reaction. The reaction with this catalyst is expected to produce a product with the desired characteristics.
The purposes of this research are to study the manufacture of lubricating oil from palm oil as raw material using transesterification, epoxidation and ring-opening reactions; to study the influence of the H-zeolite catalyst weight ratio relative to palm oil on the viscosity, density and acid number of the ring-opening products; and to assess the effect of the ring-opening reaction temperature on the viscosity, density and acid number of the ring-opening products.
METHODOLOGY
The materials used in this study are palm oil, methanol, NaOH, formic acid, H2O2, natural zeolite catalyst (to be activated into H-zeolite) and 10 liters of aquadest. The tools used in this study are an Erlenmeyer flask, glass beaker, pH indicator, measuring cylinder, dropping pipette, burette, clamp and stand, syringe, pycnometer, volumetric pipette, three-neck boiling flask, thermometer, viscometer and condenser.
The research begins with transesterification: reacting palm oil with methanol with the help of a NaOH catalyst. This transesterification aims to produce POME (palm oil methyl ester). The procedure is as follows: palm oil is heated to a temperature of 65 °C with constant stirring. The NaOH catalyst is first dissolved in methanol. The molar ratio of methanol to palm oil used is 6:1. The NaOH catalyst used amounts to 1 % of the palm oil mass. The methanol and NaOH solution is added slowly with stirring. A condenser is attached to prevent methanol evaporation; the reaction is carried out at 65 °C and kept constant for 1 hour. After the reaction, the resulting product is left to stand for 3 hours, during which it separates into two phases, POME and glycerin. POME and glycerin are separated with a separating funnel, then the POME is washed with aquadest at a 1:1 volume ratio with even stirring. After stirring, the mixture is left to settle and the POME separates again. The aquadest is separated from the POME with a separating funnel. Washing is done three times to remove the glycerin still mixed in the POME. The POME is heated in an oven until it is free of water (golden yellow).
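A minimal sketch of the reagent bookkeeping for this step, assuming an average molar mass of roughly 847 g/mol for palm oil triglycerides (a literature approximation, not a value given here); the oil mass is hypothetical:

```python
# Reagent amounts for transesterification at a 6:1 methanol:oil molar ratio
# with 1 % w/w NaOH catalyst.
MW_OIL = 847.0     # assumed average molar mass of palm oil triglyceride, g/mol
MW_MEOH = 32.04    # methanol, g/mol

oil_mass = 500.0                   # g of palm oil (example value)
oil_mol = oil_mass / MW_OIL
meoh_mass = 6 * oil_mol * MW_MEOH  # 6:1 molar ratio of methanol to oil
naoh_mass = 0.01 * oil_mass        # 1 % of the palm oil mass

print(f"methanol: {meoh_mass:.1f} g, NaOH: {naoh_mass:.1f} g")
```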
POME resulting from the transesterification is epoxidized with hydrogen peroxide (H2O2) with the help of a formic acid catalyst to produce EPOME (epoxidized palm oil methyl ester). The procedure is to heat the POME until it reaches 50 °C with stirring, then add the formic acid catalyst to the heated POME at a POME-to-formic acid molar ratio of 1:0.3. Add 5 ml of H2O2 with continued stirring, keeping the reaction temperature at 65 °C. Add another 5 ml of H2O2 with continued stirring, again keeping the reaction temperature at 65 °C. Add the next 5 ml of H2O2, once more keeping the temperature at 65 °C. Let the mixture stand to separate into two phases: EPOME and an aqueous phase with residual H2O2 and formic acid. Separate them using a separating funnel. The EPOME is washed with aquadest (1:1 volume ratio) and stirred evenly. After stirring, let it settle and the EPOME separates again. The aquadest is separated from the EPOME with a separating funnel. Washing is done three times. Heat the EPOME in an oven until it is free of water.
The zeolite used in the ring-opening reaction comes from the Gunung Kidul regency and is purchased in powder form. Before use, the zeolite is first activated chemically and physically. Chemical activation is done with an acid solution, in this case 4 M HCl, while heating for 70 minutes; this chemical activation aims to clean the pore surfaces and remove impurity compounds such as CaO and MgO. Physical activation is done by heating: calcination aims to vaporize the water trapped in the pores of the zeolite crystals using a furnace at 500 °C for 2 hours, after which the zeolite is cooled in a desiccator.
After the epoxidation reaction, the next step is the ring-opening reaction. Its purpose is to make the epoxide group formed in the epoxidation process unreactive. This reaction is run with two sets of variables, reaction temperature and the catalyst/raw-material ratio, as enumerated in the sketch after this paragraph. Heat the mixture of ethanol and EPOME to 50 °C with stirring, then heat the mixture further, without stirring, to the temperature variable (55 °C, 65 °C or 75 °C). Add the H-zeolite variable (1 %, 5 % or 10 % by weight of EPOME) for each temperature variable. Once the product is formed, test the product characteristics consisting of density, viscosity and acid number.
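The two-factor design described above can be written out explicitly; a sketch enumerating the nine temperature x catalyst combinations:

```python
from itertools import product

temperatures_c = [55, 65, 75]   # ring-opening reaction temperatures, deg C
catalyst_pct = [1, 5, 10]       # H-zeolite loading, % w/w of EPOME

# Nine experimental runs, each followed by product characterization.
for temp, cat in product(temperatures_c, catalyst_pct):
    print(f"run: {temp} C, {cat} % H-zeolite; measure density, viscosity, acid number")
```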
RESULTS AND DISCUSSION
The improvement in the quality of the basic lubricating oil is achieved in three stages: transesterification, carried out to obtain palm oil methyl ester by reacting triglycerides and methanol with the help of a NaOH catalyst; epoxidation of the unstable double bonds with a peroxy acid, yielding epoxidized palm oil methyl ester; and, once stabilized, a ring-opening reaction with ethanol and an H-zeolite catalyst to make the product unreactive. From the resulting data (Table 1) it can be seen that an increasing density and viscosity index is accompanied by a decreasing acid number in every variable. The quality of lube base oil can be indicated by density, viscosity index and acid number. The acid number is smaller at the higher 10 % weight ratio of H-zeolite catalyst to EPOME and at a higher ring-opening reaction temperature, reaching 1.847 g NaOH/100 g oil, because the H-zeolite catalyst works more effectively at 75 °C. Likewise, the increase in density and viscosity index shows a better product than with a smaller amount of H-zeolite and a lower reaction temperature, with best values of 0.936 g/ml for density and 278.7 for VI.
The Effect of the Ratio of H-zeolite Catalyst to Epoxidized Palm Oil Methyl Ester on the Viscosity, Viscosity Index, Density and Acid Number of the Ring-opening Reaction
EFAME onto which glycerol groups have been grafted shows an increased density compared to the starting material. The larger the increase in density, the greater the number of attached glycerol groups, as described in the following reactions.
The catalyst opens the ring on the EFAME so that an excess proton (CH+) is present, and the glycerol group then attacks that site. The more glycerol groups attach, the higher the molecular weight becomes, and therefore the density increases (Kuweir, 2010). The product density of the ring-opening reaction can be seen in the following bar chart.
The figure shows that the greater the amount of catalyst added, the greater the density of the resulting product. This is because, with more catalyst, the reduction in activation energy is greater and more of the EFAME epoxide rings are opened by the catalyst, so that a greater number of glycerol groups can be grafted (Sanjaya, 2008).
The acid number is a measure of weak and strong organic acids in the lubricant. The smaller the acid number of a lubricant, the better its quality. A high acid number leads to the formation of a viscous layer consisting of resin/varnish, increases the viscosity of the lubricant so as to reduce flow/pump efficiency, raises the risk of corrosion of the machine (especially when water contaminants are present), etc. (Susilowati, 2016). To handle this, the residual acid can be neutralized using Na2CO3 in the washing process (Arizona, 2010). Na2CO3 reacts with the acid and the following neutralization reaction occurs.
2 HCOOH + Na2CO3 → 2 HCOONa + CO2 + H2O

The viscosity of a lubricant is a measure of the resistance the lubricant offers to flow; in other words, it is a measure of the lubricant's thickness. The greater the viscosity (the more viscous), the greater the resistance to flow. The viscosity index is a measure of the change in viscosity with temperature (Godfrey et al., 1995). The viscosity of a lubricant decreases if the temperature rises; conversely, the viscosity rises if the temperature drops. This change is not the same for all lubricants. A good lubricant is expected to have a sufficiently low viscosity at low temperatures, so that it can flow easily when the machine starts up. Conversely, at high temperatures, the viscosity should be high enough to keep flowing and coat the machine surfaces well.
The viscosity value of the product is influenced by its molecular structure. The more hydroxyl groups are grafted on, the higher the viscosity value; this is because, with an increasing number of grafted glycerol groups containing the polar -OH group, the intermolecular attractive forces become greater, resulting in greater resistance to flow (Utami, 2011). The following shows the viscosity of the products produced at various H-zeolite catalyst ratios. Figure 4 indicates that the viscosity value tends to increase with an increasing amount of added H-zeolite catalyst. The greater the amount of H-zeolite catalyst added, the more glycerol groups containing the polar -OH group are grafted on, so the intermolecular attractive forces become greater, which results in greater resistance of the product to flow (Kuweir, 2010). This means that, at a higher catalyst ratio, the product viscosity increases.
The Effect of Ring-opening Reaction Temperature on Viscosity, Viscosity Index, Density and Acid Number
The ring-opening reaction is performed after obtaining EPOME, which has an oxirane ring structure. From the graph it can be seen that the reaction at 55 °C gives the highest density value, followed by 75 °C, with the lowest density at 65 °C; this holds for all catalyst weight-ratio variables. Zengshe et al. (2010) conducted a ring-opening polymerization reaction at different temperatures ranging from 0 to 50 °C. The glass transition temperature (Tg) of the insoluble epoxidized vegetable oil ester polymers after extraction was measured by DSC. It is well known that the crosslinking density influences Tg: as the crosslinking density decreases, the free volume of a material increases, thereby decreasing Tg correspondingly. Ring-opening polymerization of 3-membered rings is thermodynamically favored (ΔG, the free-energy change, is negative) (Chanda, 2000). Consider the thermodynamic relation
ΔG = ΔH -T ΔS
where ΔH is the enthalpy change, ΔS the entropy change, and T the temperature (K). ΔH is the major factor determining ΔG for ring opening in 3-membered rings, while ΔS is very important for 5- and 6-membered rings. ΔH was reported to be negative (-113.0 kJ/mol) for 3-membered ring opening (Allcock, 1970). The reaction temperature of 55 °C gave the lowest density value because of the glass transition effect, so the free volume increases and the density decreases. However, by raising the temperature, the required conditions were optimized, producing the increase in density at 75 °C (Karadeniz, 2015).
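As a numerical illustration of the relation above, the sketch below evaluates ΔG at the three reaction temperatures using the reported ΔH of -113.0 kJ/mol; the entropy change is a placeholder value chosen only for the example, not a measured quantity.

```python
# dG = dH - T*dS for the 3-membered ring opening.
DH = -113.0e3   # J/mol, reported enthalpy change for 3-membered ring opening
DS = -50.0      # J/(mol*K), hypothetical entropy change for illustration only

for t_c in (55, 65, 75):
    t_k = t_c + 273.15
    dG = DH - t_k * DS
    print(f"{t_c} C: dG = {dG / 1000:.1f} kJ/mol (negative -> ring opening favored)")
```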
The acid number is important to know because the lube base oil will be used as an engine lubricant or another kind of lubricant. The observed trend is that the acid number decreases with a higher ring-opening reaction temperature.
The total acid number (TAN) of the starting material, epoxidized oil methyl ester, and of the reaction products was determined to check whether any free fatty acids formed during the reaction. Palm oil has a TAN value of 1.12 g NaOH/100 g, EPOME has 0.12 g NaOH/100 g, and all four reaction products have more than 0.5 g NaOH/100 g. This further confirms that the triacylglycerol backbone is hydrolyzed to form free fatty acids under these reaction conditions; moreover, the decrease in acid number with increasing temperature means the H-zeolite catalyst works more optimally (Josia, 2016).
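A sketch of how a total acid number expressed in g NaOH/100 g oil can be computed from titration data; all titration values below are hypothetical, used only to show the arithmetic.

```python
# TAN from titration with NaOH. Values are illustrative.
MW_NAOH = 40.0    # g/mol

v_naoh_ml = 2.3   # titrant volume consumed, ml
c_naoh = 0.1      # titrant concentration, mol/L
sample_g = 5.0    # oil sample mass, g

mol_naoh = c_naoh * v_naoh_ml / 1000.0
tan = mol_naoh * MW_NAOH / sample_g * 100.0   # g NaOH per 100 g oil
print(f"TAN = {tan:.3f} g NaOH/100 g oil")
```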
Kinematic viscosity is the most important physical property of a lubricating base oil (Sharma et al., 2006). It is an index of the internal resistance to motion of the lubricating base oil. The kinematic viscosity of a lubricating base oil changes with temperature: it increases as the temperature decreases. The effects of the molecular weight and molecular structure of the alcohols on the kinematic viscosity of the alkyl esters at 40 °C and 100 °C are expressed accordingly. The viscosity index is an arbitrary number indicating the effect of changing temperature on the kinematic viscosity of alkyl esters; a high viscosity index signifies a relatively small change of kinematic viscosity with temperature. The study results demonstrated that the viscosity performance was suitable for biolubricant production (Inkerd, 2015).
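For viscosity index values up to 100, ASTM D2270 procedure A reduces to the formula sketched below; L and H are table-lookup reference viscosities at 40 °C for oils of VI 0 and VI 100 that share the sample's 100 °C viscosity, and the numbers used here are placeholders rather than values from the tables.

```python
def viscosity_index(u_40: float, l_ref: float, h_ref: float) -> float:
    """ASTM D2270 procedure A (valid for VI <= 100).

    u_40  -- kinematic viscosity of the sample at 40 C, cSt
    l_ref -- 40 C viscosity of the VI = 0 reference oil (table lookup)
    h_ref -- 40 C viscosity of the VI = 100 reference oil (table lookup)
    """
    return (l_ref - u_40) / (l_ref - h_ref) * 100.0

# Hypothetical numbers for illustration only:
print(viscosity_index(u_40=62.0, l_ref=80.0, h_ref=50.0))  # -> 60.0
```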
The results of Syaima (2015) clearly demonstrate that when the temperature increased, the reaction rate increased. This was anticipated, since an increase in temperature would naturally accelerate the process: at a higher temperature the catalyst becomes more reactive and can carry out the ring-opening reaction better. However, a common problem in performing this reaction with a strong acid is that the epoxides of the major fatty acid components in the vegetable oil were found to form a significant amount of side structures, depending on the variety of the strong acid itself, via intra-molecular etherification or inter-molecular polymerization. This self-etherification/polymerization leads to an undesired product with increased molecular weight and viscosity (Benecke, 2017). This viscosity increase may be due to a higher molecular weight resulting from the ring-opening polymerization of polyols and may also be due to the free hydroxyl groups. EPOME at 75 °C has a much higher viscosity than EPOME at 65 °C due to its higher molecular weight resulting from the high degree of chain coupling (Karadeniz, 2015). | 2020-07-16T09:02:11.941Z | 2020-05-30T00:00:00.000 | {
"year": 2020,
"sha1": "59ac0000ac0e373af2b2990c95aefb3a57d915b7",
"oa_license": null,
"oa_url": "https://doi.org/10.14710/metana.v16i1.30363",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fe85cf2b57480d16943f3f8a85d79bbf20c0703c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
251934437 | pes2o/s2orc | v3-fos-license | Respect for illiterate or unconscious patient’s autonomy as a requirement for the legality of medical procedures in the polish healthcare system: a case report and review of the literature
According to Polish law, each patient has the right to refuse to consent to a medical procedure, even if the refusal concerns a life-saving procedure. It may be difficult for a physician to accept this kind of decision. In each case, however, medical intervention requires the patient's consent; its lack makes the physician's actions illegal. Such a situation becomes more complicated when a patient who is intellectually incompetent, unconscious or illiterate is unable to express consent to a medical procedure. Then, the possibility and the need to document and prove the patient's consent become crucial from the point of view of the legality of the medical personnel's conduct. In this article, two representative clinical cases are discussed in the context of the legal assessment of the physician's conduct in the event of legal complications related to the process of consenting to medical treatment. The authors analyze ethical dilemmas and legal risks that doctors may face in the process of obtaining consent to risky medical procedures from unconscious and illiterate patients.
Progress in medical science has always been a source of enthusiasm and hope among patients and physicians alike, while at the same time raising many ethical and legal doubts and reflections. The current medical paradigm places the patient at the centre of interest in these fields of science, granting him or her respect and autonomy, understood as the competence to make important decisions about one's own health and life.
A physician may perform surgery or apply a method of treatment or diagnosis posing an increased risk for a patient after obtaining his or her written consent [1]. Polish law does not define either surgery or any other medical procedure at an increased risk. The subject-matter literature states that the term "surgery", as understood today, was first used as early as 1597. Currently, in medical terms, surgery is defined as a medical act consisting in performing a maneuver or a series of maneuvers, with or without the use of surgical instruments, in order to treat a disease [2]. The term "increased-risk procedure" used in medical terminology is also not defined by the Polish legislator in any way. The only regulation specifying the need to obtain a patient's consent to an increased-risk procedure applies to patients admitted to a psychiatric hospital without their consent, and concerns a cisternal or lumbar puncture for collecting cerebrospinal fluid or administering drugs, and performing electroconvulsive treatment [3].
According to the PWN Dictionary of Polish Language [4], "increased" means higher than normal, more intense. In a colloquial and fairly common understanding, surgery is a medical procedure performed in an operating theatre by a physician specialising in surgery on a patient being anaesthetised by an anaesthesiologist. Such a procedure often, but not always, involves tissue disruption. Therefore, it should be assumed that the category of an increased-risk medical procedure is a concept broader than surgery, often performed outside an operating theatre, for example, in endoscopic, invasive cardiology, and invasive radiology laboratories. A high-risk procedure is a dynamic and individual category, because it is a derivative of the health of a patient undergoing surgery, current medical knowledge, necessary skills and competences of both a surgeon and an anaesthesiologist, as well as appropriate resources (equipment, drugs) needed to perform a given procedure.
Medicine has made tremendous progress over the last 30 years. Possibilities of performing complicated and, at the same time, less invasive procedures have significantly expanded, thanks to the wide use of technologically advanced devices. The availability of technologically advanced devices supporting many organs and systems of patients is a milestone in the development of anaesthesiology and intensive care medicine. The human body's response to surgical manipulation is proportional to the extent and duration of surgery. The effects of interference in a human body also extend into the post-surgery period. Therefore, the anaesthetic procedure includes elements of intra- and post-surgery intensive care. The development of new surgical techniques using modern devices provides surgical teams with the possibility of selecting a treatment method appropriate for an individual patient. The development of medical knowledge and skills as well as the possibility of using modern devices have significantly minimised the risks connected with surgery. Thanks to the progress in this field, complicated surgical procedures have become possible even for patients at the extremes of age, for example, infants and premature infants, and elderly persons, including patients whose clinical condition is burdened not only with current diseases, but also with coexisting chronic diseases, including those limiting patients' intellectual competences.
At the beginning of the 1990s, the medical community adopted the concept of perioperative medicine understood not as a separate medical specialisation, but as a special care for each patient from the moment of making a decision about surgery, through the surgical period, to full recovery at home. This concept primarily takes into account the set of clinical symptoms and the pathophysiology of vital systems and organs important for humans' life, which have or may have an impact on the clinical course of a patient qualified for surgery in the pre-, intra-and post-surgery period. The bases for optimising patient's perioperative safety are: thorough assessment of patient's health in terms of potential risk; patient, surgeon and anaesthesiologist's joint decision making; selection of such medical procedure technique which minimises a perioperative risk and shortens a post-surgery period [5].
A dynamic nature of procedures categorised as those at an increased risk means that a treatment process should also take into account the knowledge about the type and severity of potential complications, the possibility of preventing them and/or reversing their negative effects. The individual nature of an increased risk is closely related to the specific health conditions of a patient which may substantially affect the nature and extent of the risk associated with a specific procedure. For these reasons, it seems reasonable to assume that a higher-risk procedure should be understood as any medical procedure that carries a greater risk for an individual patient than the existing risk for the majority of patients undergoing the same procedure.
Performing surgery or a medical treatment at an increased risk for a patient who is a minor, incapacitated or incapable of expressing consent in writing requires a substitute consent. Substitute consent in the case of children or incapacitated patients, i.e. those legally equated in status to minors, is expressed by their statutory representative, who is usually a parent in the case of children, and a court-appointed guardian or curator when it comes to incapacitated patients. In a situation where a patient is of legal age and incapacitated, or when it is impossible to communicate with the legal representative of a minor or incapacitated person, performing surgery or a medical treatment at an increased risk is possible after obtaining consent from the competent guardianship court [6].
In medical practice, however, there are situations where fulfilling this stringent legal condition is significantly difficult or even impossible, because the patient is in a state of immediate threat to his or her life and therefore requires emergency surgery. If a delay caused by the consent-obtaining procedure would pose a threat to the patient's life, or risk serious injury or serious health impairment, a physician may perform such medical procedures without the substitute consent of the patient's statutory representative or the consent of the competent guardianship court. In such a case, the physician is obliged, if possible, to ask another physician, possibly of the same medical specialisation, for advice. Then, after carrying out all necessary treatment activities for the patient, the physician is obliged to immediately notify the patient's statutory representative, actual guardian or the guardianship court about the performed activities [7].
Advance healthcare directive in case of unconsciousness
In 2005, the Polish Supreme Court issued a judgement which became an important guideline in the legal assessment of patients' declarations of will expressed in the event of loss of consciousness. There is no system of common law in Poland, but judgements of the Supreme Court play the role of quasi-precedents in practice due to the judicial authority of this court [8]. The factual circumstances on which the judgement was based concerned a road accident suffered by the claimant on 18 August 2004. As a result of it, she lost consciousness, and her health condition, due to serious injuries, required transfusions of blood and blood products. A written declaration of will found with the patient, entitled "Health statement - no blood", stipulated that "under all circumstances" she would not agree to "any form of blood transfusion", even if it were necessary to save her health and life. The woman also explained in her statement that she was one of Jehovah's Witnesses and that she wanted to obey the Bible's commandments, among which there was one telling believers to "abstain from … blood" [9]. In the same accident, the woman's husband died and her son was also injured.
The physicians, despite the fact that they knew the contents of the patient's statement, applied to the guardianship court for permission to perform a blood transfusion. A court of first instance issued this kind of consent, pointing to the overriding need in the system of social values to save human life, which justified subjecting the claimant to medical procedures specified by a specialist in anaesthesiology and intensive care medicine as necessary. Based on the consent granted by the court, the patient underwent the transfusion of blood and its products, contrary to the instruction contained in her written health statement. After full recovery, she appealed against the court's decision, challenging its lawfulness. At the stage of appeal proceedings, the claimant's son joined the case as a participant. A court of second instance supported the arguments of the district court and discontinued the proceedings, but the claimant appealed against this decision to the Supreme Court.
When examining the cassation appeal filed in that case, the Supreme Court indicated that the regulations in force in Poland regarding patient's consent or the lack of it, despite a legal climate conducive to respecting patient's will, do not apply directly to advance directives, although similar regulations already exist in many countries (Patiententestament, testament de vie, living will). At the same time, the Supreme Court noted that there was no provision which would exclude a patient's right to determine which medical procedures should be abandoned in the event of the patient's loss of consciousness and his or her inability to effectively object to them. Therefore, it considered this type of declaration of the patient's will legally permissible and binding, deciding in the thesis of the judgement that the patient's statement made in the event of loss of consciousness, defining his or her will regarding the physician's conduct in therapeutic situations that may arise, is binding for the physician, provided that it has been made explicitly and unambiguously.
In the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine (the Oviedo Convention), signed by Poland but still not ratified by it [10], it is indicated that the previously expressed wishes of the person concerned regarding a medical intervention should be taken into account if, at the time of its implementation, he or she is not able to express his or her will. As indicated in the subject-matter literature, this regulation is categorical and constitutes an imperative to respect such a declaration of will, which, however, does not apply to a patient's wish that is against the law, medical knowledge or the physician's conscience [11]. It may be treated as an interpretative guideline when assessing such cases under Polish law. It expresses the axiom of respecting the patient's will regardless of the fact that his or her declaration is not expressed at the time of performing or resigning from a medical procedure, but in advance, becoming effective when, due to, for example, loss of consciousness, the patient cannot effectively express this will at a given moment.
The current standards of medical practice, based on Evidence Based Medicine (EBM), assume that the declaration of a Jehovah's Witness who is an adult and fully legally capable person must be respected, and physicians should be prepared to use alternative methods of medical treatment. A patient's decision may, in the physicians' opinion, be wrong, but it must nonetheless be respected by them [12]. Thus, despite the lack of an explicit legal regulation, a patient's will expressed for the future in the event of loss of consciousness, regardless of religious, ideological or other motives, is considered by the judicature, legal doctrine and EBM as binding for physicians and requiring absolute respect.
Presentation of a representative clinical case
It is a common situation in hospitals that it is necessary to urgently perform a higher-risk procedure or surgery while a patient, although fully conscious, is not able to sign a consent document due to physical obstacles. Such a case took place during the hospitalisation of a patient aged over seventy in one of the hospitals in Poland [13], who expressed verbal objection to a blood transfusion but, due to his health condition (bilateral paresis of the upper limbs), could not sign the relevant document by hand. The patient was admitted to the hospital with a femur fracture. The diagnosis indicated: contact difficult due to deafness, correct mental orientation. The patient was unable to write by hand, but he was fully aware and acted with due discernment. On 8 December 2019, the patient, in the presence of his family, gave his consent orally to the surgical treatment of the femur fracture, which, as indicated in the therapeutic instruction, could also involve blood transfusion. On 10 December 2019, a fracture anastomosis was performed, which resulted in blood loss, as a consequence of which the patient required a blood transfusion. On 11 December 2019, he orally refused the blood transfusion for the first time. The extensive written objection, drawn up by the physician receiving this declaration of will, did not bear the patient's handwritten signature, but was articulated to the physician orally in the presence of the patient's family. On 13 December 2019, the patient orally refused the blood transfusion again. The refusal to consent to the blood transfusion was expressed by the patient twice more, each time being recorded in writing without the patient's signature, but in the presence of witnesses. The family insisted that the hospital perform the blood transfusion after the patient suffered an atherothrombotic stroke on 17 December 2019, against the patient's will, which was known to the physicians.
Pursuant to the provision of Art. 79 of the Polish Civil Code [14], a person unable to write may submit a written declaration of will in such a way that he or she will make an ink fingerprint on a document, and a person authorised by him or her will write his or her name and surname and sign it next to this print, or in such a way that, instead of the person making the declaration, a person authorised by him or her will sign, and his or her signature will be certified by a notary public, a mayor of a municipality, a governor of a county, or a marshal of a province, with an indication that it was put on the document at the request of the person unable to write. Under the Polish law, a person who is unable to write, who has the ability to discern the essence of activities performed, may express his or her will either through an ink fingerprint in the presence of witnesses, or by using the official formula of expressing consent. It may seem that expressing the will by an illiterate person by making an ink fingerprint on a document requires the possibility of moving his or her hand, because performing this action must be an independent act, and the opposite interpretation could be a source of abuse. By analogy, it may be assumed that since a patient is not able to sign himself or herself, he or she is also not able to make an ink fingerprint on a document himself or herself. Involving an official, on the other hand, in a decision-making process, before whom patient's will for a specific medical procedure may be expressed, seems difficult to enforce in practice, and it will not be applicable in emergency situations, when the need to obtain consent is so urgent that it is not possible to organise such a type of activity in the required time.
Such a situation took place in the described factual circumstances, because the patient due to bilateral paresis of his upper limbs was not able to sign the document of consent and then objection to a medical procedure. Because of the urgent nature of the medical activity, it was also not possible to use the official formula for signing a declaration of will by an illiterate person. In such cases, it seems that the only option is to receive patient's declaration of will and document it in all possible ways which will sufficiently prove his or her will. It is possible to record patient's statements, under his or her oral consent, by using sound or video recording devices. It should be assumed that the issue of the method of documenting a consent or objection to a medical procedure in a situation where a written form is objectively impossible is of enormous evidential significance, as it determines whether a physician can prove that he or she has performed a given medical procedure legally.
Discussion
Human safety was and still is one of the highest values in almost all cultures and civilisations. Lack of safety causes anxiety and a sense of threat in both a child and an adult. Abraham Maslow, an outstanding American psychologist of the twentieth century, who also graduated in law, included safety among the elementary needs of every human being. Many fields of science deal with the study of safety aspects, including philosophy, psychology, medicine, and law. The interdisciplinary approach makes it possible to notice the multidimensionality of person's safety, pointing to ethical and deontological standards for ensuring medical safety, as well as legal norms recognising safety as a protected value [15].
Medical risk is an inseparable element accompanying every patient undergoing the process of diagnosis and treatment. This is a danger that cannot be completely eliminated, but must and can be minimised. In each health care system, the safety of a person treated is the resultant of several basic factors (human, technical and environmental) and depends on the dynamic process of managing the risks associated with these factors throughout the course of treatment. For optimal patient safety, it is important to be aware of risk factors, as this allows them to be identified and eliminated in advance, avoiding the emergence of a crisis situation. Optimisation of patients' safety is based primarily on compliance with procedures covering the organisation of medical activities, including careful qualification and preparation of patients. There are tools for this, such as guidelines and standards established by the bodies of scientific societies of all medical specialisations on the basis of the best evidence medical research (BEMR). It is extremely important to identify patients at increased and high risk resulting from the coexistence of chronic systemic diseases. Such identification allows physicians to optimise the patient's condition before surgery, individualise diagnostic and treatment procedures, and choose the type and method of surgery and anaesthesia. In the immediate post-surgery period, it allows the patient's treatment to be delivered as intensive care or in an intensive care room [16].
It should be noted that limiting ourselves to algorithms as procedures based on BEMR may pose a specific risk of underestimating the variability of individual reactions of a human organism under the influence of contextual factors which define the individual situation of a specific person. Contextualisation is a process of identifying specific factors in a patient's life situation, focused on personalised care. In the light of the subject-matter literature, contextualisation is an integral part of therapy in which a patient and/or his or her caregivers actively participate. Among the many factors which make up the functioning of a person, the family and socio-material situation, access to professional health care, and the capacity for self-care are the main contextual factors directly and indirectly influencing a patient's health condition. The benefit of contextualisation in the treatment of an individual patient is still not sufficiently widespread, although it is an important element of the decision-making process, with a proven impact on the effectiveness and quality of care and patient satisfaction [17]. The implementation of each medical procedure should take into account the reduction of the risks associated with it. Therefore, it is worth emphasising at this point that additional and equally important tools enhancing patient safety are also provided by broadly understood medical law, integrated in its content with medical determinants, starting from identifying personal information, through informed consent, and ending with exercising due diligence in carrying out medical procedures.
For the legal validity of the statement of an adult and non-incapacitated patient made in the event of loss of consciousness, it is of prime importance to establish the authenticity of the signature on the document containing the content of his or her statement. The physician does not have any professional qualifications to assess whether the signature belongs to a particular patient. He or she may be able to compare it with a signature previously placed in the medical records, but this is not possible for patients who have never been hospitalised in a given hospital or in the absence of access to data on the patient's previous stays. A handwritten signature may be illegible and cause serious evidential doubts which have to be resolved definitively at a given moment, under pain of illegal performance of a medical procedure. This has a negative impact on the possibility of uninterrupted provision of health services. It is also possible that the content of a statement itself is so ambiguous that it would be difficult for a physician to determine the actual will of a patient; as a result, he or she may not be able to establish what the patient did not wish for at the time the statement was made, not only with sufficient certainty, but even with high probability. There may also be additional arguments of people related to a patient, legally irrelevant but not indifferent ethically or emotionally, especially when they constitute a source of information about a potential change of decision which the patient simply did not manage to reveal earlier.
The Polish legal system does not provide for the possibility of making medical decisions for an adult and non-incapacitated patient by members of his or her family. In a situation where a patient with full civil rights, who has reached 18 years of age, cannot express his or her will regarding a medical procedure himself or herself, it is necessary to obtain a substitute consent from a guardianship court. The Polish legislator does not provide for an alternative solution. The substitute consent of the guardianship court is the only legal option in this type of case, as long as the patient's condition allows for waiting for its issuance. In a situation where a delay related to the procedure of obtaining the substitute consent would risk serious health consequences for the patient, a physician may perform even a high-risk medical procedure without this consent and notify the guardianship court of this fact afterwards.
The possibility of making decisions for a patient by another person applies only to those who are minor and fully or partially incapacitated. A person who is over 13 years old may be completely incapacitated if, due to a mental illness, mental retardation or other type of mental disorder, especially alcohol or drug addiction, he or she is unable to guide his or her conduct. For an incapacitated person, custody is completely established, unless he or she is still under parental authority. An adult person may be incapacitated partially due to a mental illness, mental retardation or other mental disorder, especially alcohol or drug addiction, if the person's condition does not justify total incapacitation, but the help is needed to conduct his or her affairs. A guardianship is established for a partially incapacitated person [18].
In theory, it is possible for a patient to grant a power of attorney in the event of loss of consciousness, in which he or she would give a family member or another person the power to make a decision for him or her in a situation when he or she is unable to do so himself or herself. In practice, numerous and serious doubts regarding this type of legal transaction should be indicated, including the lack of subjective and objective guarantees of protection of the patient against the actions of his or her attorney [19]. Equally problematic seems to be empowering a substitute decision maker to make medical decisions in the part concerning the decision-making process itself, because when a patient agrees or objects to a specific medical procedure, the decision-making process takes place in the sphere of his or her will. This cannot be carried out by the substitute decision maker, as such a maker assesses the necessity of performing a medical procedure based on his or her own opinion of the risks and benefits previously presented to him or her [20]. Meanwhile, it is the patient who should evaluate the proposed medical procedure from the point of view of possible complications and the consequences of its failure, and it is not possible to transfer this element of the decision-making process to another person. Only in a situation where the power of attorney concerns a specific medical procedure, for which a patient had already received comprehensive therapeutic instruction and carried out an internal volitional process regarding this procedure, may it be assumed that the patient has relatively effectively empowered another person to make a proper decision for him or her. Even in such a case, a change in circumstances cannot be ruled out, which would in fact invalidate the authorisation granted, for example, in the case of a change of the physician for whom the consent was granted. A blank and general medical power of attorney, in turn, should be regarded as devoid of legal effect due to the lack of the attributes of awareness and information regarding the activity to which consent or objection is to be expressed.
In the judgement being discussed, the Supreme Court determines the need to respect the patient's will, but in the Polish legal system a quasi-precedent, formally binding only in the case in which it was issued, should not constitute the basis for decisions made by physicians in emergency circumstances. What is expected of the legislator is legal certainty, providing knowledge sufficient to make a lawful decision regarding a patient's health or life, while such judicial statements raise serious doubts among representatives of legal science and thus remain completely ambiguous for physicians [21].
In the case of each medical procedure, the patient's consent is preceded by a therapeutic instruction, under which a physician is obliged to provide the patient or his or her statutory representative with accessible information about his or her health, the diagnosis, proposed and possible diagnostic and treatment methods, the foreseeable consequences of their use or non-use, treatment results and the prognosis for recovery. In a situation when a patient makes an advance healthcare directive, even if it concerns an objection to a specific medical procedure that is yet to be performed, he or she is not able to anticipate the consequences of failure to perform it. For these reasons, such a decision is incomplete, as it is made in circumstances other than those existing at the moment when the necessity to apply it materialises [22]. It is possible that in a situation of imminent threat to life, in the absence of a medical alternative, a patient would change his or her mind. However, when at the moment the necessity to apply an advance healthcare directive materialises he or she is unable to revise this opinion, for example, due to loss of consciousness, his or her previously expressed objection becomes questionable. In the circumstances of the case of the patient injured in the road accident, her statement was categorical and authentic, and it was confirmed in a later court trial; however, it is difficult to dismiss the supposition that, due to the death of her husband in the same accident, she might have wanted to change the decision previously made and expressed in writing, which became relevant in completely different circumstances.
In Poland, there is no legal regulation for advance healthcare directives, although such regulations already exist in many countries, including Belgium, the UK, the Netherlands, Spain, Austria, Finland, and Hungary [23]. The Polish legal doctrine points to the negative effects of the lack of statutory solutions regarding the admissibility and rules for submitting declarations of will for the future by patients, considering their legal admissibility in the numerus clausus system of unilateral legal actions, among which they are not mentioned [24]. Meanwhile, due to the legislator's lack of actions, it is a physician who is forced to make a legal assessment of declarations of will made by patients in the event of loss of consciousness with consequences for their life or health, but each time there is no guarantee that the decision made by him or her to grant or refuse such a document is proper.
In the case of an illiterate patient, the legislator also does not provide physicians with sufficient help, because the legal solutions generally provided as a way of expressing will by people unable to write date from 1964 and have been in force almost unchanged until today, and thus do not fit the present reality in any way. These regulations may be applied only partially, in the event that there are real temporal and organisational possibilities to provide a patient with the opportunity of submitting his or her declaration of will before an official such as a notary public. Involving such authorities as municipality mayors, county governors or province marshals seems completely redundant and practically impossible, given the political dimension of these positions and the scale of the potential demand for participation in activities with patients. Current medical knowledge, which physicians are obliged to apply, is dynamic, while the almost half-century-old legal solution concerning the way of documenting the will of patients unable to write is completely inconsistent with contemporary social standards of dealing with patients.
Regardless of its outdated character, the existing legal regulation is also incomplete, as it does not regulate cases where, due to dynamic changes in a patient's health condition, it is not possible to call an official to document his or her will. In such situations, it seems necessary to use all legally available options to record the patient's decision to consent or object to a medical service, including the use of audio-visual devices with the patient's consent. However, if there is no consent to record the patient's oral statement, it is necessary to arrange the participation of witnesses in such activities, who shall then guarantee the credibility of the statement made by the patient. Obviously, physicians' choices are limited to the group of people present on a hospital ward or simply available at a given moment. It is also crucial to prepare a document that faithfully reflects the real will of the patient and, at the same time, contains all the elements obligatory for a physician, in particular an accessible instruction of the patient about the consequences of the consent or objection declared. It seems that, despite the lack of proper regulations at the level of generally applicable legislation, internal instructions of a medical entity may be helpful in indicating the method of conduct for a physician when it is impossible to apply the imperfect legal solution, in order to minimise the need to independently search for ad hoc substitute solutions in emergencies.
Conclusions
According to the Polish Criminal Code, anyone who performs a medical procedure without patient's consent is subject to a fine, restriction of liberty or imprisonment for up to 2 years [25]. Determining patient's will in a situation where he or she previously expressed a potential objection in writing, i.e. articulated in the conditions of ignorance about the medical consequences of such objection, or when he or she cannot sign an objection or consent document at the time of expressing them, should be made with due diligence and with available methods and means.
A physician is obliged to practise in accordance with the indications of current medical knowledge, using the methods and means of preventing, diagnosing and treating diseases available to him or her, in accordance with the principles of professional ethics and with due diligence [26]. In the same way, a physician is obliged to seek to establish and document the actual will of a patient, and should then be assessed through this prism, taking into account the need to substitute, ad hoc, for the legislator, who has refrained from sanctioning a specific model of conduct in such situations.
The physician's obligation is an obligation of careful action, not of result; by analogy to the nature of the obligation imposed on the physician with regard to the manner of practising his or her profession, it is appropriate to evaluate his or her conduct in the process of determining and documenting the content of the patient's actual will in situations where the difficulties discussed here are encountered.
Therefore, physician's conduct should be assessed by the judicial authorities, which should confront his or her diligence in establishing and consolidating patient's declaration of will about consent or objection to a medical procedure with the legislator's carelessness in meeting the requirement of certainty and up-to-date legal provisions. Summarising the above considerations in the aspect of performing medical activities, the law should be a tool to optimise the safety of both people being treated (patients) and those who treat (medical personnel). | 2022-08-31T13:42:00.516Z | 2022-08-31T00:00:00.000 | {
"year": 2022,
"sha1": "8cad612af03e9c1d8c5f0b9b3cac4633dddd14a9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "8cad612af03e9c1d8c5f0b9b3cac4633dddd14a9",
"s2fieldsofstudy": [
"Law",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
205308621 | pes2o/s2orc | v3-fos-license | Nonlocality-driven supercontinuum white light generation in plasmonic nanostructures
Structured plasmonic metals are widely employed for achieving nonlinear functionalities at the nanoscale due to their ability to confine and enhance electromagnetic fields and strong, inherent nonlinearity. Optical nonlinearities in centrosymmetric metals are dominated by conduction electron dynamics, which at the nanoscale can be significantly affected by the nonlocal effects. Here we show that nonlocal corrections, being usually small in the linear optical response, define nonlinear properties of plasmonic nanostructures. Using a full non-perturbative time-domain hydrodynamic description of electron plasma under femtosecond excitation, we numerically investigate harmonic generation in metallic Archimedean nanospirals, revealing the interplay between geometric and nonlocal effects. The quantum pressure term in the nonlinear hydrodynamic model results in the emergence of fractional nonlinear harmonics leading to broadband coherent white-light generation. The described effects present a novel class of nonlinear phenomena in metallic nanostructures determined by nonlocality of the electron response.
Nanostructured electromagnetic environments enable tailoring and manipulation of optical interactions on subwavelength scales 1 . The introduction of high-refractive-index dielectrics and plasmonic metals, such as gold, silver and recently developed semiconductor-based compounds 2 , enables subwavelength confinement 3 and, as a result, increases the strength of light-matter interactions. In particular, metallic nanostructures support plasmonic modes which, at a certain frequency, come into resonance with the incident field and drastically increase the intensity of the local field within extremely small mode volumes. Generally, nonlinear interactions depend on the local field intensity at powers predefined by the order of the interaction (for example, a power of two for second harmonic generation (SHG), which is a second-order nonlinear process). This power law indicates the advantage of field enhancement and modal volume reduction for enhancing the strength of nonlinear interactions. The majority of experimental demonstrations utilizing this approach rely on strong local field enhancement, delivered by plasmonic structures, in the vicinity of nonlinear materials of different kinds, for example, polymers 4 , noble gasses 5 and others (reviewed in ref. 6).
However, metal composites themselves are strongly nonlinear media capable of generating nonlinear harmonics. At the same time, since the permittivity of noble metals in the visible and infra-red spectral range is negative, electromagnetic waves cannot propagate inside the bulk material, where only evanescent components are supported, minimizing the overlap between the field and the nonlinear material and consequently reducing the efficiency of the overall response. Nevertheless, nanostructures with features smaller than the skin depth allow the field to penetrate into the metal, thus paving the way for tailoring their linear 7 properties and maximizing their nonlinear response by designed structuring 8 .
A theoretical description of the inherent nonlinear responses of metals is a fairly complicated task due to the natural complexity of their solid-state structure. Along with phenomenological models, relying on experimental retrieval of nonlinear susceptibilities, a hydrodynamic model, treating the electron plasma as a charged fluid, was shown to give a qualitative description of the nonlinear interaction. In particular, predictions of second-order responses 9,10 and Kerr-type nonlinearities 11 qualitatively agree with existing experiments, obtained for pump wavelengths away from interband transitions. A hydrodynamic model with additional Lorentzian resonance terms is capable of reproducing metal susceptibilities over the entire spectral range 12 . Furthermore, the mesoscopic hydrodynamic approach is integrable with electromagnetic modelling, enabling studies of large-scale electromagnetic systems with nontrivial geometries. It is worth noting that ab initio microscopic models are hardly extendable beyond descriptions of bulk materials and flat surfaces due to the enormous computational complexity involved 13 .
Here, using a fully non-perturbative time-domain hydrodynamic model coupled with Maxwell's equations, we demonstrate higher-harmonic and supercontinuum generation from metal nanostructures of Archimedean spiral shape. A properly tuned geometry with the lack of any symmetry (both rotational and reflection) maximizes the nonlinear response and shows the appearance of the 6th and higher harmonics, which was neither theoretically analysed nor experimentally observed to date. The robustness of the approach is evident from the fact that a non-perturbative time-domain hydrodynamic description in spherical/cylindrical geometries reliably reproduced experimental data 14 , allowing this method to be extended to novel physical scenarios. Being able to investigate the interplay between the topology of the nanostructure and various sources of nonlinearity of the metal plasma, we demonstrate that electromagnetic nonlocality, manifesting itself via the quantum pressure term, plays a prominent role in the nonlinear response of small and nanostructured (on ~10 nm length scales) geometries. The properly tuned interplay between nonlocality and nonlinearity is responsible for very efficient harmonics mixing and the resulting broadband white light generation, as demonstrated below.
Results
Coupled electromagnetic-hydrodynamic nonlinearities. The interaction of electromagnetic waves with material bodies can be described via an induced polarization (P(r,t)) inside the latter. In the time domain, the interaction dynamics in the case of nonmagnetic structured media is given by

$$\nabla\times\nabla\times\mathbf{E}+\frac{1}{c^{2}}\frac{\partial^{2}\mathbf{E}}{\partial t^{2}}=-\mu_{0}\frac{\partial^{2}\mathbf{P}}{\partial t^{2}},\qquad(1)$$

where E(r,t) is the electric field, c is the speed of light in vacuum and μ0 is the vacuum permeability. In general, the spatiotemporal polarizability holds all the information on both linear and nonlinear responses of the material, also including the chromatic dispersion. Nonlocality enters the expression through spatial derivative terms in real coordinate space. The polarizability of metal structures can be introduced in this equation via natural hydrodynamic variables: the macroscopic position-dependent electron density n(r,t) and velocity υ(r,t), which are subsequently related to the polarization current as

$$\frac{\partial\mathbf{P}}{\partial t}=-en\boldsymbol{\upsilon}.\qquad(2)$$

On the other hand, the dynamics of the free electron gas is determined by a set of hydrodynamic equations 10

$$m_{e}n\left(\partial_{t}\boldsymbol{\upsilon}+(\boldsymbol{\upsilon}\cdot\nabla)\boldsymbol{\upsilon}\right)+\gamma m_{e}n\boldsymbol{\upsilon}=-en\left(\mathbf{E}+\boldsymbol{\upsilon}\times\mathbf{H}\right)-\nabla p,\qquad(3)$$

$$\partial_{t}n+\nabla\cdot(n\boldsymbol{\upsilon})=0,\qquad(4)$$

where m_e and e are the electron mass and charge, respectively, γ is the effective scattering rate, representing optical losses in a phenomenological way, and p = (3π²)^{2/3} ħ²/(5m_e) n^{5/3} is the quantum pressure term evaluated within the Thomas-Fermi theory of an ideal fermionic gas. Equations (1-4) are inherently nonlinear and provide a self-consistent formulation of nonlinear optical processes originating from free conduction electrons in plasmonic systems. In the perturbative regime of nonlinear interaction (weak pump field approximation), the second-order nonlinear polarisation plays the leading role, resulting from the convective acceleration term (υ·∇)υ, the magnetic component of the Lorentz force −eυ×H, and the nυ term (ref. 9). The quantum pressure term was shown to have a minor contribution in 100-nm-size geometries 14 . A further increase of the peak power of the pump pulse brings higher-order nonlinear terms into consideration, resulting in intermixing of bulk and surface nonlinear effects. It is worth noting that high harmonics (higher than 3rd) with the metal clearly identified as the nonlinear source have not yet been reported. Recently, attosecond pulse generation from metallic structures was predicted under ultra-strong electromagnetic fields, which change the quantum wavefunctions of electrons 15 .
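To make the scale of the quantum pressure concrete, the following minimal sketch (ours, not from the paper) evaluates the Thomas-Fermi pressure p(n) = (3π²)^{2/3} ħ²/(5m_e) n^{5/3} and the gradient force density −∇p entering Eq. (3) for an illustrative perturbed density profile; the 5% modulation depth and 4 nm period are arbitrary choices.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg

def quantum_pressure(n):
    """Thomas-Fermi pressure p = (3*pi^2)^(2/3) * hbar^2/(5*m_e) * n^(5/3)."""
    return (3 * np.pi**2) ** (2.0 / 3.0) * HBAR**2 / (5 * M_E) * n ** (5.0 / 3.0)

# 1D density profile: equilibrium gold density with a small harmonic modulation
x = np.linspace(0, 12e-9, 1201)                       # 12 nm slab, 0.01 nm grid
n0 = 5.98e28                                          # equilibrium density, m^-3
n = n0 * (1 + 0.05 * np.sin(2 * np.pi * x / 4e-9))

p = quantum_pressure(n)
force_density = -np.gradient(p, x)                    # -dp/dx, the nonlocal force in Eq. (3)

print(f"p(n0) = {quantum_pressure(n0):.3e} Pa")       # ~2e10 Pa, i.e. tens of GPa
print(f"max |dp/dx| = {np.abs(force_density).max():.3e} N/m^3")
```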
Another very important feature of the hydrodynamic description is its inherent ability to describe nonlocal electromagnetic effects. Generally, nonlocality is the result of strong coupling between adjacent unit cells in a material, either in natural crystals or artificial materials (metamaterials), many body effects in solid state systems, and others (for example, ref. 16). Hydrodynamic nonlocality is the typical example of strong electron-electron interactions between quasi-free electrons of the metal plasma and was proven to describe a variety of phenomena, governing the optical response of small plasmonic structures 17 . In the linear optics regime, the quantum pressure term is the one responsible for the appearance of nonlocal effects, as it contains a spatial derivative in the linearized model due to the presence of the fractional power (5/3) in the electron density 18 . However, the interplay of nonlocality and nonlinearity was neither considered nor investigated before.
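Why the pressure term is the source of nonlocality can be seen from a one-line linearization (a standard textbook step, not taken from the paper): expanding p = (3π²)^{2/3} ħ²/(5m_e) n^{5/3} around the equilibrium density n₀ for a small perturbation n₁ gives

$$\nabla p \approx \left.\frac{dp}{dn}\right|_{n_{0}}\nabla n_{1} = \frac{\hbar^{2}k_{F}^{2}}{3m_{e}}\,\nabla n_{1} = \frac{m_{e}v_{F}^{2}}{3}\,\nabla n_{1},\qquad k_{F}=(3\pi^{2}n_{0})^{1/3},$$

so the restoring force at a point depends on the density gradient, i.e. on conditions at neighbouring points, with a strength set by the Fermi velocity; retaining the full n^{5/3} dependence in the time-domain model is what couples this spatial dispersion to the nonlinear dynamics.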
Non-perturbative time-domain numerical model. For a comprehensive analysis of nonlinear interactions from nanostructures, the set of equations (1-4) was implemented with the help of a finite-element time-domain method. Two-dimensional geometries under transverse magnetic illumination, with the driving electric field polarized across the nanostructure (Fig. 1), were considered to reduce the computational complexity (the electric field has only in-plane components, while the magnetic field is directed out of plane). In such a configuration, plasmonic resonances in small metal structures may be studied without requiring the presence of the third dimension, which introduces just a geometrical correction factor. A pulse with a Gaussian spatiotemporal profile was specified at the source boundary of the simulation domain, where the fundamental frequency ω1 = 1.257 × 10^15 s^-1 corresponds to a free-space wavelength of λ1 = 1,500 nm, the temporal width is τ = 20 fs (FWHM ~47 fs) and the spatial width is w0 = λ1/2 (Fig. 1). The resulting propagating pulse has a linear polarization in the y direction and is incident on the nanostructures along the x direction. Fundamental light intensities of up to I0 = 1.8 × 10^18 W m^-2 were considered, which the nanostructure can still withstand 19 .
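As a consistency check on the quoted parameters, the sketch below (illustrative; the exact envelope used in the paper is not restated in the text) builds a Gaussian-enveloped carrier with τ = 20 fs, assuming a field envelope of the form exp(−(t−t0)²/(2τ²)), and confirms the FWHM of 2√(2 ln 2)·τ ≈ 47 fs quoted above; the values of ω1, τ, t0 and the time span follow the text and Methods.

```python
import numpy as np

omega1 = 1.257e15                    # fundamental angular frequency, rad/s
tau = 20e-15                         # Gaussian temporal width, s
t0 = 3 * tau                         # pulse offset (Methods use t0 = 3*tau)
t = np.linspace(0, 7 * tau, 20001)   # simulation span T = 7*tau

envelope = np.exp(-((t - t0) ** 2) / (2 * tau ** 2))
field = envelope * np.sin(omega1 * (t - t0))

# FWHM of the field envelope, found numerically and analytically
above_half = t[envelope >= 0.5]
fwhm_numeric = above_half[-1] - above_half[0]
fwhm_analytic = 2 * np.sqrt(2 * np.log(2)) * tau

print(f"numeric FWHM  = {fwhm_numeric * 1e15:.1f} fs")
print(f"analytic FWHM = {fwhm_analytic * 1e15:.1f} fs")  # ~47.1 fs
```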
The role of quantum pressure in the nonlinear response. To investigate the effect of the nanostructure's geometry on nonlinear generation, we first compare the performance of perfectly symmetric cylindrical nanorods (diameters d = 200 and 12 nm) with an Archimedean spiral-shaped nanostructure (spiral angle a = (5/2)π, arm width w = 12 nm, overall size s = 70 nm) (Fig. 2a,b). Spirals have no symmetry of any kind and, hence, are good candidates for nonlinear optical interactions, as they do not obey any geometrical selection rules 20 . Initially, for a fair comparison of the nonlinear responses of different shapes, we consider nonresonant excitation, when the excitation frequency is lower than the lowest plasmonic resonances of both nanorods and nanospirals.
The unique ability to either include or exclude the quantum pressure term in the numerical model enables investigation of the impact of nonlocality on the nonlinear generation. For large cylinders of 200 nm diameter (blue solid and dashed lines in Fig. 2c), the nonlinear scattering intensity (with the linear scattering field subtracted) shows a clear signature of higher harmonics up to the 3rd order, though no significant impact of nonlocality. For smaller cylinders of 12 nm in size, the role of the quantum pressure is more significant. At the length scale of a few nm (r = 6 nm in our case), which is smaller than the mean free path of electrons and becomes comparable with the radius of nonlocality related to the electron Fermi wavelength (~0.5 nm), the nonlocal response starts playing an important role in the nonlinear scattering of the nanostructures. While the structure of the local and nonlocal spectra remains almost unchanged up to the 3rd harmonic (dashed and solid green lines in Fig. 2c, respectively), the generated intensity between integer harmonics in the nonlocal case is tremendously enhanced compared with the local counterpart, indicating the presence of fractional harmonics. The effect of nonlocality, however, is much more pronounced in the case of the spiral nanostructure (red lines in Fig. 2c). First, the appearance of fractional harmonics, due to the quantum pressure term, is evident. A pronounced and remarkable difference between the local and nonlocal scenarios manifests itself in the broadband supercontinuum generation. It originates from collective electron-electron interactions giving rise to distinct fractional harmonics (due to the power of 5/3 in the quantum pressure term in Eq. (3)), which provide an efficient pathway for enhancing frequency mixing between natural integer harmonics. For example, in nonlinear materials conventionally used for supercontinuum generation, a broad white-light spectrum is achieved via higher-order interactions of finite-width peaks centred around integer multiples of the fundamental frequency. Consequently, the appearance of fractional harmonics relaxes the demand for multiple interactions, making this non-phase-matched nanoscale process extremely efficient. It is worth noting also that the efficiency of nonlinear generation from the spiral structure is substantially higher than that of the cylinders.
The second factor underlying efficient supercontinuum generation is the abundance of high-frequency modes (Fig. 2b) available for coupling of the light generated through nonlinearity, which in turn enhances far-field radiated light due to the so-called double-resonant regime 8 . This is evident in the example of the two lowest resonances of the nanospiral at 1.2 and 2.9 eV (Fig. 2b), showing a prominent nonlinear signal increase at their frequencies, including the spectral range between the first and second harmonics (cf. red lines in Fig. 2b,c). If the nanospiral is illuminated from a different side (rotated 90° clockwise, Fig. 2b), the second resonance is excited via the nonlinear response much less efficiently, which leads to a decrease of the corresponding nonlinear signal (cf. red and black curves in Fig. 2c at the frequency of 3 eV). Judging from the intensity distributions and field orientation (white arrows in the insets in Fig. 2b), this resonance corresponds to the excitation of the nanospiral with the field concentrated at the extremities of the spiral. The lowest resonance is due to electromagnetic coupling of the turns of the spiral, similar to modes of metal-dielectric-metal waveguides 21 . As a concluding remark, we note that the linear spectra of the nanospiral illuminated from opposite sides are very similar, while controlling the direction of illumination allows control of the relative contribution of the different resonances (Fig. 2b).
Resonant coherent nonlinear response. The plasmonic resonances of the spiral (Fig. 2b) can be easily tuned to the required wavelength by changing geometrical parameters of the spiral, boosting the intensity of the nonlinear effects via the field enhancement associated with coupling of the excitation light to plasmonic resonances. Resonant properties of plasmonic nanostructures give them decisive advantage for generation of high-intensity nonlinear signals at the nanoscale. When the frequency of the excitation light matches the frequency of the plasmonic mode, the latter is resonantly excited, which leads to pronounced enhancement of the local fields. The enhancement of the associated nonlinear signal is even more dramatic, since its intensity is proportional to higher powers of the excitation intensity, given by the order of the interaction. From this point of view, nanospirals present a very robust and efficient class of nanostructures, offering well-defined narrow resonances, which can be easily adjusted to a given frequency.
In the case of the spiral parameters considered above, the local fields are enhanced by factors of ~35 and ~50 at the first (1.2 eV) and second (2.9 eV) plasmonic resonances, respectively (Fig. 2b). Taking advantage of the flexibility of the geometry, the lowest plasmonic resonance of the nanospiral was matched to the frequency of the pump (1,500 nm or 0.83 eV) by increasing the angle of the spiral (and consequently its length) to the value a = 1.03 · 3π, slightly above 3π (Fig. 3a,b). This optimized structure provides the highest field enhancement at the first resonance, rather than the second one.
As one can see by comparing the nonlinear response of the resonant spiral to those of the nonresonant spiral and the nanorod (d = 200 nm), the intensity of the SHG signal increased by 4-5 and 4 orders of magnitude, respectively, with a further several orders of magnitude increase throughout the nonlinear supercontinuum spectrum (Fig. 3c). In comparison with the nonlinear spectrum of a nanorod with a diameter equal to the nanospiral arm width (d = 12 nm), the difference is even more striking, with more than 9 orders of magnitude increase of the SHG intensity and 8 to 3 orders of magnitude increase of the higher-harmonic intensities. Such a striking difference highlights the importance of the resonant effects for enhancing the nonlinear interactions as well as the importance of the topology of the nanostructure: the surface areas and volumes of these two structures differ by only one order of magnitude (10 and 20 times, respectively). An effective second-order nonlinear susceptibility χ(2) = 600 pm V^-1 of a nanospiral can be estimated by considering a material with a χ(2) that produces the same overall SHG flux. This effective susceptibility is 20 times higher than that of lithium niobate 22 . This value for the simulated nanostructures in the absence of nonlocal effects is consistent with the experimentally observed values of the effective second-order susceptibility of metallic nanostructures of 10 pm V^-1 (ref. 20) and 3.2 pm V^-1 (ref. 23), taking into account the local field enhancements and the surface areas. The SHG enhancement is robust with respect to geometrical scaling under resonant excitation of the fundamental nanospiral mode, whose size-dependent spectral position can be matched to the fundamental wavelength by varying a (Fig. 4a). The SHG intensity from a twice larger nanospiral increases by a factor of ~1.4, in accordance with a similar increase of the surface area. In the nonresonant case of nanorods much smaller than the wavelength and without nonlocality, the SHG intensity scales with size as ∝ d^4 (ref. 24), which is in excellent agreement with the 5 orders of magnitude difference in the SHG intensities from the 12 and 200 nm nanorods (dashed lines in Fig. 2). Such behaviour is the result of the much smaller dipolar (retardation-related) and quadrupolar moments excited at the SH frequency for a smaller nanorod.
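The quoted d^4 size scaling can be sanity-checked with one line of arithmetic: (200/12)^4 ≈ 7.7 × 10^4, i.e. roughly 5 orders of magnitude, as stated. A trivial sketch:

```python
import math

ratio = (200 / 12) ** 4  # SHG intensity ratio expected from the d^4 size scaling
print(f"(200/12)^4 = {ratio:.3g}")                      # ~7.72e+04
print(f"orders of magnitude: {math.log10(ratio):.2f}")  # ~4.89, i.e. ~5 orders
```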
To demonstrate robustness of the effect for a wide range of experimentally relevant scenarios, the three-dimensional spirals of different thicknesses under various illumination conditions were simulated (Fig. 4b-d). The high-Q factor nanospiral resonance, a key to the observed high nonlinear susceptibilities, was confirmed for the nanospiral thickness h ranging from infinity (two-dimensional case considered above) down to the deep-subwavelength thicknesses (Fig. 4c). With the decrease of the thickness of the spiral, the fundamental resonance experiences a red shift. However, the nature of the resonance, as can be seen in the field maps in Fig. 4b, remains the same with the similar field distributions and field enhancement values. Thus, by adjusting the angle a, the resonance position can be kept at the same fundamental wavelength for spirals of different finite thicknesses, so that the efficiency of the nonlinear processes is similar. Furthermore, this behaviour remains the same when the light is obliquely incident at a nanospiral (Fig. 4b,d).
Discussion
The interplay between nonlocality and nonlinearity on metal nanostructures was investigated. It was shown that the quantum pressure, the manifestation of collective many body dynamics of electron plasma in metals, together with structure topology plays the key role in the process of nonlinear harmonic generation. The appearance of high harmonics (up to 6th) and broadband white light generation from spiral shaped nanostructures is the result of the interplay between local geometry and fractional harmonic generation by the nonlocal quantum pressure term. Fractional harmonics were shown to mix efficiently with natural integer ones via nonlinear interaction processes to give rise to strong supercontinuum generation. The interplay between integer and fractional harmonics is unique for plasmonic systems, favouring them over existent nonlinear materials particularly on the nanoscale.
Coupled nonlinearities, macroscopic and microscopic effects (nonlinear mesoscopic phenomena), being one of the hardest tasks for analytical and numerical solutions, have been solved here for the first time. The implemented semi-phenomenological method enables addressing basic collective and, as a result, nonlocal effects of the electron plasma and is applicable in a range of validity of the hydrodynamic model. The latter neglects a number of quantum and classical phenomena, such as the electron spill out, tunnelling across small gaps, temperature effects, ultra-fast nonequilibrium dynamics and few others. Nevertheless, the majority of the effects, beyond the scope of the model, provide higher-order corrections. As a result, the hydrodynamic model is known to provide qualitative results in a good agreement with the majority of experimental observations. Its integrability with large scale, geometry-invariant structures makes the approach a universal tool of nonlinear analysis. From the application stand point, the developed approach provides a guideline for designing nanoscale nonlinear devices important in modern photonic technologies.
Methods
Modelling of the transient nonlinear optical response. The nonlinear optical response of the plasmonic nanostructures was studied using time-domain finite-element-method simulation of an electromagnetic problem defined by a set of Maxwell's equations coupled in a self-consistent way to additional partial differential equations implementing the hydrodynamic model in the framework of the Comsol Multiphysics software. Maxwell's equations were expressed in terms of a vector potential A:

$$\nabla\times\left(\mu^{-1}\nabla\times\mathbf{A}\right)+\mu_{0}\sigma\frac{\partial\mathbf{A}}{\partial t}+\mu_{0}\frac{\partial}{\partial t}\left(\varepsilon_{0}\frac{\partial\mathbf{A}}{\partial t}-\mathbf{P}\right)=0,\qquad(5)$$

while the hydrodynamic description of the nonlinear transient response of plasmonic nanostructures given by equations (2-4) was introduced through the coefficient form (equations (2) and (3)) and the general form (equation (4)) of partial differential equations. The material for the nanostructures was chosen to be gold. The constants for the gold permittivity were taken to be n0 = 5.98 × 10^28 m^-3, γ = 1.075 × 10^14 s^-1, ωp = 13.8 × 10^15 s^-1, relying on available and widely used tabulated data. The simulation domain size was set to 6 × 6 μm^2 to ensure that the outer domain boundaries had no effect on the simulation results, which was checked. The simulation time span T = 7τ and offset t0 = 3τ were chosen so that both the pump and scattered light pulses (the latter containing higher harmonics) entirely propagated across the simulation domain, ensuring complete and time-interval-independent modelling of the nonlinear effects.
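The tabulated constants above are mutually consistent: ωp² = n0 e²/(ε0 me) reproduces ωp = 13.8 × 10^15 s^-1 from n0 = 5.98 × 10^28 m^-3. A short sketch (ours, for illustration) checks this and evaluates the linear Drude permittivity ε(ω) = 1 − ωp²/(ω² + iγω) at the pump wavelength:

```python
import numpy as np

E = 1.602176634e-19      # elementary charge, C
M_E = 9.1093837015e-31   # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 2.99792458e8         # speed of light, m/s

n0 = 5.98e28             # conduction electron density of gold, m^-3
gamma = 1.075e14         # phenomenological scattering rate, s^-1

omega_p = np.sqrt(n0 * E**2 / (EPS0 * M_E))
print(f"omega_p = {omega_p:.3e} s^-1")    # ~1.38e16, matching the quoted value

def drude_eps(omega):
    """Linear Drude permittivity eps(omega) = 1 - omega_p^2 / (omega^2 + i*gamma*omega)."""
    return 1 - omega_p**2 / (omega**2 + 1j * gamma * omega)

omega1 = 2 * np.pi * C / 1500e-9          # pump frequency for lambda = 1,500 nm
print(f"eps(1500 nm) = {drude_eps(omega1):.2f}")  # strongly negative real part, as expected
```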
Modelling of resonant properties of plasmonic nanostructures. The spectral response of the plasmonic nanostructures was simulated using the frequency-domain finite element method in the scattered-field formulation. The size of the simulation domain and the linear optical parameters of gold were set to be consistent with the time-domain simulations. The nanostructures were illuminated with a plane wave of parametrically varied frequency, while perfectly matched layers were implemented around the simulation domain to ensure the absence of reflection of the scattered waves from the outer boundaries. The nanostructure's extinction spectrum was calculated as the sum of the scattering and absorption cross-sections. The latter two parameters were calculated by integrating the incoming total power flow (for the absorption) and the outgoing scattered power flow (for the scattering) over a nanoscale cylindrical region around the nanostructure and normalizing the obtained value to the power flow incident on the nanostructure's geometrical cross-section. | 2017-08-30T16:03:31.102Z | 2016-05-09T00:00:00.000 | {
"year": 2016,
"sha1": "75ab2c691384baea355e40033a269a8d30fccddf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/ncomms11497",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75ab2c691384baea355e40033a269a8d30fccddf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
259945045 | pes2o/s2orc | v3-fos-license | Safety and immunogenicity of a heterologous booster with an RBD virus-like particle vaccine following two- or three-dose inactivated COVID-19 vaccine
ABSTRACT LYB001 is an innovative recombinant SARS-CoV-2 vaccine that displays a repetitive array of the spike glycoprotein’s receptor-binding domain (RBD) on a virus-like particle (VLP) vector to boost the immune system, produced using Covalink plug-and-display protein binding technology. LYB001’s safety and immunogenicity were assessed in 119 participants receiving a booster with (1) 30 μg LYB001 (I-I-30 L) or CoronaVac (I-I-C), (2) 60 μg LYB001 (I-I-60 L) or CoronaVac in a ratio of 2:1 after two-dose primary series of inactivated COVID-19 vaccine, and (3) 30 μg LYB001 (I-I-I-30 L) after three-dose inactivated COVID-19 vaccine. A well-tolerated reactogenicity profile was observed for LYB001 as a heterologous booster, with adverse reactions being predominantly mild in severity and transient. LYB001 elicited a substantial increase in terms of the neutralizing antibody response against prototype SARS-CoV-2 28 days after booster, with GMT (95%CI) of 1237.8 (747.2, 2050.6), 554.3 (374.6, 820.2), 181.9 (107.6, 307.6), and 1200.2 (831.5, 1732.3) in the I-I-30 L, I-I-60 L, I-I-C, and I-I-I-30 L groups, respectively. LYB001 also elicited a cross-neutralizing antibody response against the BA.4/5 strain, dominant during the study period, with GMT of 201.1 (102.7, 393.7), 63.0 (35.1, 113.1), 29.2 (16.9, 50.3), and 115.3 (63.9, 208.1) in the I-I-30 L, I-I-60 L, I-I-C, and I-I-I-30 L groups, respectively, at 28 days after booster. Additionally, RBD-specific IFN-γ, IL-2, IL-4 secreting T cells dramatically increased at 14 days after a single LYB001 booster. Our data confirmed the favorable safety and immunogenicity profile of LYB001 and supported the continued clinical development of this promising candidate that utilizes the VLP platform to provide protection against COVID-19.
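The GMT (95% CI) figures reported above are conventionally computed on the log scale: a geometric mean with a t-based confidence interval back-transformed to the titer scale. A minimal sketch of that standard calculation, using made-up titers rather than the trial data:

```python
import numpy as np
from scipy import stats

def gmt_ci(titers, alpha=0.05):
    """Geometric mean titer with a t-based 95% CI computed on log-transformed titers."""
    logs = np.log(np.asarray(titers, dtype=float))
    mean, sem = logs.mean(), stats.sem(logs)
    half = stats.t.ppf(1 - alpha / 2, df=len(logs) - 1) * sem
    return np.exp(mean), np.exp(mean - half), np.exp(mean + half)

# illustrative titers only, not the study data
titers = [640, 1280, 2560, 320, 1280, 640, 2560, 1280]
gmt, lo, hi = gmt_ci(titers)
print(f"GMT = {gmt:.1f} (95% CI {lo:.1f}, {hi:.1f})")
```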
from China, accounted for about 45% of global delivered doses in 2021 and notably contributed to worldwide vaccine coverage. [11] These ICVs also proved to be highly effective against severe COVID-19 disease outcomes. [12,13] However, they exhibited poor or even absent neutralizing antibody (NAb) activity and effectiveness against infection with Omicron sublineages after the two-dose primary series, a primary booster or even a secondary booster. [14][15][16][17] The receptor-binding domain (RBD) on the Spike glycoprotein of SARS-CoV-2 is an immunodominant antigen which contains epitopes for most neutralizing antibodies, [18] and LYB001 is an innovative recombinant vaccine with display of repetitive RBDs on the surface of a virus-like particle (VLP) vector. [19] The array of RBD on the VLP was achieved using a Covalink plug-and-display protein binding technology (isopeptide bond 4T/4C conjunction in Figure 1), similar to platforms described in other research. [20] Because the VLP and RBD can be expressed separately, the modular production of VLP in Escherichia coli and RBD in CHO cells is highly scalable. This platform also offers a shortened research and development cycle of a variant-adapted vaccine, if needed, against rapidly evolving pathogens. Vaccine adaptation can thus be easily accomplished, offering a major advantage to tackling major global health challenges in human infectious disease. Additionally, the highly repetitive antigen array (mimicking an actual virus) and relatively large particle size can enhance B cell receptor cross-linking and antigen presenting cell uptake and presentation, leading to strong stimulation of immune cells in the draining lymph nodes and overcoming insufficient immunogenicity that can occur with soluble or monomeric recombinant subunit vaccines. Furthermore, optimal orientation of neutralizing epitope display on the VLP surface can result in higher proportion of neutralizing antibodies. [21] Herein, we therefore present the safety and immunogenicity results of LYB001 used as a booster vaccine at an interval of 6-12 months in two-or three-dose ICV recipients.
Ethical approval
The study was registered with ClinicalTrials.gov (NCT05928455) and approved by the Institutional Review Boards of Chengdu Xinhua Hospital and Chongqing Red Cross Hospital. The trial was conducted in accordance with Good Clinical Practice guidelines and the Declaration of Helsinki. Written informed consent was obtained from each participant before any study-related procedures.
Study design and participants
This study was conducted at Chengdu Xinhua Hospital Affiliated to North Sichuan Medical College and Chongqing Red Cross Hospital (People's Hospital of Jiangbei District). It aimed to evaluate the safety and immunogenicity of a heterologous LYB001 booster given at an interval of 6-12 months after two or three doses of ICV in healthy participants aged 18-59 years. The study was carried out in two parts. In part 1, a randomized, open-label, positive-controlled, dose-escalation design was used to evaluate the safety and immunogenicity of different heterologous booster doses (30 μg and 60 μg) of LYB001, compared with a homologous booster dose of CoronaVac, in adults 18-59 years of age who had completed a two-dose primary series of ICV 6-12 months earlier. In part 2, a designated dose (30 μg) of LYB001, chosen based on the preliminary results from part 1, was used as a booster in adults 18-59 years of age who had completed a three-dose primary series of ICV 6-12 months earlier. Participants with a known COVID-19 vaccination history other than ICV, a history of SARS-CoV-2 infection, a history of severe, uncontrolled chronic disease, or other conditions that, in the judgement of the investigator, might interfere with the safety and immunogenicity assessments or pose possible risks to participants were excluded from the study.
Randomization and masking
In part 1, participants were enrolled under a dose-escalation study design.
Participants who had completed a two-dose primary series of ICV were randomly assigned in a ratio of 2:1 to receive either a 30 μg LYB001 or a CoronaVac booster. After confirmation of an acceptable 7-day safety profile in this cohort, the study proceeded to a second cohort of two-dose ICV recipients, randomly assigned in a ratio of 2:1 to receive either a 60 μg LYB001 or a CoronaVac booster. Randomization of participants and vaccines was performed by an independent statistician using SAS statistical software version 9.4 or higher. Randomization numbers were allocated to eligible participants in the order of enrollment, and participants were allocated to each group in line with the randomization table. In part 2, randomization was not applicable because it was a single-arm study.
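For illustration only, a 2:1 blocked allocation list of the kind described above can be generated with a few lines of Python. This is a minimal sketch, not the trial's actual SAS program; the block size of 3 and the fixed seed are assumptions introduced here so the example is reproducible.

```python
import random

def blocked_allocation(n_participants, block=("LYB001", "LYB001", "CoronaVac"), seed=2023):
    """Generate a 2:1 blocked randomization list (illustrative sketch only)."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        permuted = list(block)
        rng.shuffle(permuted)  # permuting within blocks of 3 preserves the 2:1 ratio
        allocation.extend(permuted)
    return allocation[:n_participants]

# Example: an allocation list for one dose cohort of 45 participants (30 LYB001 : 15 CoronaVac)
print(blocked_allocation(45))
```

Permuted blocks keep the realized allocation ratio exactly 2:1 at every multiple of the block size, which is why block randomization is preferred over simple coin-flip randomization in small cohorts.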
Blinding and masking were not applicable for this open-label study, as the CoronaVac booster information for each participant had to be recorded in the national vaccination system. However, all laboratory staff responsible for the immunogenicity assessments and laboratory safety measures were blinded to group allocation.
Procedures
The design of the investigational vaccine is summarized in Figure 1. Briefly, LYB001 is a recombinant vaccine made by a procedure that expresses the VLP vector (NPM-4C) in Escherichia coli and the RBD in CHO cells.
Safety assessments. In this trial, participants were required to stay at the trial site for a 30-min safety observation for potential immediate adverse events (AEs) after the vaccine booster. During the observation period, participants were instructed to fill out the diary card and were given a thermometer and a measurement scale for recording the AEs experienced within 7 days after the booster, including solicited local/systemic AEs and unsolicited AEs. Solicited local AEs included injection-site pain, induration, redness, swelling, rash, and pruritus; solicited systemic AEs included fever, diarrhea, nausea, vomiting, headache, myalgia (non-injection site), cough, fatigue, and acute allergic reaction. On day 8 after the booster, participants returned to the trial site to submit their diary cards, which were reviewed by the investigator, and contact cards were dispensed for recording unsolicited AEs within 8-28 days after the booster. The intensity of AEs was graded using the relevant guidelines issued by the National Medical Products Administration (NMPA) of China, and causality was assessed by the investigators.
Immunogenicity assessments. Blood samples for humoral immunogenicity assessment were drawn from participants at baseline (day 0, before vaccination) and at days 14, 28, and 90 after the booster, and were used to determine: (1) spike glycoprotein-binding IgG levels, and (2) NAb titers against prototype SARS-CoV-2 and circulating VOCs.
Outcomes
The primary objective of this study was to assess the safety and immunogenicity of a heterologous LYB001 booster in adults 18-59 years of age who had previously completed a two- or three-dose primary course of ICV vaccination. The primary immunogenicity outcomes were the geometric mean titer (GMT), geometric mean fold rise (GMFR), and seroconversion rate (SCR) of spike glycoprotein-binding IgG and of NAb titers against prototype SARS-CoV-2 and circulating VOCs at baseline and at days 14 and 28 after the booster. The secondary objective was to assess the durability of the immune response, measured as the GMT and SCR of spike glycoprotein-binding IgG and of NAb titers against prototype SARS-CoV-2 and circulating VOCs at 90 days after the booster. The secondary safety outcomes were serious adverse events (SAEs) and adverse events of special interest (AESIs) within 90 days after the booster, and laboratory safety measures at 3 days after the booster. The exploratory objective was to assess the cellular immune response following a heterologous booster dose of LYB001; the corresponding outcome was the RBD-specific IFN-γ-, IL-2-, and IL-4-secreting T-cell response as measured by ELISpot assay at baseline and at 14 days after the booster.
Statistical analysis
The sample size of this trial was not based on a formal statistical hypothesis. Safety analyses were performed in the safety set (SS), which included all participants who received the booster dose. Immunogenicity analyses were performed in the per-protocol set for immunogenicity (I-PPS) following an intention-to-treat principle, including participants who had completed the booster immunization, had an immunogenicity result at day 0 before vaccination and at least one available post-boost immunogenicity result, and had no major protocol deviations. Safety analyses presented the counts and percentages of participants who experienced AEs, including solicited local/systemic AEs, unsolicited AEs, AEs graded as grade 3 or worse, AEs leading to a participant's withdrawal, SAEs, and AESIs. The NAb GMTs against prototype SARS-CoV-2 and circulating VOCs at different timepoints after the booster were calculated with 95% confidence intervals (CIs), and the t-test was used for comparisons of log-transformed antibody titers between groups. Additionally, the GMFRs and SCRs at different timepoints after the booster, relative to baseline, were calculated along with their Clopper-Pearson 95% CIs. The cellular immune responses (cytokine-secreting T cells by ELISpot assay) and their changes from baseline were analyzed for each group at 14 days after the booster, and between-group differences were tested by the Wilcoxon rank-sum test. The χ² test or Fisher's exact test was used to analyze other categorical data. Statistical analyses were performed using GraphPad Prism 9.0, and P < 0.05 was considered statistically significant.
There was no medication (or vaccination) history or medical (or allergic) history that, in the opinion of the investigator, might compromise the participants' wellbeing or confound the protocol-specified assessments. The NAb titers against the prototype SARS-CoV-2 and Omicron BA.4/5 variants were low or absent at baseline; they were comparable across the I-I-30L, I-I-60L, and I-I-C groups and, as anticipated, lower than those of the I-I-I-30L group (Table S1).
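As a rough sketch of the summary statistics described in the statistical analysis above, the Python code below computes a GMT with a t-based 95% CI on log-transformed titers, compares two groups with a t-test on log titers, and computes an SCR with an exact Clopper-Pearson CI. The titer values are invented placeholders, and this is not the study's own analysis code (the study used SAS and GraphPad Prism 9.0).

```python
import numpy as np
from scipy import stats

def gmt_ci(titers, alpha=0.05):
    """Geometric mean titer with a t-based CI computed on the log10 scale."""
    logs = np.log10(np.asarray(titers, dtype=float))
    lo, hi = stats.t.interval(1 - alpha, len(logs) - 1,
                              loc=logs.mean(), scale=stats.sem(logs))
    return 10 ** logs.mean(), (10 ** lo, 10 ** hi)

def scr_ci(n_seroconverted, n_total, alpha=0.05):
    """Seroconversion rate with an exact (Clopper-Pearson) binomial CI."""
    ci = stats.binomtest(n_seroconverted, n_total).proportion_ci(
        confidence_level=1 - alpha, method="exact")
    return n_seroconverted / n_total, (ci.low, ci.high)

# Hypothetical day-28 NAb titers for a LYB001 group and a CoronaVac group
lyb001 = [1280, 960, 1600, 640, 2560, 1120]
coronavac = [160, 240, 120, 320, 200, 180]
print("GMT LYB001:", gmt_ci(lyb001))
print("GMT CoronaVac:", gmt_ci(coronavac))
print("t-test on log titers:", stats.ttest_ind(np.log10(lyb001), np.log10(coronavac)))
print("SCR:", scr_ci(27, 30))
```

Working on the log scale is what makes the t-test appropriate here: antibody titers are approximately log-normally distributed, so the geometric mean and log-scale CIs summarize them better than arithmetic means.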
LYB001 as a heterologous booster after a two- or three-dose ICV series was safe and well tolerated, with adverse reactions (vaccination-related AEs) being predominantly mild in severity; only two participants reported grade 2 adverse reactions (Table 1).
The majority of these adverse reactions resolved or recovered spontaneously, with a median duration of 2 days after symptom onset, and were those commonly anticipated for intramuscularly administered vaccines. The overall incidence rates of adverse reactions were 76.7% (23/30), 66.7% (20/30), 31.0% (9/29), and 63.3% (19/30) in the I-I-30L, I-I-60L, I-I-C, and I-I-I-30L groups, respectively, and were largely contributed by solicited local adverse reactions, which occurred in 70.0% of participants in the I-I-30L group (Table 1). All abnormal laboratory values spontaneously returned to normal at the subsequent visit without any clinical consequence. There were no SAEs, AESIs, deaths, or AEs leading to withdrawal reported within 90 days after the booster. Only one participant, in the 60 μg LYB001 booster group, experienced a grade 3 or worse AE (preferred term: pyrexia), but this was judged to be unrelated to the investigational vaccine.
As shown in Figure 3 and Table S1, the heterologous LYB001 booster elicited a potent NAb response in participants previously immunized with two or three doses of ICV: titers were low or undetectable at baseline, increased considerably by day 14, peaked at day 28 after the booster, and declined moderately by day 90 after the booster. The VSV-based pseudovirus NAb GMTs (95% CI) against prototype SARS-CoV-2 at baseline were 8.5 (6.3, 11.6), 7.0 (5.…), …. The NAb GMT at 28 days after the booster was 6.9 times higher (P < 0.0001) in the I-I-30L group than in the I-I-C group, from an equivalent baseline. The spike glycoprotein-binding IgG responses exhibited a trend similar to the NAb responses, but with slower waning at 90 days after the booster (Figure S1). As shown in Table S2 and Figure 4, the LYB001 booster induced significantly higher cytokine responses to the SARS-CoV-2 RBD peptide pool in both the 30 μg LYB001 and 60 μg LYB001 groups than in the CoronaVac group (P < 0.001). In most treatment groups, a minority of participants had relatively low pre-existing cellular responses to the RBD peptide pool.
… the I-I-60L, I-I-C, and I-I-I-30L groups did not indicate a particular trend. After further analysis, we found that the higher incidence of solicited local/systemic adverse reactions with the heterologous LYB001 booster (the I-I-30L, I-I-60L, and I-I-I-30L groups) compared with the CoronaVac booster (the I-I-C group) was predominantly driven by injection-site pain, reported by 70.0% (n=21), 56.7% (n=17), and 60.0% (n=18) of participants in the I-I-30L, I-I-60L, and I-I-I-30L groups, respectively, versus 17.2% (n=5) in the I-I-C group. Possible explanations for the increased incidence of injection-site pain are: (1) the aluminum adjuvant content of LYB001 is higher than that of the inactivated vaccine, and a higher aluminum adjuvant content has previously been reported to correlate with a higher risk of pain; [22] or (2) LYB001, displaying repetitive RBDs on a VLP vector, has a larger particle size, which may prolong local recruitment of innate immune molecules and activation of antigen-presenting cells. [21] Similarly, mRNA vaccines elicit transient increases in C-reactive protein (CRP), an indicator of vaccine adjuvant activity. [23] Our results are also consistent with those for a self-assembling, two-component nanoparticle vaccine (approved in South Korea) that displays the RBD of the SARS-CoV-2 spike glycoprotein in a highly immunogenic array: injection-site pain was reported in 88.1% of participants receiving 10 μg GBP510 and 92.3% of participants receiving 25 μg GBP510 with AS03 adjuvant. [24] In a phase III trial of another coronavirus-like particle (CoVLP) vaccine, injection-site pain was reported in 85.0% of participants after the first dose of CoVLP with AS03 adjuvant versus 29.4% of participants in the placebo group. [25] The results from our study indicate that one heterologous booster dose of LYB001 can profoundly restore the NAb response irrespective of baseline antibody levels.
Although LYB001 was designed using the RBD from prototype SARS-CoV-2, it demonstrated satisfactory immunogenicity against the prototype strain and robust cross-neutralizing activity against the Omicron BA.4/5 variants, which show extensive immune escape. Neutralizing epitopes conserved between Omicron BA.4/5 and prototype SARS-CoV-2 might contribute to this cross-neutralization. The immunogenicity may also be augmented by the innovative RBD-VLP protein binding platform, which enhances B-cell activation, APC uptake and presentation, and efficient drainage to lymph nodes. The optimal orientation for maximizing neutralizing-epitope display, leading to a higher proportion of functional antibodies, is also reflected in the higher fold rise in the ratio of NAb titer to spike glycoprotein-binding antibody concentration with the LYB001 booster compared with the CoronaVac booster. Thus, the innovative design of LYB001 likely contributed to its robust immune responses.
Although a correlate of protection has not been established for predicting individual-level risk of SARS-CoV-2 infection, a spike glycoprotein-binding IgG concentration of 1148 BAU/mL (reported in binding antibody units in accordance with the WHO Standard) may provide 75% protection against symptomatic infection with BA.5, [28] suggesting promising efficacy against BA.5 after the LYB001 booster.
T-cell responses are also important in controlling disease development in patients with COVID-19, and the targeted T-cell epitopes are broadly conserved between prototype SARS-CoV-2 and Omicron. [29,30] Generally, the cellular immune response is absent, or at best weak, after a booster in healthy adults who received two doses of ICV, consistent with previous findings. [31,32] The results from this trial indicated that the LYB001 booster induced robust cellular responses to the SARS-CoV-2 RBD-specific peptide pool in the I-I-30L, I-I-60L, and I-I-I-30L groups, in contrast to the absent cellular responses in the I-I-C group. The RBD-specific IFN-γ-secreting T cells measured by ELISpot assay increased dramatically (more than 10 times versus baseline) after a single LYB001 booster, comparable to one shot of an adenovirus type-5-vectored COVID-19 vaccine (a median of about 10 IFN-γ-secreting SFCs per 1×10⁵ PBMCs), which generally elicits a robust cellular immune response. [33] A proportion of the participants in this study appeared to have pre-existing cellular responses to the RBD-specific peptide pool used for PBMC re-stimulation; such cross-reactive T-cell memory was possibly due to previous exposure to common human coronaviruses. [34,35] This study also has limitations. First, the safety and immunogenicity findings were based on a small sample size (about 30 per group), so the results should be interpreted with caution. Second, this was an open-label study in which participants were not blinded to the booster they received.
Tables and Table legends | 2023-07-18T19:02:16.761Z | 2023-07-18T00:00:00.000 | {
"year": 2023,
"sha1": "0d28bed7d310b404d41cca658b13758fd8ce904c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MedRxiv",
"pdf_hash": "0d28bed7d310b404d41cca658b13758fd8ce904c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267783292 | pes2o/s2orc | v3-fos-license | Towards effective CAIX-targeted radionuclide and checkpoint inhibition combination therapy for advanced clear cell renal cell carcinoma
Background: Immune checkpoint inhibitors (ICI) are routinely used in advanced clear cell renal cell carcinoma (ccRCC). However, a substantial group of patients does not respond to ICI therapy. Radiation is a promising approach to increase ICI response rates since it can generate anti-tumor immunity. Targeted radionuclide therapy (TRT) is a systemic radiation treatment, ideally suited for precision irradiation of metastasized cancer. Therefore, the aim of this study is to explore the potential of TRT targeting carbonic anhydrase IX (CAIX), which is overexpressed in ccRCC, using [177Lu]Lu-DOTA-hG250, combined with ICI for the treatment of ccRCC. Methods: In this study, we evaluated the therapeutic and immunological action of [177Lu]Lu-DOTA-hG250 combined with anti-PD-1/anti-CTLA-4 ICI. First, the biodistribution of [177Lu]Lu-DOTA-hG250 was investigated in BALB/cAnNRj mice bearing Renca-CAIX or CT26-CAIX tumors. Renca-CAIX and CT26-CAIX tumors are characterized by poor versus extensive T-cell infiltration and homogeneous versus heterogeneous PD-L1 expression, respectively. Tumor-absorbed radiation doses were estimated through dosimetry. Subsequently, the efficacy of [177Lu]Lu-DOTA-hG250 TRT with and without ICI was evaluated by monitoring tumor growth and survival. Therapy-induced changes in the tumor microenvironment (TME) were studied by collecting tumor tissue before and 5 or 8 days after treatment and analyzing it by immunohistochemistry, flow cytometry, and RNA profiling. Results: Biodistribution studies showed high tumor uptake of [177Lu]Lu-DOTA-hG250 in both tumor models. Dose-escalation therapy studies in Renca-CAIX tumor-bearing mice demonstrated dose-dependent anti-tumor efficacy of [177Lu]Lu-DOTA-hG250 and remarkable therapeutic synergy, including complete remissions, when a presumed subtherapeutic TRT dose (4 MBq, which had no significant efficacy as monotherapy) was combined with anti-PD-1 + anti-CTLA-4. Similar results were obtained in the CT26-CAIX model for 4 MBq [177Lu]Lu-DOTA-hG250 + anti-PD-1. Ex vivo analyses of treated tumors revealed DNA damage, T-cell infiltration, and modulated immune signaling pathways in the TME after combination treatment. Conclusions: Subtherapeutic [177Lu]Lu-DOTA-hG250 combined with ICI showed superior therapeutic outcomes and significantly altered the TME. Our results underline the importance of investigating this combination treatment for patients with advanced ccRCC in a clinical setting. Further investigations should focus on how the combination therapy can be optimally applied in the future.
Cell viability assay
Radiosensitivity to TRT was assessed with the CellTiter-Glo cell viability assay (Promega, G7570) according to the manufacturer's instructions. Cells were seeded in 96-well plates and allowed to attach overnight, treated for 24 h with 0-6 MBq/mL [177Lu]Lu-DOTA-hG250 in 200 µL culture medium, and cultured for another 8 days in fresh medium. Luminescence values were normalized to untreated cells, and the data were described using the log(inhibitor)-response variable-slope model with constrained bottom (0.0) and top (1.0) values. Differences in IC50 between cell lines were statistically tested using one-way ANOVA with Tukey's multiple comparison correction.
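A minimal Python sketch of the constrained variable-slope fit described above is shown below. The activity-viability pairs are invented placeholders rather than study data, and SciPy's curve_fit stands in for the curve-fitting routine actually used, with bottom and top fixed at 0 and 1 as in the assay description.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition(log_dose, log_ic50, hill):
    """Variable-slope inhibition model with bottom = 0 and top = 1 (constrained)."""
    return 1.0 / (1.0 + 10.0 ** ((log_dose - log_ic50) * hill))

# Hypothetical activity concentrations (MBq/mL) and normalized viabilities
dose = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 6.0])
viability = np.array([0.98, 0.95, 0.85, 0.60, 0.30, 0.12, 0.05])

# Fit on the log10(dose) axis, as in the log(inhibitor)-response formulation
popt, pcov = curve_fit(inhibition, np.log10(dose), viability, p0=[0.0, 1.0])
ic50, hill = 10.0 ** popt[0], popt[1]
print(f"IC50 ~ {ic50:.2f} MBq/mL, Hill slope ~ {hill:.2f}")
```

Constraining bottom and top removes two free parameters, which stabilizes the fit when, as here, the data are normalized so that untreated cells define 1.0 and complete kill defines 0.0.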
Animal experiments
Protocols for each experiment were registered at preclinicaltrials.eu (PCTE0000440-PCTE0000445). For each animal experiment, the number of animals injected with tumor cells was determined accounting for an expected tumor take rate of 83% for Renca-CAIX and 71% for CT26-CAIX. For the ex vivo biodistribution studies, the sample size (n=5/group, 30 total) was selected based on previous biodistribution studies [3], but 8 animals dropped out due to low tumor take, resulting in the group sizes presented in Table S3. A priori sample size calculations for the therapy studies were based on an effect size of a 75% decline in normalized area under the curve (nAUC) compared to control, a significance level of 5%, and a power of 80%. This resulted in a sample size of 10 mice per group for all therapy experiments. In the experiment with CT26-CAIX tumor-bearing mice, failed tumor cell injection causing intraperitoneal tumor growth resulted in 3 dropouts, giving the group sizes presented in Table S4. Group sizes for the ex vivo analyses were set at 5 (endpoint day 0 or 5) or 6 (endpoint day 8, to account for possible humane endpoints) mice. Nine extra mice were initially included because tumor take was higher, and fewer humane endpoints were reached, than expected. However, two mice were excluded from the analyses because of failed tumor injection and complete tumor remission, resulting in the group sizes presented in the heatmaps of Figures 6-7. Mice were assigned to groups by block randomization based on tumor volume, using a random number generator. Mice from different groups were housed together to minimize confounding effects, and cages were stored randomly. The order in which mice were treated and measured was random, as was the order of the tumor microenvironment measurements (immunohistochemistry, flow cytometry, RNA analysis). Biotechnicians, who performed all injections and measurements and determined whether an animal had to be sacrificed due to a humane endpoint, were blinded to group allocation. The investigator was blinded to group allocation during assessment of the tumor microenvironment (immunohistochemistry, flow cytometry, RNA analysis) by concealment of the animal numbers.
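The a priori power calculation can be sketched with statsmodels as below. Translating a "75% decline in nAUC" into a standardized effect size requires an assumed between-animal SD, which is not reported here, so the Cohen's d of 1.35 is an assumption chosen to reproduce the stated group size of 10; the tumor-take inflation step uses the take rates given above.

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower

# Assumption: a 75% decline in nAUC corresponds to a large standardized effect;
# d = 1.35 is a placeholder that reproduces n = 10 per group at alpha = 0.05, power = 0.80.
d = 1.35
n_per_group = ceil(TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80))
print(f"mice per therapy group: {n_per_group}")

# Inflate the number of tumor-cell injections by the expected tumor take rate
for model, take in [("Renca-CAIX", 0.83), ("CT26-CAIX", 0.71)]:
    print(f"{model}: inject {ceil(n_per_group / take)} mice to obtain {n_per_group} tumor-bearing mice")
```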
Dosimetry estimations
Time-activity curves (TACs) for tumors were fitted to a power-law function followed by a monoexponential decay in Python. Tumor growth was modeled with a monoexponential function and used to correct the bound activity (%IA/g) over time and to define a time-dependent S-value (i.e., the absorbed dose rate per unit activity), evaluated with Geant4 11.02 as previously reported [4]. The tumor-absorbed dose was calculated by numerical integration of the dose-rate curve (Figure S1). The absorbed-dose-rate error was determined by propagating uncertainties from the activity measurements and the tumor growth curve fit, which affect both the bound activities and the S-value. The difference between the areas under the dose-rate error bounds was used to calculate the uncertainty in the tumor-absorbed doses. The time-integrated activity coefficient (TIAC) for the livers was calculated according to the trapezoid integration method [5]. Normal-organ S-values for 177Lu were obtained from OLINDA/EXM 2.0 and multiplied by the obtained TIACs for each organ to calculate absorbed doses according to Medical Internal Radiation Dose (MIRD) Committee methodology [6,7].
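The TAC fitting and dose-rate integration can be illustrated with the short Python sketch below. All numbers (time points, %IA/g values, injected activity, and the S-value) are invented placeholders, and a constant S-value replaces the time-dependent, Geant4-derived S-value used in the study, so the output is illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def tac(t, a0, b, lam):
    """Power-law uptake followed by monoexponential clearance (bound activity vs. time)."""
    return a0 * t**b * np.exp(-lam * t)

# Hypothetical time points (days) and tumor-bound activity (%IA/g)
t_obs = np.array([0.25, 1.0, 2.0, 3.0, 5.0, 7.0])
act_obs = np.array([12.0, 30.0, 33.0, 28.0, 17.0, 9.0])
popt, _ = curve_fit(tac, t_obs, act_obs, p0=[20.0, 0.5, 0.3])

# Numerically integrate the dose-rate curve; a constant S-value (Gy/h per MBq)
# is assumed here for simplicity, whereas the study used a time-dependent,
# Geant4-derived S-value to account for tumor growth.
t_grid = np.linspace(0.01, 14.0, 2000)
injected_mbq, s_value = 4.0, 0.05                      # illustrative values only
bound_mbq = injected_mbq * tac(t_grid, *popt) / 100.0  # %IA -> MBq (per-gram detail omitted)
dose_rate = s_value * bound_mbq * 24.0                 # Gy/day
print(f"Tumor-absorbed dose ~ {trapezoid(dose_rate, t_grid):.1f} Gy (illustrative)")
```

The trapezoidal integration at the end mirrors the trapezoid method cited for the liver TIAC: once the fitted curve is sampled on a fine grid, the integral of dose rate over time is the absorbed dose.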
Ex vivo flow cytometry
For panel 3, cells were first stained with the AH1 dextramer according to the manufacturer's instructions. For all panels, cells were incubated with a live/dead marker in PBS for 20 min on ice and subsequently incubated for 10 min on ice with CD16/CD32 (1:800, BD, 553142) for Fc blocking. Cells were then incubated with the indicated extracellular antibodies (Table S2) for 30 min on ice. For panel 1, cells were incubated with a BV510-streptavidin secondary antibody (1:300, BD, 563261) for 15 min on ice. For all panels, cells were fixed for 30 min on ice (FoxP3/transcription factor staining buffer set, Invitrogen, 00-5523). For panel 2, cells were incubated with the FoxP3 antibody for 30 min at RT. Samples were analyzed on a FACSCanto II Flow Cytometry System (BD Biosciences), and results were analyzed with FlowJo using the specified gating strategies (Figure S2).
Figure S2. Gating strategies for all flow cytometry panels. Gating for all panels started with general gating (upper panels) followed by panel-specific gating as indicated. Gate setting for each marker was based on Fluorescence Minus One (FMO) controls.
Figure S4. In vitro radiosensitivity to TRT. Viability of cells after 24 h of treatment with 0-6 MBq/mL [177Lu]Lu-DOTA-hG250 and 8 days of additional culturing. Data represent mean ± SD of 2 independent experiments. Nonlinear regression using the log(inhibitor)-response variable-slope model was used to fit the data, and 95% confidence bands are shown (dashed lines).
Figure S6. Renca-CAIX tumor weights after dissection, before processing for flow cytometry (left panel) or immunohistochemistry and RNA expression profiling (right panel). Data represent mean + SD of all mice (dots) per group. *Missing value.
Figure S10. Flow cytometry analyses of AH1 antigen-specific CD8+ T cells in lymph nodes (upper panels) and tumors (lower panels). (A) Dot plots for AH1 dextramer-positive CD8+ T cells. (B) Graphs show the mean (lines) percentage of AH1 dextramer+ CD8+ T cells for individual mice (dots) per group.
Figure S11. Undirected global significance scores for NanoString-annotated gene sets, for pairwise comparisons of treatment groups with the control group and with the combination-with-ICI treatment groups, as determined by NanoString RNA expression analysis and obtained from the Rosalind platform.
Table S2. Antibody panels for flow cytometry.
Table S3. Statistical parameters for the radiosensitivity assays. Shown are α/β ratios derived from a linear-quadratic model (non-linear regression fit to the clonogenic survival data), the surviving fraction at 2 Gy (SF2) after EBRT, and IC50 values after TRT. Data represent mean ± SEM of 2 independent experiments, and p-values are given for comparisons between the cell lines using one-way ANOVA. | 2024-02-23T14:11:19.755Z | 2024-02-21T00:00:00.000 | {
"year": 2024,
"sha1": "6ca460d1178ce72fd9af833ba1035f340eb9843c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7150/thno.96944",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "97e34411c4713f21e17d33da9d701b2a669e3c47",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |